Oct  8 05:00:43 np0005475493 kernel: Linux version 5.14.0-620.el9.x86_64 (mockbuild@x86-05.stream.rdu2.redhat.com) (gcc (GCC) 11.5.0 20240719 (Red Hat 11.5.0-11), GNU ld version 2.35.2-67.el9) #1 SMP PREEMPT_DYNAMIC Fri Sep 26 01:13:23 UTC 2025
Oct  8 05:00:43 np0005475493 kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Oct  8 05:00:43 np0005475493 kernel: Command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-620.el9.x86_64 root=UUID=1631a6ad-43b8-436d-ae76-16fa14b94458 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Oct  8 05:00:43 np0005475493 kernel: BIOS-provided physical RAM map:
Oct  8 05:00:43 np0005475493 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Oct  8 05:00:43 np0005475493 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Oct  8 05:00:43 np0005475493 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Oct  8 05:00:43 np0005475493 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdafff] usable
Oct  8 05:00:43 np0005475493 kernel: BIOS-e820: [mem 0x00000000bffdb000-0x00000000bfffffff] reserved
Oct  8 05:00:43 np0005475493 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Oct  8 05:00:43 np0005475493 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Oct  8 05:00:43 np0005475493 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000023fffffff] usable
Oct  8 05:00:43 np0005475493 kernel: NX (Execute Disable) protection: active
Oct  8 05:00:43 np0005475493 kernel: APIC: Static calls initialized
Oct  8 05:00:43 np0005475493 kernel: SMBIOS 2.8 present.
Oct  8 05:00:43 np0005475493 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Oct  8 05:00:43 np0005475493 kernel: Hypervisor detected: KVM
Oct  8 05:00:43 np0005475493 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Oct  8 05:00:43 np0005475493 kernel: kvm-clock: using sched offset of 4135282089 cycles
Oct  8 05:00:43 np0005475493 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Oct  8 05:00:43 np0005475493 kernel: tsc: Detected 2800.000 MHz processor
Oct  8 05:00:43 np0005475493 kernel: last_pfn = 0x240000 max_arch_pfn = 0x400000000
Oct  8 05:00:43 np0005475493 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Oct  8 05:00:43 np0005475493 kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Oct  8 05:00:43 np0005475493 kernel: last_pfn = 0xbffdb max_arch_pfn = 0x400000000
Oct  8 05:00:43 np0005475493 kernel: found SMP MP-table at [mem 0x000f5ae0-0x000f5aef]
Oct  8 05:00:43 np0005475493 kernel: Using GB pages for direct mapping
Oct  8 05:00:43 np0005475493 kernel: RAMDISK: [mem 0x2d7c4000-0x32bd9fff]
Oct  8 05:00:43 np0005475493 kernel: ACPI: Early table checksum verification disabled
Oct  8 05:00:43 np0005475493 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Oct  8 05:00:43 np0005475493 kernel: ACPI: RSDT 0x00000000BFFE16BD 000030 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct  8 05:00:43 np0005475493 kernel: ACPI: FACP 0x00000000BFFE1571 000074 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct  8 05:00:43 np0005475493 kernel: ACPI: DSDT 0x00000000BFFDFC80 0018F1 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct  8 05:00:43 np0005475493 kernel: ACPI: FACS 0x00000000BFFDFC40 000040
Oct  8 05:00:43 np0005475493 kernel: ACPI: APIC 0x00000000BFFE15E5 0000B0 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct  8 05:00:43 np0005475493 kernel: ACPI: WAET 0x00000000BFFE1695 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct  8 05:00:43 np0005475493 kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1571-0xbffe15e4]
Oct  8 05:00:43 np0005475493 kernel: ACPI: Reserving DSDT table memory at [mem 0xbffdfc80-0xbffe1570]
Oct  8 05:00:43 np0005475493 kernel: ACPI: Reserving FACS table memory at [mem 0xbffdfc40-0xbffdfc7f]
Oct  8 05:00:43 np0005475493 kernel: ACPI: Reserving APIC table memory at [mem 0xbffe15e5-0xbffe1694]
Oct  8 05:00:43 np0005475493 kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1695-0xbffe16bc]
Oct  8 05:00:43 np0005475493 kernel: No NUMA configuration found
Oct  8 05:00:43 np0005475493 kernel: Faking a node at [mem 0x0000000000000000-0x000000023fffffff]
Oct  8 05:00:43 np0005475493 kernel: NODE_DATA(0) allocated [mem 0x23ffd5000-0x23fffffff]
Oct  8 05:00:43 np0005475493 kernel: crashkernel reserved: 0x00000000af000000 - 0x00000000bf000000 (256 MB)
Oct  8 05:00:43 np0005475493 kernel: Zone ranges:
Oct  8 05:00:43 np0005475493 kernel:  DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Oct  8 05:00:43 np0005475493 kernel:  DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Oct  8 05:00:43 np0005475493 kernel:  Normal   [mem 0x0000000100000000-0x000000023fffffff]
Oct  8 05:00:43 np0005475493 kernel:  Device   empty
Oct  8 05:00:43 np0005475493 kernel: Movable zone start for each node
Oct  8 05:00:43 np0005475493 kernel: Early memory node ranges
Oct  8 05:00:43 np0005475493 kernel:  node   0: [mem 0x0000000000001000-0x000000000009efff]
Oct  8 05:00:43 np0005475493 kernel:  node   0: [mem 0x0000000000100000-0x00000000bffdafff]
Oct  8 05:00:43 np0005475493 kernel:  node   0: [mem 0x0000000100000000-0x000000023fffffff]
Oct  8 05:00:43 np0005475493 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000023fffffff]
Oct  8 05:00:43 np0005475493 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Oct  8 05:00:43 np0005475493 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Oct  8 05:00:43 np0005475493 kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Oct  8 05:00:43 np0005475493 kernel: ACPI: PM-Timer IO Port: 0x608
Oct  8 05:00:43 np0005475493 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Oct  8 05:00:43 np0005475493 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Oct  8 05:00:43 np0005475493 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Oct  8 05:00:43 np0005475493 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Oct  8 05:00:43 np0005475493 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Oct  8 05:00:43 np0005475493 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Oct  8 05:00:43 np0005475493 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Oct  8 05:00:43 np0005475493 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Oct  8 05:00:43 np0005475493 kernel: TSC deadline timer available
Oct  8 05:00:43 np0005475493 kernel: CPU topo: Max. logical packages:   8
Oct  8 05:00:43 np0005475493 kernel: CPU topo: Max. logical dies:       8
Oct  8 05:00:43 np0005475493 kernel: CPU topo: Max. dies per package:   1
Oct  8 05:00:43 np0005475493 kernel: CPU topo: Max. threads per core:   1
Oct  8 05:00:43 np0005475493 kernel: CPU topo: Num. cores per package:     1
Oct  8 05:00:43 np0005475493 kernel: CPU topo: Num. threads per package:   1
Oct  8 05:00:43 np0005475493 kernel: CPU topo: Allowing 8 present CPUs plus 0 hotplug CPUs
Oct  8 05:00:43 np0005475493 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Oct  8 05:00:43 np0005475493 kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Oct  8 05:00:43 np0005475493 kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Oct  8 05:00:43 np0005475493 kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Oct  8 05:00:43 np0005475493 kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Oct  8 05:00:43 np0005475493 kernel: PM: hibernation: Registered nosave memory: [mem 0xbffdb000-0xbfffffff]
Oct  8 05:00:43 np0005475493 kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
Oct  8 05:00:43 np0005475493 kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Oct  8 05:00:43 np0005475493 kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Oct  8 05:00:43 np0005475493 kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Oct  8 05:00:43 np0005475493 kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Oct  8 05:00:43 np0005475493 kernel: Booting paravirtualized kernel on KVM
Oct  8 05:00:43 np0005475493 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Oct  8 05:00:43 np0005475493 kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
Oct  8 05:00:43 np0005475493 kernel: percpu: Embedded 64 pages/cpu s225280 r8192 d28672 u262144
Oct  8 05:00:43 np0005475493 kernel: kvm-guest: PV spinlocks disabled, no host support
Oct  8 05:00:43 np0005475493 kernel: Kernel command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-620.el9.x86_64 root=UUID=1631a6ad-43b8-436d-ae76-16fa14b94458 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Oct  8 05:00:43 np0005475493 kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-620.el9.x86_64", will be passed to user space.
Oct  8 05:00:43 np0005475493 kernel: random: crng init done
Oct  8 05:00:43 np0005475493 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Oct  8 05:00:43 np0005475493 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Oct  8 05:00:43 np0005475493 kernel: Fallback order for Node 0: 0 
Oct  8 05:00:43 np0005475493 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 2064091
Oct  8 05:00:43 np0005475493 kernel: Policy zone: Normal
Oct  8 05:00:43 np0005475493 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Oct  8 05:00:43 np0005475493 kernel: software IO TLB: area num 8.
Oct  8 05:00:43 np0005475493 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
Oct  8 05:00:43 np0005475493 kernel: ftrace: allocating 49370 entries in 193 pages
Oct  8 05:00:43 np0005475493 kernel: ftrace: allocated 193 pages with 3 groups
Oct  8 05:00:43 np0005475493 kernel: Dynamic Preempt: voluntary
Oct  8 05:00:43 np0005475493 kernel: rcu: Preemptible hierarchical RCU implementation.
Oct  8 05:00:43 np0005475493 kernel: rcu: 	RCU event tracing is enabled.
Oct  8 05:00:43 np0005475493 kernel: rcu: 	RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=8.
Oct  8 05:00:43 np0005475493 kernel: 	Trampoline variant of Tasks RCU enabled.
Oct  8 05:00:43 np0005475493 kernel: 	Rude variant of Tasks RCU enabled.
Oct  8 05:00:43 np0005475493 kernel: 	Tracing variant of Tasks RCU enabled.
Oct  8 05:00:43 np0005475493 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Oct  8 05:00:43 np0005475493 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=8
Oct  8 05:00:43 np0005475493 kernel: RCU Tasks: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Oct  8 05:00:43 np0005475493 kernel: RCU Tasks Rude: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Oct  8 05:00:43 np0005475493 kernel: RCU Tasks Trace: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Oct  8 05:00:43 np0005475493 kernel: NR_IRQS: 524544, nr_irqs: 488, preallocated irqs: 16
Oct  8 05:00:43 np0005475493 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Oct  8 05:00:43 np0005475493 kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Oct  8 05:00:43 np0005475493 kernel: Console: colour VGA+ 80x25
Oct  8 05:00:43 np0005475493 kernel: printk: console [ttyS0] enabled
Oct  8 05:00:43 np0005475493 kernel: ACPI: Core revision 20230331
Oct  8 05:00:43 np0005475493 kernel: APIC: Switch to symmetric I/O mode setup
Oct  8 05:00:43 np0005475493 kernel: x2apic enabled
Oct  8 05:00:43 np0005475493 kernel: APIC: Switched APIC routing to: physical x2apic
Oct  8 05:00:43 np0005475493 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Oct  8 05:00:43 np0005475493 kernel: Calibrating delay loop (skipped) preset value.. 5600.00 BogoMIPS (lpj=2800000)
Oct  8 05:00:43 np0005475493 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Oct  8 05:00:43 np0005475493 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Oct  8 05:00:43 np0005475493 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Oct  8 05:00:43 np0005475493 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Oct  8 05:00:43 np0005475493 kernel: Spectre V2 : Mitigation: Retpolines
Oct  8 05:00:43 np0005475493 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Oct  8 05:00:43 np0005475493 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Oct  8 05:00:43 np0005475493 kernel: RETBleed: Mitigation: untrained return thunk
Oct  8 05:00:43 np0005475493 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Oct  8 05:00:43 np0005475493 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Oct  8 05:00:43 np0005475493 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Oct  8 05:00:43 np0005475493 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Oct  8 05:00:43 np0005475493 kernel: x86/bugs: return thunk changed
Oct  8 05:00:43 np0005475493 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Oct  8 05:00:43 np0005475493 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Oct  8 05:00:43 np0005475493 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Oct  8 05:00:43 np0005475493 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Oct  8 05:00:43 np0005475493 kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Oct  8 05:00:43 np0005475493 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Oct  8 05:00:43 np0005475493 kernel: Freeing SMP alternatives memory: 40K
Oct  8 05:00:43 np0005475493 kernel: pid_max: default: 32768 minimum: 301
Oct  8 05:00:43 np0005475493 kernel: LSM: initializing lsm=lockdown,capability,landlock,yama,integrity,selinux,bpf
Oct  8 05:00:43 np0005475493 kernel: landlock: Up and running.
Oct  8 05:00:43 np0005475493 kernel: Yama: becoming mindful.
Oct  8 05:00:43 np0005475493 kernel: SELinux:  Initializing.
Oct  8 05:00:43 np0005475493 kernel: LSM support for eBPF active
Oct  8 05:00:43 np0005475493 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Oct  8 05:00:43 np0005475493 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Oct  8 05:00:43 np0005475493 kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Oct  8 05:00:43 np0005475493 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Oct  8 05:00:43 np0005475493 kernel: ... version:                0
Oct  8 05:00:43 np0005475493 kernel: ... bit width:              48
Oct  8 05:00:43 np0005475493 kernel: ... generic registers:      6
Oct  8 05:00:43 np0005475493 kernel: ... value mask:             0000ffffffffffff
Oct  8 05:00:43 np0005475493 kernel: ... max period:             00007fffffffffff
Oct  8 05:00:43 np0005475493 kernel: ... fixed-purpose events:   0
Oct  8 05:00:43 np0005475493 kernel: ... event mask:             000000000000003f
Oct  8 05:00:43 np0005475493 kernel: signal: max sigframe size: 1776
Oct  8 05:00:43 np0005475493 kernel: rcu: Hierarchical SRCU implementation.
Oct  8 05:00:43 np0005475493 kernel: rcu: 	Max phase no-delay instances is 400.
Oct  8 05:00:43 np0005475493 kernel: smp: Bringing up secondary CPUs ...
Oct  8 05:00:43 np0005475493 kernel: smpboot: x86: Booting SMP configuration:
Oct  8 05:00:43 np0005475493 kernel: .... node  #0, CPUs:      #1 #2 #3 #4 #5 #6 #7
Oct  8 05:00:43 np0005475493 kernel: smp: Brought up 1 node, 8 CPUs
Oct  8 05:00:43 np0005475493 kernel: smpboot: Total of 8 processors activated (44800.00 BogoMIPS)
Oct  8 05:00:43 np0005475493 kernel: node 0 deferred pages initialised in 25ms
Oct  8 05:00:43 np0005475493 kernel: Memory: 7765352K/8388068K available (16384K kernel code, 5784K rwdata, 13996K rodata, 4068K init, 7304K bss, 616504K reserved, 0K cma-reserved)
Oct  8 05:00:43 np0005475493 kernel: devtmpfs: initialized
Oct  8 05:00:43 np0005475493 kernel: x86/mm: Memory block size: 128MB
Oct  8 05:00:43 np0005475493 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Oct  8 05:00:43 np0005475493 kernel: futex hash table entries: 2048 (order: 5, 131072 bytes, linear)
Oct  8 05:00:43 np0005475493 kernel: pinctrl core: initialized pinctrl subsystem
Oct  8 05:00:43 np0005475493 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Oct  8 05:00:43 np0005475493 kernel: DMA: preallocated 1024 KiB GFP_KERNEL pool for atomic allocations
Oct  8 05:00:43 np0005475493 kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Oct  8 05:00:43 np0005475493 kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Oct  8 05:00:43 np0005475493 kernel: audit: initializing netlink subsys (disabled)
Oct  8 05:00:43 np0005475493 kernel: audit: type=2000 audit(1759914042.174:1): state=initialized audit_enabled=0 res=1
Oct  8 05:00:43 np0005475493 kernel: thermal_sys: Registered thermal governor 'fair_share'
Oct  8 05:00:43 np0005475493 kernel: thermal_sys: Registered thermal governor 'step_wise'
Oct  8 05:00:43 np0005475493 kernel: thermal_sys: Registered thermal governor 'user_space'
Oct  8 05:00:43 np0005475493 kernel: cpuidle: using governor menu
Oct  8 05:00:43 np0005475493 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Oct  8 05:00:43 np0005475493 kernel: PCI: Using configuration type 1 for base access
Oct  8 05:00:43 np0005475493 kernel: PCI: Using configuration type 1 for extended access
Oct  8 05:00:43 np0005475493 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Oct  8 05:00:43 np0005475493 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Oct  8 05:00:43 np0005475493 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Oct  8 05:00:43 np0005475493 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Oct  8 05:00:43 np0005475493 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Oct  8 05:00:43 np0005475493 kernel: Demotion targets for Node 0: null
Oct  8 05:00:43 np0005475493 kernel: cryptd: max_cpu_qlen set to 1000
Oct  8 05:00:43 np0005475493 kernel: ACPI: Added _OSI(Module Device)
Oct  8 05:00:43 np0005475493 kernel: ACPI: Added _OSI(Processor Device)
Oct  8 05:00:43 np0005475493 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Oct  8 05:00:43 np0005475493 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Oct  8 05:00:43 np0005475493 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Oct  8 05:00:43 np0005475493 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Oct  8 05:00:43 np0005475493 kernel: ACPI: Interpreter enabled
Oct  8 05:00:43 np0005475493 kernel: ACPI: PM: (supports S0 S3 S4 S5)
Oct  8 05:00:43 np0005475493 kernel: ACPI: Using IOAPIC for interrupt routing
Oct  8 05:00:43 np0005475493 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Oct  8 05:00:43 np0005475493 kernel: PCI: Using E820 reservations for host bridge windows
Oct  8 05:00:43 np0005475493 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Oct  8 05:00:43 np0005475493 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Oct  8 05:00:43 np0005475493 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Oct  8 05:00:43 np0005475493 kernel: acpiphp: Slot [3] registered
Oct  8 05:00:43 np0005475493 kernel: acpiphp: Slot [4] registered
Oct  8 05:00:43 np0005475493 kernel: acpiphp: Slot [5] registered
Oct  8 05:00:43 np0005475493 kernel: acpiphp: Slot [6] registered
Oct  8 05:00:43 np0005475493 kernel: acpiphp: Slot [7] registered
Oct  8 05:00:43 np0005475493 kernel: acpiphp: Slot [8] registered
Oct  8 05:00:43 np0005475493 kernel: acpiphp: Slot [9] registered
Oct  8 05:00:43 np0005475493 kernel: acpiphp: Slot [10] registered
Oct  8 05:00:43 np0005475493 kernel: acpiphp: Slot [11] registered
Oct  8 05:00:43 np0005475493 kernel: acpiphp: Slot [12] registered
Oct  8 05:00:43 np0005475493 kernel: acpiphp: Slot [13] registered
Oct  8 05:00:43 np0005475493 kernel: acpiphp: Slot [14] registered
Oct  8 05:00:43 np0005475493 kernel: acpiphp: Slot [15] registered
Oct  8 05:00:43 np0005475493 kernel: acpiphp: Slot [16] registered
Oct  8 05:00:43 np0005475493 kernel: acpiphp: Slot [17] registered
Oct  8 05:00:43 np0005475493 kernel: acpiphp: Slot [18] registered
Oct  8 05:00:43 np0005475493 kernel: acpiphp: Slot [19] registered
Oct  8 05:00:43 np0005475493 kernel: acpiphp: Slot [20] registered
Oct  8 05:00:43 np0005475493 kernel: acpiphp: Slot [21] registered
Oct  8 05:00:43 np0005475493 kernel: acpiphp: Slot [22] registered
Oct  8 05:00:43 np0005475493 kernel: acpiphp: Slot [23] registered
Oct  8 05:00:43 np0005475493 kernel: acpiphp: Slot [24] registered
Oct  8 05:00:43 np0005475493 kernel: acpiphp: Slot [25] registered
Oct  8 05:00:43 np0005475493 kernel: acpiphp: Slot [26] registered
Oct  8 05:00:43 np0005475493 kernel: acpiphp: Slot [27] registered
Oct  8 05:00:43 np0005475493 kernel: acpiphp: Slot [28] registered
Oct  8 05:00:43 np0005475493 kernel: acpiphp: Slot [29] registered
Oct  8 05:00:43 np0005475493 kernel: acpiphp: Slot [30] registered
Oct  8 05:00:43 np0005475493 kernel: acpiphp: Slot [31] registered
Oct  8 05:00:43 np0005475493 kernel: PCI host bridge to bus 0000:00
Oct  8 05:00:43 np0005475493 kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Oct  8 05:00:43 np0005475493 kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Oct  8 05:00:43 np0005475493 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Oct  8 05:00:43 np0005475493 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Oct  8 05:00:43 np0005475493 kernel: pci_bus 0000:00: root bus resource [mem 0x240000000-0x2bfffffff window]
Oct  8 05:00:43 np0005475493 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Oct  8 05:00:43 np0005475493 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Oct  8 05:00:43 np0005475493 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Oct  8 05:00:43 np0005475493 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Oct  8 05:00:43 np0005475493 kernel: pci 0000:00:01.1: BAR 4 [io  0xc140-0xc14f]
Oct  8 05:00:43 np0005475493 kernel: pci 0000:00:01.1: BAR 0 [io  0x01f0-0x01f7]: legacy IDE quirk
Oct  8 05:00:43 np0005475493 kernel: pci 0000:00:01.1: BAR 1 [io  0x03f6]: legacy IDE quirk
Oct  8 05:00:43 np0005475493 kernel: pci 0000:00:01.1: BAR 2 [io  0x0170-0x0177]: legacy IDE quirk
Oct  8 05:00:43 np0005475493 kernel: pci 0000:00:01.1: BAR 3 [io  0x0376]: legacy IDE quirk
Oct  8 05:00:43 np0005475493 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Oct  8 05:00:43 np0005475493 kernel: pci 0000:00:01.2: BAR 4 [io  0xc100-0xc11f]
Oct  8 05:00:43 np0005475493 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Oct  8 05:00:43 np0005475493 kernel: pci 0000:00:01.3: quirk: [io  0x0600-0x063f] claimed by PIIX4 ACPI
Oct  8 05:00:43 np0005475493 kernel: pci 0000:00:01.3: quirk: [io  0x0700-0x070f] claimed by PIIX4 SMB
Oct  8 05:00:43 np0005475493 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Oct  8 05:00:43 np0005475493 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Oct  8 05:00:43 np0005475493 kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
Oct  8 05:00:43 np0005475493 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfeb90000-0xfeb90fff]
Oct  8 05:00:43 np0005475493 kernel: pci 0000:00:02.0: ROM [mem 0xfeb80000-0xfeb8ffff pref]
Oct  8 05:00:43 np0005475493 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Oct  8 05:00:43 np0005475493 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Oct  8 05:00:43 np0005475493 kernel: pci 0000:00:03.0: BAR 0 [io  0xc080-0xc0bf]
Oct  8 05:00:43 np0005475493 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfeb91000-0xfeb91fff]
Oct  8 05:00:43 np0005475493 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
Oct  8 05:00:43 np0005475493 kernel: pci 0000:00:03.0: ROM [mem 0xfeb00000-0xfeb7ffff pref]
Oct  8 05:00:43 np0005475493 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Oct  8 05:00:43 np0005475493 kernel: pci 0000:00:04.0: BAR 0 [io  0xc000-0xc07f]
Oct  8 05:00:43 np0005475493 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfeb92000-0xfeb92fff]
Oct  8 05:00:43 np0005475493 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
Oct  8 05:00:43 np0005475493 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Oct  8 05:00:43 np0005475493 kernel: pci 0000:00:05.0: BAR 0 [io  0xc0c0-0xc0ff]
Oct  8 05:00:43 np0005475493 kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
Oct  8 05:00:43 np0005475493 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Oct  8 05:00:43 np0005475493 kernel: pci 0000:00:06.0: BAR 0 [io  0xc120-0xc13f]
Oct  8 05:00:43 np0005475493 kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
Oct  8 05:00:43 np0005475493 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Oct  8 05:00:43 np0005475493 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Oct  8 05:00:43 np0005475493 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Oct  8 05:00:43 np0005475493 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Oct  8 05:00:43 np0005475493 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Oct  8 05:00:43 np0005475493 kernel: iommu: Default domain type: Translated
Oct  8 05:00:43 np0005475493 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Oct  8 05:00:43 np0005475493 kernel: SCSI subsystem initialized
Oct  8 05:00:43 np0005475493 kernel: ACPI: bus type USB registered
Oct  8 05:00:43 np0005475493 kernel: usbcore: registered new interface driver usbfs
Oct  8 05:00:43 np0005475493 kernel: usbcore: registered new interface driver hub
Oct  8 05:00:43 np0005475493 kernel: usbcore: registered new device driver usb
Oct  8 05:00:43 np0005475493 kernel: pps_core: LinuxPPS API ver. 1 registered
Oct  8 05:00:43 np0005475493 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Oct  8 05:00:43 np0005475493 kernel: PTP clock support registered
Oct  8 05:00:43 np0005475493 kernel: EDAC MC: Ver: 3.0.0
Oct  8 05:00:43 np0005475493 kernel: NetLabel: Initializing
Oct  8 05:00:43 np0005475493 kernel: NetLabel:  domain hash size = 128
Oct  8 05:00:43 np0005475493 kernel: NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Oct  8 05:00:43 np0005475493 kernel: NetLabel:  unlabeled traffic allowed by default
Oct  8 05:00:43 np0005475493 kernel: PCI: Using ACPI for IRQ routing
Oct  8 05:00:43 np0005475493 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Oct  8 05:00:43 np0005475493 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Oct  8 05:00:43 np0005475493 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Oct  8 05:00:43 np0005475493 kernel: vgaarb: loaded
Oct  8 05:00:43 np0005475493 kernel: clocksource: Switched to clocksource kvm-clock
Oct  8 05:00:43 np0005475493 kernel: VFS: Disk quotas dquot_6.6.0
Oct  8 05:00:43 np0005475493 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Oct  8 05:00:43 np0005475493 kernel: pnp: PnP ACPI init
Oct  8 05:00:43 np0005475493 kernel: pnp: PnP ACPI: found 5 devices
Oct  8 05:00:43 np0005475493 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Oct  8 05:00:43 np0005475493 kernel: NET: Registered PF_INET protocol family
Oct  8 05:00:43 np0005475493 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Oct  8 05:00:43 np0005475493 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Oct  8 05:00:43 np0005475493 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Oct  8 05:00:43 np0005475493 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Oct  8 05:00:43 np0005475493 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Oct  8 05:00:43 np0005475493 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Oct  8 05:00:43 np0005475493 kernel: MPTCP token hash table entries: 8192 (order: 5, 196608 bytes, linear)
Oct  8 05:00:43 np0005475493 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Oct  8 05:00:43 np0005475493 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Oct  8 05:00:43 np0005475493 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Oct  8 05:00:43 np0005475493 kernel: NET: Registered PF_XDP protocol family
Oct  8 05:00:43 np0005475493 kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Oct  8 05:00:43 np0005475493 kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Oct  8 05:00:43 np0005475493 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Oct  8 05:00:43 np0005475493 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Oct  8 05:00:43 np0005475493 kernel: pci_bus 0000:00: resource 8 [mem 0x240000000-0x2bfffffff window]
Oct  8 05:00:43 np0005475493 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Oct  8 05:00:43 np0005475493 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Oct  8 05:00:43 np0005475493 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Oct  8 05:00:43 np0005475493 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x140 took 102619 usecs
Oct  8 05:00:43 np0005475493 kernel: PCI: CLS 0 bytes, default 64
Oct  8 05:00:43 np0005475493 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Oct  8 05:00:43 np0005475493 kernel: software IO TLB: mapped [mem 0x00000000ab000000-0x00000000af000000] (64MB)
Oct  8 05:00:43 np0005475493 kernel: ACPI: bus type thunderbolt registered
Oct  8 05:00:43 np0005475493 kernel: Trying to unpack rootfs image as initramfs...
Oct  8 05:00:43 np0005475493 kernel: Initialise system trusted keyrings
Oct  8 05:00:43 np0005475493 kernel: Key type blacklist registered
Oct  8 05:00:43 np0005475493 kernel: workingset: timestamp_bits=36 max_order=21 bucket_order=0
Oct  8 05:00:43 np0005475493 kernel: zbud: loaded
Oct  8 05:00:43 np0005475493 kernel: integrity: Platform Keyring initialized
Oct  8 05:00:43 np0005475493 kernel: integrity: Machine keyring initialized
Oct  8 05:00:43 np0005475493 kernel: Freeing initrd memory: 86104K
Oct  8 05:00:43 np0005475493 kernel: NET: Registered PF_ALG protocol family
Oct  8 05:00:43 np0005475493 kernel: xor: automatically using best checksumming function   avx
Oct  8 05:00:43 np0005475493 kernel: Key type asymmetric registered
Oct  8 05:00:43 np0005475493 kernel: Asymmetric key parser 'x509' registered
Oct  8 05:00:43 np0005475493 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Oct  8 05:00:43 np0005475493 kernel: io scheduler mq-deadline registered
Oct  8 05:00:43 np0005475493 kernel: io scheduler kyber registered
Oct  8 05:00:43 np0005475493 kernel: io scheduler bfq registered
Oct  8 05:00:43 np0005475493 kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Oct  8 05:00:43 np0005475493 kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Oct  8 05:00:43 np0005475493 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Oct  8 05:00:43 np0005475493 kernel: ACPI: button: Power Button [PWRF]
Oct  8 05:00:43 np0005475493 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Oct  8 05:00:43 np0005475493 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Oct  8 05:00:43 np0005475493 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Oct  8 05:00:43 np0005475493 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Oct  8 05:00:43 np0005475493 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Oct  8 05:00:43 np0005475493 kernel: Non-volatile memory driver v1.3
Oct  8 05:00:43 np0005475493 kernel: rdac: device handler registered
Oct  8 05:00:43 np0005475493 kernel: hp_sw: device handler registered
Oct  8 05:00:43 np0005475493 kernel: emc: device handler registered
Oct  8 05:00:43 np0005475493 kernel: alua: device handler registered
Oct  8 05:00:43 np0005475493 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Oct  8 05:00:43 np0005475493 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Oct  8 05:00:43 np0005475493 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Oct  8 05:00:43 np0005475493 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c100
Oct  8 05:00:43 np0005475493 kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Oct  8 05:00:43 np0005475493 kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Oct  8 05:00:43 np0005475493 kernel: usb usb1: Product: UHCI Host Controller
Oct  8 05:00:43 np0005475493 kernel: usb usb1: Manufacturer: Linux 5.14.0-620.el9.x86_64 uhci_hcd
Oct  8 05:00:43 np0005475493 kernel: usb usb1: SerialNumber: 0000:00:01.2
Oct  8 05:00:43 np0005475493 kernel: hub 1-0:1.0: USB hub found
Oct  8 05:00:43 np0005475493 kernel: hub 1-0:1.0: 2 ports detected
Oct  8 05:00:43 np0005475493 kernel: usbcore: registered new interface driver usbserial_generic
Oct  8 05:00:43 np0005475493 kernel: usbserial: USB Serial support registered for generic
Oct  8 05:00:43 np0005475493 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Oct  8 05:00:43 np0005475493 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Oct  8 05:00:43 np0005475493 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Oct  8 05:00:43 np0005475493 kernel: mousedev: PS/2 mouse device common for all mice
Oct  8 05:00:43 np0005475493 kernel: rtc_cmos 00:04: RTC can wake from S4
Oct  8 05:00:43 np0005475493 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Oct  8 05:00:43 np0005475493 kernel: rtc_cmos 00:04: registered as rtc0
Oct  8 05:00:43 np0005475493 kernel: rtc_cmos 00:04: setting system clock to 2025-10-08T09:00:42 UTC (1759914042)
Oct  8 05:00:43 np0005475493 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Oct  8 05:00:43 np0005475493 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Oct  8 05:00:43 np0005475493 kernel: hid: raw HID events driver (C) Jiri Kosina
Oct  8 05:00:43 np0005475493 kernel: usbcore: registered new interface driver usbhid
Oct  8 05:00:43 np0005475493 kernel: usbhid: USB HID core driver
Oct  8 05:00:43 np0005475493 kernel: drop_monitor: Initializing network drop monitor service
Oct  8 05:00:43 np0005475493 kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Oct  8 05:00:43 np0005475493 kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Oct  8 05:00:43 np0005475493 kernel: Initializing XFRM netlink socket
Oct  8 05:00:43 np0005475493 kernel: NET: Registered PF_INET6 protocol family
Oct  8 05:00:43 np0005475493 kernel: Segment Routing with IPv6
Oct  8 05:00:43 np0005475493 kernel: NET: Registered PF_PACKET protocol family
Oct  8 05:00:43 np0005475493 kernel: mpls_gso: MPLS GSO support
Oct  8 05:00:43 np0005475493 kernel: IPI shorthand broadcast: enabled
Oct  8 05:00:43 np0005475493 kernel: AVX2 version of gcm_enc/dec engaged.
Oct  8 05:00:43 np0005475493 kernel: AES CTR mode by8 optimization enabled
Oct  8 05:00:43 np0005475493 kernel: sched_clock: Marking stable (1241003392, 141840199)->(1497292525, -114448934)
Oct  8 05:00:43 np0005475493 kernel: registered taskstats version 1
Oct  8 05:00:43 np0005475493 kernel: Loading compiled-in X.509 certificates
Oct  8 05:00:43 np0005475493 kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 4ff821c4997fbb659836adb05f5bc400c914e148'
Oct  8 05:00:43 np0005475493 kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Oct  8 05:00:43 np0005475493 kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Oct  8 05:00:43 np0005475493 kernel: Loaded X.509 cert 'RH-IMA-CA: Red Hat IMA CA: fb31825dd0e073685b264e3038963673f753959a'
Oct  8 05:00:43 np0005475493 kernel: Loaded X.509 cert 'Nvidia GPU OOT signing 001: 55e1cef88193e60419f0b0ec379c49f77545acf0'
Oct  8 05:00:43 np0005475493 kernel: Demotion targets for Node 0: null
Oct  8 05:00:43 np0005475493 kernel: page_owner is disabled
Oct  8 05:00:43 np0005475493 kernel: Key type .fscrypt registered
Oct  8 05:00:43 np0005475493 kernel: Key type fscrypt-provisioning registered
Oct  8 05:00:43 np0005475493 kernel: Key type big_key registered
Oct  8 05:00:43 np0005475493 kernel: Key type encrypted registered
Oct  8 05:00:43 np0005475493 kernel: ima: No TPM chip found, activating TPM-bypass!
Oct  8 05:00:43 np0005475493 kernel: Loading compiled-in module X.509 certificates
Oct  8 05:00:43 np0005475493 kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 4ff821c4997fbb659836adb05f5bc400c914e148'
Oct  8 05:00:43 np0005475493 kernel: ima: Allocated hash algorithm: sha256
Oct  8 05:00:43 np0005475493 kernel: ima: No architecture policies found
Oct  8 05:00:43 np0005475493 kernel: evm: Initialising EVM extended attributes:
Oct  8 05:00:43 np0005475493 kernel: evm: security.selinux
Oct  8 05:00:43 np0005475493 kernel: evm: security.SMACK64 (disabled)
Oct  8 05:00:43 np0005475493 kernel: evm: security.SMACK64EXEC (disabled)
Oct  8 05:00:43 np0005475493 kernel: evm: security.SMACK64TRANSMUTE (disabled)
Oct  8 05:00:43 np0005475493 kernel: evm: security.SMACK64MMAP (disabled)
Oct  8 05:00:43 np0005475493 kernel: evm: security.apparmor (disabled)
Oct  8 05:00:43 np0005475493 kernel: evm: security.ima
Oct  8 05:00:43 np0005475493 kernel: evm: security.capability
Oct  8 05:00:43 np0005475493 kernel: evm: HMAC attrs: 0x1
Oct  8 05:00:43 np0005475493 kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Oct  8 05:00:43 np0005475493 kernel: Running certificate verification RSA selftest
Oct  8 05:00:43 np0005475493 kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Oct  8 05:00:43 np0005475493 kernel: Running certificate verification ECDSA selftest
Oct  8 05:00:43 np0005475493 kernel: Loaded X.509 cert 'Certificate verification ECDSA self-testing key: 2900bcea1deb7bc8479a84a23d758efdfdd2b2d3'
Oct  8 05:00:43 np0005475493 kernel: clk: Disabling unused clocks
Oct  8 05:00:43 np0005475493 kernel: Freeing unused decrypted memory: 2028K
Oct  8 05:00:43 np0005475493 kernel: Freeing unused kernel image (initmem) memory: 4068K
Oct  8 05:00:43 np0005475493 kernel: Write protecting the kernel read-only data: 30720k
Oct  8 05:00:43 np0005475493 kernel: Freeing unused kernel image (rodata/data gap) memory: 340K
Oct  8 05:00:43 np0005475493 kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Oct  8 05:00:43 np0005475493 kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Oct  8 05:00:43 np0005475493 kernel: usb 1-1: Product: QEMU USB Tablet
Oct  8 05:00:43 np0005475493 kernel: usb 1-1: Manufacturer: QEMU
Oct  8 05:00:43 np0005475493 kernel: usb 1-1: SerialNumber: 28754-0000:00:01.2-1
Oct  8 05:00:43 np0005475493 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.2/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Oct  8 05:00:43 np0005475493 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:00:01.2-1/input0
Oct  8 05:00:43 np0005475493 kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Oct  8 05:00:43 np0005475493 kernel: Run /init as init process
Oct  8 05:00:43 np0005475493 systemd: systemd 252-55.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Oct  8 05:00:43 np0005475493 systemd: Detected virtualization kvm.
Oct  8 05:00:43 np0005475493 systemd: Detected architecture x86-64.
Oct  8 05:00:43 np0005475493 systemd: Running in initrd.
Oct  8 05:00:43 np0005475493 systemd: No hostname configured, using default hostname.
Oct  8 05:00:43 np0005475493 systemd: Hostname set to <localhost>.
Oct  8 05:00:43 np0005475493 systemd: Initializing machine ID from VM UUID.
Oct  8 05:00:43 np0005475493 systemd: Queued start job for default target Initrd Default Target.
Oct  8 05:00:43 np0005475493 systemd: Started Dispatch Password Requests to Console Directory Watch.
Oct  8 05:00:43 np0005475493 systemd: Reached target Local Encrypted Volumes.
Oct  8 05:00:43 np0005475493 systemd: Reached target Initrd /usr File System.
Oct  8 05:00:43 np0005475493 systemd: Reached target Local File Systems.
Oct  8 05:00:43 np0005475493 systemd: Reached target Path Units.
Oct  8 05:00:43 np0005475493 systemd: Reached target Slice Units.
Oct  8 05:00:43 np0005475493 systemd: Reached target Swaps.
Oct  8 05:00:43 np0005475493 systemd: Reached target Timer Units.
Oct  8 05:00:43 np0005475493 systemd: Listening on D-Bus System Message Bus Socket.
Oct  8 05:00:43 np0005475493 systemd: Listening on Journal Socket (/dev/log).
Oct  8 05:00:43 np0005475493 systemd: Listening on Journal Socket.
Oct  8 05:00:43 np0005475493 systemd: Listening on udev Control Socket.
Oct  8 05:00:43 np0005475493 systemd: Listening on udev Kernel Socket.
Oct  8 05:00:43 np0005475493 systemd: Reached target Socket Units.
Oct  8 05:00:43 np0005475493 systemd: Starting Create List of Static Device Nodes...
Oct  8 05:00:43 np0005475493 systemd: Starting Journal Service...
Oct  8 05:00:43 np0005475493 systemd: Load Kernel Modules was skipped because no trigger condition checks were met.
Oct  8 05:00:43 np0005475493 systemd: Starting Apply Kernel Variables...
Oct  8 05:00:43 np0005475493 systemd: Starting Create System Users...
Oct  8 05:00:43 np0005475493 systemd: Starting Setup Virtual Console...
Oct  8 05:00:43 np0005475493 systemd: Finished Create List of Static Device Nodes.
Oct  8 05:00:43 np0005475493 systemd: Finished Apply Kernel Variables.
Oct  8 05:00:43 np0005475493 systemd: Finished Create System Users.
Oct  8 05:00:43 np0005475493 systemd-journald[308]: Journal started
Oct  8 05:00:43 np0005475493 systemd-journald[308]: Runtime Journal (/run/log/journal/a1287f1c59814c2ea0ce6a9c84016045) is 8.0M, max 153.5M, 145.5M free.
Oct  8 05:00:43 np0005475493 systemd-sysusers[312]: Creating group 'users' with GID 100.
Oct  8 05:00:43 np0005475493 systemd-sysusers[312]: Creating group 'dbus' with GID 81.
Oct  8 05:00:43 np0005475493 systemd-sysusers[312]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Oct  8 05:00:43 np0005475493 systemd: Started Journal Service.
Oct  8 05:00:43 np0005475493 systemd[1]: Starting Create Static Device Nodes in /dev...
Oct  8 05:00:43 np0005475493 systemd[1]: Starting Create Volatile Files and Directories...
Oct  8 05:00:43 np0005475493 systemd[1]: Finished Create Static Device Nodes in /dev.
Oct  8 05:00:43 np0005475493 systemd[1]: Finished Create Volatile Files and Directories.
Oct  8 05:00:43 np0005475493 systemd[1]: Finished Setup Virtual Console.
Oct  8 05:00:43 np0005475493 systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Oct  8 05:00:43 np0005475493 systemd[1]: Starting dracut cmdline hook...
Oct  8 05:00:43 np0005475493 dracut-cmdline[329]: dracut-9 dracut-057-102.git20250818.el9
Oct  8 05:00:43 np0005475493 dracut-cmdline[329]: Using kernel command line parameters:    BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-620.el9.x86_64 root=UUID=1631a6ad-43b8-436d-ae76-16fa14b94458 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Oct  8 05:00:43 np0005475493 systemd[1]: Finished dracut cmdline hook.
Oct  8 05:00:43 np0005475493 systemd[1]: Starting dracut pre-udev hook...
Oct  8 05:00:43 np0005475493 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Oct  8 05:00:43 np0005475493 kernel: device-mapper: uevent: version 1.0.3
Oct  8 05:00:43 np0005475493 kernel: device-mapper: ioctl: 4.50.0-ioctl (2025-04-28) initialised: dm-devel@lists.linux.dev
Oct  8 05:00:43 np0005475493 kernel: RPC: Registered named UNIX socket transport module.
Oct  8 05:00:43 np0005475493 kernel: RPC: Registered udp transport module.
Oct  8 05:00:43 np0005475493 kernel: RPC: Registered tcp transport module.
Oct  8 05:00:43 np0005475493 kernel: RPC: Registered tcp-with-tls transport module.
Oct  8 05:00:43 np0005475493 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Oct  8 05:00:43 np0005475493 rpc.statd[446]: Version 2.5.4 starting
Oct  8 05:00:43 np0005475493 rpc.statd[446]: Initializing NSM state
Oct  8 05:00:43 np0005475493 rpc.idmapd[451]: Setting log level to 0
Oct  8 05:00:43 np0005475493 systemd[1]: Finished dracut pre-udev hook.
Oct  8 05:00:43 np0005475493 systemd[1]: Starting Rule-based Manager for Device Events and Files...
Oct  8 05:00:43 np0005475493 systemd-udevd[464]: Using default interface naming scheme 'rhel-9.0'.
Oct  8 05:00:43 np0005475493 systemd[1]: Started Rule-based Manager for Device Events and Files.
Oct  8 05:00:43 np0005475493 systemd[1]: Starting dracut pre-trigger hook...
Oct  8 05:00:43 np0005475493 systemd[1]: Finished dracut pre-trigger hook.
Oct  8 05:00:44 np0005475493 systemd[1]: Starting Coldplug All udev Devices...
Oct  8 05:00:44 np0005475493 systemd[1]: Created slice Slice /system/modprobe.
Oct  8 05:00:44 np0005475493 systemd[1]: Starting Load Kernel Module configfs...
Oct  8 05:00:44 np0005475493 systemd[1]: Finished Coldplug All udev Devices.
Oct  8 05:00:44 np0005475493 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct  8 05:00:44 np0005475493 systemd[1]: Finished Load Kernel Module configfs.
Oct  8 05:00:44 np0005475493 systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Oct  8 05:00:44 np0005475493 systemd[1]: Reached target Network.
Oct  8 05:00:44 np0005475493 systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Oct  8 05:00:44 np0005475493 systemd[1]: Starting dracut initqueue hook...
Oct  8 05:00:44 np0005475493 kernel: virtio_blk virtio2: 8/0/0 default/read/poll queues
Oct  8 05:00:44 np0005475493 kernel: virtio_blk virtio2: [vda] 167772160 512-byte logical blocks (85.9 GB/80.0 GiB)
Oct  8 05:00:44 np0005475493 kernel: vda: vda1
Oct  8 05:00:44 np0005475493 systemd-udevd[486]: Network interface NamePolicy= disabled on kernel command line.
Oct  8 05:00:44 np0005475493 kernel: scsi host0: ata_piix
Oct  8 05:00:44 np0005475493 kernel: scsi host1: ata_piix
Oct  8 05:00:44 np0005475493 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc140 irq 14 lpm-pol 0
Oct  8 05:00:44 np0005475493 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc148 irq 15 lpm-pol 0
Oct  8 05:00:44 np0005475493 systemd[1]: Mounting Kernel Configuration File System...
Oct  8 05:00:44 np0005475493 systemd[1]: Mounted Kernel Configuration File System.
Oct  8 05:00:44 np0005475493 systemd[1]: Found device /dev/disk/by-uuid/1631a6ad-43b8-436d-ae76-16fa14b94458.
Oct  8 05:00:44 np0005475493 systemd[1]: Reached target Initrd Root Device.
Oct  8 05:00:44 np0005475493 systemd[1]: Reached target System Initialization.
Oct  8 05:00:44 np0005475493 systemd[1]: Reached target Basic System.
Oct  8 05:00:44 np0005475493 kernel: ata1: found unknown device (class 0)
Oct  8 05:00:44 np0005475493 kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Oct  8 05:00:44 np0005475493 kernel: scsi 0:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Oct  8 05:00:44 np0005475493 kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Oct  8 05:00:44 np0005475493 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Oct  8 05:00:44 np0005475493 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Oct  8 05:00:44 np0005475493 systemd[1]: Finished dracut initqueue hook.
Oct  8 05:00:44 np0005475493 systemd[1]: Reached target Preparation for Remote File Systems.
Oct  8 05:00:44 np0005475493 systemd[1]: Reached target Remote Encrypted Volumes.
Oct  8 05:00:44 np0005475493 systemd[1]: Reached target Remote File Systems.
Oct  8 05:00:44 np0005475493 systemd[1]: Starting dracut pre-mount hook...
Oct  8 05:00:44 np0005475493 systemd[1]: Finished dracut pre-mount hook.
Oct  8 05:00:44 np0005475493 systemd[1]: Starting File System Check on /dev/disk/by-uuid/1631a6ad-43b8-436d-ae76-16fa14b94458...
Oct  8 05:00:44 np0005475493 systemd-fsck[558]: /usr/sbin/fsck.xfs: XFS file system.
Oct  8 05:00:44 np0005475493 systemd[1]: Finished File System Check on /dev/disk/by-uuid/1631a6ad-43b8-436d-ae76-16fa14b94458.
Oct  8 05:00:44 np0005475493 systemd[1]: Mounting /sysroot...
Oct  8 05:00:45 np0005475493 kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Oct  8 05:00:45 np0005475493 kernel: XFS (vda1): Mounting V5 Filesystem 1631a6ad-43b8-436d-ae76-16fa14b94458
Oct  8 05:00:45 np0005475493 kernel: XFS (vda1): Ending clean mount
Oct  8 05:00:45 np0005475493 systemd[1]: Mounted /sysroot.
Oct  8 05:00:45 np0005475493 systemd[1]: Reached target Initrd Root File System.
Oct  8 05:00:45 np0005475493 systemd[1]: Starting Mountpoints Configured in the Real Root...
Oct  8 05:00:45 np0005475493 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Oct  8 05:00:45 np0005475493 systemd[1]: Finished Mountpoints Configured in the Real Root.
Oct  8 05:00:45 np0005475493 systemd[1]: Reached target Initrd File Systems.
Oct  8 05:00:45 np0005475493 systemd[1]: Reached target Initrd Default Target.
Oct  8 05:00:45 np0005475493 systemd[1]: Starting dracut mount hook...
Oct  8 05:00:45 np0005475493 systemd[1]: Finished dracut mount hook.
Oct  8 05:00:45 np0005475493 systemd[1]: Starting dracut pre-pivot and cleanup hook...
Oct  8 05:00:45 np0005475493 rpc.idmapd[451]: exiting on signal 15
Oct  8 05:00:45 np0005475493 systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Oct  8 05:00:45 np0005475493 systemd[1]: Finished dracut pre-pivot and cleanup hook.
Oct  8 05:00:45 np0005475493 systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Oct  8 05:00:45 np0005475493 systemd[1]: Stopped target Network.
Oct  8 05:00:45 np0005475493 systemd[1]: Stopped target Remote Encrypted Volumes.
Oct  8 05:00:45 np0005475493 systemd[1]: Stopped target Timer Units.
Oct  8 05:00:45 np0005475493 systemd[1]: dbus.socket: Deactivated successfully.
Oct  8 05:00:45 np0005475493 systemd[1]: Closed D-Bus System Message Bus Socket.
Oct  8 05:00:45 np0005475493 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Oct  8 05:00:45 np0005475493 systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Oct  8 05:00:45 np0005475493 systemd[1]: Stopped target Initrd Default Target.
Oct  8 05:00:45 np0005475493 systemd[1]: Stopped target Basic System.
Oct  8 05:00:45 np0005475493 systemd[1]: Stopped target Initrd Root Device.
Oct  8 05:00:45 np0005475493 systemd[1]: Stopped target Initrd /usr File System.
Oct  8 05:00:45 np0005475493 systemd[1]: Stopped target Path Units.
Oct  8 05:00:45 np0005475493 systemd[1]: Stopped target Remote File Systems.
Oct  8 05:00:45 np0005475493 systemd[1]: Stopped target Preparation for Remote File Systems.
Oct  8 05:00:45 np0005475493 systemd[1]: Stopped target Slice Units.
Oct  8 05:00:45 np0005475493 systemd[1]: Stopped target Socket Units.
Oct  8 05:00:45 np0005475493 systemd[1]: Stopped target System Initialization.
Oct  8 05:00:45 np0005475493 systemd[1]: Stopped target Local File Systems.
Oct  8 05:00:45 np0005475493 systemd[1]: Stopped target Swaps.
Oct  8 05:00:45 np0005475493 systemd[1]: dracut-mount.service: Deactivated successfully.
Oct  8 05:00:45 np0005475493 systemd[1]: Stopped dracut mount hook.
Oct  8 05:00:45 np0005475493 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Oct  8 05:00:45 np0005475493 systemd[1]: Stopped dracut pre-mount hook.
Oct  8 05:00:45 np0005475493 systemd[1]: Stopped target Local Encrypted Volumes.
Oct  8 05:00:45 np0005475493 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Oct  8 05:00:45 np0005475493 systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Oct  8 05:00:45 np0005475493 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Oct  8 05:00:45 np0005475493 systemd[1]: Stopped dracut initqueue hook.
Oct  8 05:00:45 np0005475493 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Oct  8 05:00:45 np0005475493 systemd[1]: Stopped Apply Kernel Variables.
Oct  8 05:00:45 np0005475493 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Oct  8 05:00:45 np0005475493 systemd[1]: Stopped Create Volatile Files and Directories.
Oct  8 05:00:45 np0005475493 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Oct  8 05:00:45 np0005475493 systemd[1]: Stopped Coldplug All udev Devices.
Oct  8 05:00:45 np0005475493 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Oct  8 05:00:45 np0005475493 systemd[1]: Stopped dracut pre-trigger hook.
Oct  8 05:00:45 np0005475493 systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Oct  8 05:00:45 np0005475493 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct  8 05:00:45 np0005475493 systemd[1]: Stopped Setup Virtual Console.
Oct  8 05:00:45 np0005475493 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Oct  8 05:00:45 np0005475493 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Oct  8 05:00:45 np0005475493 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Oct  8 05:00:45 np0005475493 systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Oct  8 05:00:45 np0005475493 systemd[1]: systemd-udevd.service: Deactivated successfully.
Oct  8 05:00:45 np0005475493 systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Oct  8 05:00:45 np0005475493 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Oct  8 05:00:45 np0005475493 systemd[1]: Closed udev Control Socket.
Oct  8 05:00:45 np0005475493 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Oct  8 05:00:45 np0005475493 systemd[1]: Closed udev Kernel Socket.
Oct  8 05:00:45 np0005475493 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Oct  8 05:00:45 np0005475493 systemd[1]: Stopped dracut pre-udev hook.
Oct  8 05:00:45 np0005475493 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Oct  8 05:00:45 np0005475493 systemd[1]: Stopped dracut cmdline hook.
Oct  8 05:00:45 np0005475493 systemd[1]: Starting Cleanup udev Database...
Oct  8 05:00:45 np0005475493 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Oct  8 05:00:45 np0005475493 systemd[1]: Stopped Create Static Device Nodes in /dev.
Oct  8 05:00:45 np0005475493 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Oct  8 05:00:45 np0005475493 systemd[1]: Stopped Create List of Static Device Nodes.
Oct  8 05:00:45 np0005475493 systemd[1]: systemd-sysusers.service: Deactivated successfully.
Oct  8 05:00:45 np0005475493 systemd[1]: Stopped Create System Users.
Oct  8 05:00:45 np0005475493 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Oct  8 05:00:45 np0005475493 systemd[1]: run-credentials-systemd\x2dsysusers.service.mount: Deactivated successfully.
Oct  8 05:00:45 np0005475493 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Oct  8 05:00:45 np0005475493 systemd[1]: Finished Cleanup udev Database.
Oct  8 05:00:45 np0005475493 systemd[1]: Reached target Switch Root.
Oct  8 05:00:45 np0005475493 systemd[1]: Starting Switch Root...
Oct  8 05:00:45 np0005475493 systemd[1]: Switching root.
Oct  8 05:00:45 np0005475493 systemd-journald[308]: Journal stopped
Oct  8 05:00:46 np0005475493 systemd-journald: Received SIGTERM from PID 1 (systemd).
Oct  8 05:00:46 np0005475493 kernel: audit: type=1404 audit(1759914045.768:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
Oct  8 05:00:46 np0005475493 kernel: SELinux:  policy capability network_peer_controls=1
Oct  8 05:00:46 np0005475493 kernel: SELinux:  policy capability open_perms=1
Oct  8 05:00:46 np0005475493 kernel: SELinux:  policy capability extended_socket_class=1
Oct  8 05:00:46 np0005475493 kernel: SELinux:  policy capability always_check_network=0
Oct  8 05:00:46 np0005475493 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct  8 05:00:46 np0005475493 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct  8 05:00:46 np0005475493 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct  8 05:00:46 np0005475493 kernel: audit: type=1403 audit(1759914045.931:3): auid=4294967295 ses=4294967295 lsm=selinux res=1
Oct  8 05:00:46 np0005475493 systemd: Successfully loaded SELinux policy in 167.858ms.
Oct  8 05:00:46 np0005475493 systemd: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 30.051ms.
Oct  8 05:00:46 np0005475493 systemd: systemd 252-55.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Oct  8 05:00:46 np0005475493 systemd: Detected virtualization kvm.
Oct  8 05:00:46 np0005475493 systemd: Detected architecture x86-64.
Oct  8 05:00:46 np0005475493 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  8 05:00:46 np0005475493 systemd: initrd-switch-root.service: Deactivated successfully.
Oct  8 05:00:46 np0005475493 systemd: Stopped Switch Root.
Oct  8 05:00:46 np0005475493 systemd: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Oct  8 05:00:46 np0005475493 systemd: Created slice Slice /system/getty.
Oct  8 05:00:46 np0005475493 systemd: Created slice Slice /system/serial-getty.
Oct  8 05:00:46 np0005475493 systemd: Created slice Slice /system/sshd-keygen.
Oct  8 05:00:46 np0005475493 systemd: Created slice User and Session Slice.
Oct  8 05:00:46 np0005475493 systemd: Started Dispatch Password Requests to Console Directory Watch.
Oct  8 05:00:46 np0005475493 systemd: Started Forward Password Requests to Wall Directory Watch.
Oct  8 05:00:46 np0005475493 systemd: Set up automount Arbitrary Executable File Formats File System Automount Point.
Oct  8 05:00:46 np0005475493 systemd: Reached target Local Encrypted Volumes.
Oct  8 05:00:46 np0005475493 systemd: Stopped target Switch Root.
Oct  8 05:00:46 np0005475493 systemd: Stopped target Initrd File Systems.
Oct  8 05:00:46 np0005475493 systemd: Stopped target Initrd Root File System.
Oct  8 05:00:46 np0005475493 systemd: Reached target Local Integrity Protected Volumes.
Oct  8 05:00:46 np0005475493 systemd: Reached target Path Units.
Oct  8 05:00:46 np0005475493 systemd: Reached target rpc_pipefs.target.
Oct  8 05:00:46 np0005475493 systemd: Reached target Slice Units.
Oct  8 05:00:46 np0005475493 systemd: Reached target Swaps.
Oct  8 05:00:46 np0005475493 systemd: Reached target Local Verity Protected Volumes.
Oct  8 05:00:46 np0005475493 systemd: Listening on RPCbind Server Activation Socket.
Oct  8 05:00:46 np0005475493 systemd: Reached target RPC Port Mapper.
Oct  8 05:00:46 np0005475493 systemd: Listening on Process Core Dump Socket.
Oct  8 05:00:46 np0005475493 systemd: Listening on initctl Compatibility Named Pipe.
Oct  8 05:00:46 np0005475493 systemd: Listening on udev Control Socket.
Oct  8 05:00:46 np0005475493 systemd: Listening on udev Kernel Socket.
Oct  8 05:00:46 np0005475493 systemd: Mounting Huge Pages File System...
Oct  8 05:00:46 np0005475493 systemd: Mounting POSIX Message Queue File System...
Oct  8 05:00:46 np0005475493 systemd: Mounting Kernel Debug File System...
Oct  8 05:00:46 np0005475493 systemd: Mounting Kernel Trace File System...
Oct  8 05:00:46 np0005475493 systemd: Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Oct  8 05:00:46 np0005475493 systemd: Starting Create List of Static Device Nodes...
Oct  8 05:00:46 np0005475493 systemd: Starting Load Kernel Module configfs...
Oct  8 05:00:46 np0005475493 systemd: Starting Load Kernel Module drm...
Oct  8 05:00:46 np0005475493 systemd: Starting Load Kernel Module efi_pstore...
Oct  8 05:00:46 np0005475493 systemd: Starting Load Kernel Module fuse...
Oct  8 05:00:46 np0005475493 systemd: Starting Read and set NIS domainname from /etc/sysconfig/network...
Oct  8 05:00:46 np0005475493 systemd: systemd-fsck-root.service: Deactivated successfully.
Oct  8 05:00:46 np0005475493 systemd: Stopped File System Check on Root Device.
Oct  8 05:00:46 np0005475493 systemd: Stopped Journal Service.
Oct  8 05:00:46 np0005475493 kernel: fuse: init (API version 7.37)
Oct  8 05:00:46 np0005475493 systemd: Starting Journal Service...
Oct  8 05:00:46 np0005475493 systemd: Load Kernel Modules was skipped because no trigger condition checks were met.
Oct  8 05:00:46 np0005475493 systemd: Starting Generate network units from Kernel command line...
Oct  8 05:00:46 np0005475493 systemd: TPM2 PCR Machine ID Measurement was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Oct  8 05:00:46 np0005475493 systemd: Starting Remount Root and Kernel File Systems...
Oct  8 05:00:46 np0005475493 systemd: Repartition Root Disk was skipped because no trigger condition checks were met.
Oct  8 05:00:46 np0005475493 systemd: Starting Apply Kernel Variables...
Oct  8 05:00:46 np0005475493 systemd: Starting Coldplug All udev Devices...
Oct  8 05:00:46 np0005475493 systemd: Mounted Huge Pages File System.
Oct  8 05:00:46 np0005475493 systemd: Mounted POSIX Message Queue File System.
Oct  8 05:00:46 np0005475493 systemd: Mounted Kernel Debug File System.
Oct  8 05:00:46 np0005475493 systemd: Mounted Kernel Trace File System.
Oct  8 05:00:46 np0005475493 systemd: Finished Create List of Static Device Nodes.
Oct  8 05:00:46 np0005475493 systemd-journald[678]: Journal started
Oct  8 05:00:46 np0005475493 systemd-journald[678]: Runtime Journal (/run/log/journal/42833e1b511a402df82cb9cb2fc36491) is 8.0M, max 153.5M, 145.5M free.
Oct  8 05:00:46 np0005475493 systemd[1]: Queued start job for default target Multi-User System.
Oct  8 05:00:46 np0005475493 systemd[1]: systemd-journald.service: Deactivated successfully.
Oct  8 05:00:46 np0005475493 kernel: xfs filesystem being remounted at / supports timestamps until 2038 (0x7fffffff)
Oct  8 05:00:46 np0005475493 systemd: Started Journal Service.
Oct  8 05:00:46 np0005475493 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct  8 05:00:46 np0005475493 systemd[1]: Finished Load Kernel Module configfs.
Oct  8 05:00:46 np0005475493 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct  8 05:00:46 np0005475493 systemd[1]: Finished Load Kernel Module efi_pstore.
Oct  8 05:00:46 np0005475493 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Oct  8 05:00:46 np0005475493 systemd[1]: Finished Load Kernel Module fuse.
Oct  8 05:00:46 np0005475493 kernel: ACPI: bus type drm_connector registered
Oct  8 05:00:46 np0005475493 systemd[1]: Finished Read and set NIS domainname from /etc/sysconfig/network.
Oct  8 05:00:46 np0005475493 systemd[1]: modprobe@drm.service: Deactivated successfully.
Oct  8 05:00:46 np0005475493 systemd[1]: Finished Load Kernel Module drm.
Oct  8 05:00:46 np0005475493 systemd[1]: Finished Generate network units from Kernel command line.
Oct  8 05:00:46 np0005475493 systemd[1]: Finished Remount Root and Kernel File Systems.
Oct  8 05:00:46 np0005475493 systemd[1]: Finished Apply Kernel Variables.
Oct  8 05:00:46 np0005475493 systemd[1]: Mounting FUSE Control File System...
Oct  8 05:00:46 np0005475493 systemd[1]: First Boot Wizard was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Oct  8 05:00:46 np0005475493 systemd[1]: Starting Rebuild Hardware Database...
Oct  8 05:00:46 np0005475493 systemd[1]: Starting Flush Journal to Persistent Storage...
Oct  8 05:00:46 np0005475493 systemd[1]: Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct  8 05:00:46 np0005475493 systemd[1]: Starting Load/Save OS Random Seed...
Oct  8 05:00:46 np0005475493 systemd[1]: Starting Create System Users...
Oct  8 05:00:46 np0005475493 systemd[1]: Mounted FUSE Control File System.
Oct  8 05:00:46 np0005475493 systemd-journald[678]: Runtime Journal (/run/log/journal/42833e1b511a402df82cb9cb2fc36491) is 8.0M, max 153.5M, 145.5M free.
Oct  8 05:00:46 np0005475493 systemd-journald[678]: Received client request to flush runtime journal.
Oct  8 05:00:46 np0005475493 systemd[1]: Finished Flush Journal to Persistent Storage.
Oct  8 05:00:46 np0005475493 systemd[1]: Finished Load/Save OS Random Seed.
Oct  8 05:00:46 np0005475493 systemd[1]: First Boot Complete was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Oct  8 05:00:46 np0005475493 systemd[1]: Finished Create System Users.
Oct  8 05:00:46 np0005475493 systemd[1]: Starting Create Static Device Nodes in /dev...
Oct  8 05:00:46 np0005475493 systemd[1]: Finished Coldplug All udev Devices.
Oct  8 05:00:46 np0005475493 systemd[1]: Finished Create Static Device Nodes in /dev.
Oct  8 05:00:46 np0005475493 systemd[1]: Reached target Preparation for Local File Systems.
Oct  8 05:00:46 np0005475493 systemd[1]: Reached target Local File Systems.
Oct  8 05:00:46 np0005475493 systemd[1]: Starting Rebuild Dynamic Linker Cache...
Oct  8 05:00:46 np0005475493 systemd[1]: Mark the need to relabel after reboot was skipped because of an unmet condition check (ConditionSecurity=!selinux).
Oct  8 05:00:46 np0005475493 systemd[1]: Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct  8 05:00:46 np0005475493 systemd[1]: Update Boot Loader Random Seed was skipped because no trigger condition checks were met.
Oct  8 05:00:46 np0005475493 systemd[1]: Starting Automatic Boot Loader Update...
Oct  8 05:00:46 np0005475493 systemd[1]: Commit a transient machine-id on disk was skipped because of an unmet condition check (ConditionPathIsMountPoint=/etc/machine-id).
Oct  8 05:00:46 np0005475493 systemd[1]: Starting Create Volatile Files and Directories...
Oct  8 05:00:46 np0005475493 bootctl[697]: Couldn't find EFI system partition, skipping.
Oct  8 05:00:46 np0005475493 systemd[1]: Finished Automatic Boot Loader Update.
Oct  8 05:00:46 np0005475493 systemd[1]: Finished Create Volatile Files and Directories.
Oct  8 05:00:46 np0005475493 systemd[1]: Starting Security Auditing Service...
Oct  8 05:00:46 np0005475493 systemd[1]: Starting RPC Bind...
Oct  8 05:00:46 np0005475493 systemd[1]: Starting Rebuild Journal Catalog...
Oct  8 05:00:46 np0005475493 auditd[703]: audit dispatcher initialized with q_depth=2000 and 1 active plugins
Oct  8 05:00:46 np0005475493 auditd[703]: Init complete, auditd 3.1.5 listening for events (startup state enable)
Oct  8 05:00:46 np0005475493 systemd[1]: Finished Rebuild Journal Catalog.
Oct  8 05:00:47 np0005475493 systemd[1]: Started RPC Bind.
Oct  8 05:00:47 np0005475493 augenrules[708]: /sbin/augenrules: No change
Oct  8 05:00:47 np0005475493 augenrules[723]: No rules
Oct  8 05:00:47 np0005475493 augenrules[723]: enabled 1
Oct  8 05:00:47 np0005475493 augenrules[723]: failure 1
Oct  8 05:00:47 np0005475493 augenrules[723]: pid 703
Oct  8 05:00:47 np0005475493 augenrules[723]: rate_limit 0
Oct  8 05:00:47 np0005475493 augenrules[723]: backlog_limit 8192
Oct  8 05:00:47 np0005475493 augenrules[723]: lost 0
Oct  8 05:00:47 np0005475493 augenrules[723]: backlog 3
Oct  8 05:00:47 np0005475493 augenrules[723]: backlog_wait_time 60000
Oct  8 05:00:47 np0005475493 augenrules[723]: backlog_wait_time_actual 0
Oct  8 05:00:47 np0005475493 augenrules[723]: enabled 1
Oct  8 05:00:47 np0005475493 augenrules[723]: failure 1
Oct  8 05:00:47 np0005475493 augenrules[723]: pid 703
Oct  8 05:00:47 np0005475493 augenrules[723]: rate_limit 0
Oct  8 05:00:47 np0005475493 augenrules[723]: backlog_limit 8192
Oct  8 05:00:47 np0005475493 augenrules[723]: lost 0
Oct  8 05:00:47 np0005475493 augenrules[723]: backlog 0
Oct  8 05:00:47 np0005475493 augenrules[723]: backlog_wait_time 60000
Oct  8 05:00:47 np0005475493 augenrules[723]: backlog_wait_time_actual 0
Oct  8 05:00:47 np0005475493 augenrules[723]: enabled 1
Oct  8 05:00:47 np0005475493 augenrules[723]: failure 1
Oct  8 05:00:47 np0005475493 augenrules[723]: pid 703
Oct  8 05:00:47 np0005475493 augenrules[723]: rate_limit 0
Oct  8 05:00:47 np0005475493 augenrules[723]: backlog_limit 8192
Oct  8 05:00:47 np0005475493 augenrules[723]: lost 0
Oct  8 05:00:47 np0005475493 augenrules[723]: backlog 2
Oct  8 05:00:47 np0005475493 augenrules[723]: backlog_wait_time 60000
Oct  8 05:00:47 np0005475493 augenrules[723]: backlog_wait_time_actual 0
Oct  8 05:00:47 np0005475493 systemd[1]: Started Security Auditing Service.
Oct  8 05:00:47 np0005475493 systemd[1]: Starting Record System Boot/Shutdown in UTMP...
Oct  8 05:00:47 np0005475493 systemd[1]: Finished Record System Boot/Shutdown in UTMP.
Oct  8 05:00:47 np0005475493 systemd[1]: Finished Rebuild Dynamic Linker Cache.
Oct  8 05:00:47 np0005475493 systemd[1]: Finished Rebuild Hardware Database.
Oct  8 05:00:47 np0005475493 systemd[1]: Starting Rule-based Manager for Device Events and Files...
Oct  8 05:00:47 np0005475493 systemd[1]: Starting Update is Completed...
Oct  8 05:00:47 np0005475493 systemd[1]: Finished Update is Completed.
Oct  8 05:00:47 np0005475493 systemd-udevd[731]: Using default interface naming scheme 'rhel-9.0'.
Oct  8 05:00:47 np0005475493 systemd[1]: Started Rule-based Manager for Device Events and Files.
Oct  8 05:00:47 np0005475493 systemd[1]: Reached target System Initialization.
Oct  8 05:00:47 np0005475493 systemd[1]: Started dnf makecache --timer.
Oct  8 05:00:47 np0005475493 systemd[1]: Started Daily rotation of log files.
Oct  8 05:00:47 np0005475493 systemd[1]: Started Daily Cleanup of Temporary Directories.
Oct  8 05:00:47 np0005475493 systemd[1]: Reached target Timer Units.
Oct  8 05:00:47 np0005475493 systemd[1]: Listening on D-Bus System Message Bus Socket.
Oct  8 05:00:47 np0005475493 systemd[1]: Listening on SSSD Kerberos Cache Manager responder socket.
Oct  8 05:00:47 np0005475493 systemd[1]: Reached target Socket Units.
Oct  8 05:00:47 np0005475493 systemd[1]: Starting D-Bus System Message Bus...
Oct  8 05:00:47 np0005475493 systemd[1]: TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Oct  8 05:00:47 np0005475493 systemd[1]: Condition check resulted in /dev/ttyS0 being skipped.
Oct  8 05:00:47 np0005475493 systemd[1]: Starting Load Kernel Module configfs...
Oct  8 05:00:47 np0005475493 systemd-udevd[750]: Network interface NamePolicy= disabled on kernel command line.
Oct  8 05:00:47 np0005475493 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct  8 05:00:47 np0005475493 systemd[1]: Finished Load Kernel Module configfs.
Oct  8 05:00:47 np0005475493 systemd[1]: Started D-Bus System Message Bus.
Oct  8 05:00:47 np0005475493 systemd[1]: Reached target Basic System.
Oct  8 05:00:47 np0005475493 dbus-broker-lau[754]: Ready
Oct  8 05:00:47 np0005475493 systemd[1]: Starting NTP client/server...
Oct  8 05:00:47 np0005475493 kernel: input: PC Speaker as /devices/platform/pcspkr/input/input6
Oct  8 05:00:47 np0005475493 systemd[1]: Starting Cloud-init: Local Stage (pre-network)...
Oct  8 05:00:47 np0005475493 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Oct  8 05:00:47 np0005475493 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Oct  8 05:00:47 np0005475493 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Oct  8 05:00:47 np0005475493 systemd[1]: Starting Restore /run/initramfs on shutdown...
Oct  8 05:00:47 np0005475493 systemd[1]: Starting IPv4 firewall with iptables...
Oct  8 05:00:47 np0005475493 chronyd[791]: chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +ASYNCDNS +NTS +SECHASH +IPV6 +DEBUG)
Oct  8 05:00:47 np0005475493 chronyd[791]: Loaded 0 symmetric keys
Oct  8 05:00:47 np0005475493 systemd[1]: Started irqbalance daemon.
Oct  8 05:00:47 np0005475493 chronyd[791]: Using right/UTC timezone to obtain leap second data
Oct  8 05:00:47 np0005475493 chronyd[791]: Loaded seccomp filter (level 2)
Oct  8 05:00:47 np0005475493 systemd[1]: Load CPU microcode update was skipped because of an unmet condition check (ConditionPathExists=/sys/devices/system/cpu/microcode/reload).
Oct  8 05:00:47 np0005475493 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct  8 05:00:47 np0005475493 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct  8 05:00:47 np0005475493 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct  8 05:00:47 np0005475493 systemd[1]: Reached target sshd-keygen.target.
Oct  8 05:00:47 np0005475493 systemd[1]: System Security Services Daemon was skipped because no trigger condition checks were met.
Oct  8 05:00:47 np0005475493 systemd[1]: Reached target User and Group Name Lookups.
Oct  8 05:00:47 np0005475493 systemd[1]: Starting User Login Management...
Oct  8 05:00:47 np0005475493 kernel: kvm_amd: TSC scaling supported
Oct  8 05:00:47 np0005475493 kernel: kvm_amd: Nested Virtualization enabled
Oct  8 05:00:47 np0005475493 kernel: kvm_amd: Nested Paging enabled
Oct  8 05:00:47 np0005475493 kernel: kvm_amd: LBR virtualization supported
Oct  8 05:00:47 np0005475493 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Oct  8 05:00:47 np0005475493 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Oct  8 05:00:47 np0005475493 kernel: Console: switching to colour dummy device 80x25
Oct  8 05:00:47 np0005475493 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Oct  8 05:00:47 np0005475493 kernel: [drm] features: -context_init
Oct  8 05:00:47 np0005475493 systemd[1]: Started NTP client/server.
Oct  8 05:00:47 np0005475493 kernel: [drm] number of scanouts: 1
Oct  8 05:00:47 np0005475493 kernel: [drm] number of cap sets: 0
Oct  8 05:00:47 np0005475493 systemd[1]: Finished Restore /run/initramfs on shutdown.
Oct  8 05:00:47 np0005475493 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0
Oct  8 05:00:47 np0005475493 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Oct  8 05:00:47 np0005475493 kernel: Console: switching to colour frame buffer device 128x48
Oct  8 05:00:47 np0005475493 kernel: Warning: Deprecated Driver is detected: nft_compat will not be maintained in a future major release and may be disabled
Oct  8 05:00:47 np0005475493 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Oct  8 05:00:47 np0005475493 kernel: Warning: Deprecated Driver is detected: nft_compat_module_init will not be maintained in a future major release and may be disabled
Oct  8 05:00:47 np0005475493 systemd-logind[798]: New seat seat0.
Oct  8 05:00:47 np0005475493 systemd-logind[798]: Watching system buttons on /dev/input/event0 (Power Button)
Oct  8 05:00:47 np0005475493 systemd-logind[798]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Oct  8 05:00:47 np0005475493 systemd[1]: Started User Login Management.
Oct  8 05:00:47 np0005475493 iptables.init[789]: iptables: Applying firewall rules: [  OK  ]
Oct  8 05:00:47 np0005475493 systemd[1]: Finished IPv4 firewall with iptables.
Oct  8 05:00:48 np0005475493 cloud-init[840]: Cloud-init v. 24.4-7.el9 running 'init-local' at Wed, 08 Oct 2025 09:00:48 +0000. Up 6.91 seconds.
Oct  8 05:00:48 np0005475493 systemd[1]: run-cloud\x2dinit-tmp-tmp8r7joyua.mount: Deactivated successfully.
Oct  8 05:00:48 np0005475493 systemd[1]: Starting Hostname Service...
Oct  8 05:00:48 np0005475493 systemd[1]: Started Hostname Service.
Oct  8 05:00:48 np0005475493 systemd-hostnamed[854]: Hostname set to <np0005475493.novalocal> (static)
Oct  8 05:00:48 np0005475493 systemd[1]: Finished Cloud-init: Local Stage (pre-network).
Oct  8 05:00:48 np0005475493 systemd[1]: Reached target Preparation for Network.
Oct  8 05:00:48 np0005475493 systemd[1]: Starting Network Manager...
Oct  8 05:00:48 np0005475493 NetworkManager[858]: <info>  [1759914048.8641] NetworkManager (version 1.54.1-1.el9) is starting... (boot:82191aaa-5b9a-46b2-ace7-0656efb209fc)
Oct  8 05:00:48 np0005475493 NetworkManager[858]: <info>  [1759914048.8653] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Oct  8 05:00:48 np0005475493 NetworkManager[858]: <info>  [1759914048.8828] manager[0x55ef394f6080]: monitoring kernel firmware directory '/lib/firmware'.
Oct  8 05:00:48 np0005475493 NetworkManager[858]: <info>  [1759914048.8907] hostname: hostname: using hostnamed
Oct  8 05:00:48 np0005475493 NetworkManager[858]: <info>  [1759914048.8908] hostname: static hostname changed from (none) to "np0005475493.novalocal"
Oct  8 05:00:48 np0005475493 NetworkManager[858]: <info>  [1759914048.8918] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Oct  8 05:00:48 np0005475493 NetworkManager[858]: <info>  [1759914048.9103] manager[0x55ef394f6080]: rfkill: Wi-Fi hardware radio set enabled
Oct  8 05:00:48 np0005475493 NetworkManager[858]: <info>  [1759914048.9106] manager[0x55ef394f6080]: rfkill: WWAN hardware radio set enabled
Oct  8 05:00:48 np0005475493 systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Oct  8 05:00:48 np0005475493 NetworkManager[858]: <info>  [1759914048.9227] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Oct  8 05:00:48 np0005475493 NetworkManager[858]: <info>  [1759914048.9229] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Oct  8 05:00:48 np0005475493 NetworkManager[858]: <info>  [1759914048.9230] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Oct  8 05:00:48 np0005475493 NetworkManager[858]: <info>  [1759914048.9232] manager: Networking is enabled by state file
Oct  8 05:00:48 np0005475493 NetworkManager[858]: <info>  [1759914048.9237] settings: Loaded settings plugin: keyfile (internal)
Oct  8 05:00:48 np0005475493 NetworkManager[858]: <info>  [1759914048.9285] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Oct  8 05:00:48 np0005475493 NetworkManager[858]: <info>  [1759914048.9329] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Oct  8 05:00:48 np0005475493 NetworkManager[858]: <info>  [1759914048.9365] dhcp: init: Using DHCP client 'internal'
Oct  8 05:00:48 np0005475493 NetworkManager[858]: <info>  [1759914048.9370] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Oct  8 05:00:48 np0005475493 systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct  8 05:00:48 np0005475493 NetworkManager[858]: <info>  [1759914048.9401] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  8 05:00:48 np0005475493 NetworkManager[858]: <info>  [1759914048.9422] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Oct  8 05:00:48 np0005475493 NetworkManager[858]: <info>  [1759914048.9442] device (lo): Activation: starting connection 'lo' (04954bd0-4d1f-4562-9334-15a987bf371b)
Oct  8 05:00:48 np0005475493 NetworkManager[858]: <info>  [1759914048.9463] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Oct  8 05:00:48 np0005475493 NetworkManager[858]: <info>  [1759914048.9471] device (eth0): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  8 05:00:48 np0005475493 systemd[1]: Started Network Manager.
Oct  8 05:00:48 np0005475493 NetworkManager[858]: <info>  [1759914048.9525] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Oct  8 05:00:48 np0005475493 NetworkManager[858]: <info>  [1759914048.9534] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Oct  8 05:00:48 np0005475493 NetworkManager[858]: <info>  [1759914048.9539] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Oct  8 05:00:48 np0005475493 NetworkManager[858]: <info>  [1759914048.9544] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Oct  8 05:00:48 np0005475493 NetworkManager[858]: <info>  [1759914048.9548] device (eth0): carrier: link connected
Oct  8 05:00:48 np0005475493 NetworkManager[858]: <info>  [1759914048.9556] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Oct  8 05:00:48 np0005475493 systemd[1]: Reached target Network.
Oct  8 05:00:48 np0005475493 NetworkManager[858]: <info>  [1759914048.9569] device (eth0): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Oct  8 05:00:48 np0005475493 NetworkManager[858]: <info>  [1759914048.9590] policy: auto-activating connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Oct  8 05:00:48 np0005475493 NetworkManager[858]: <info>  [1759914048.9599] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Oct  8 05:00:48 np0005475493 NetworkManager[858]: <info>  [1759914048.9601] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  8 05:00:48 np0005475493 NetworkManager[858]: <info>  [1759914048.9607] manager: NetworkManager state is now CONNECTING
Oct  8 05:00:48 np0005475493 NetworkManager[858]: <info>  [1759914048.9611] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  8 05:00:48 np0005475493 NetworkManager[858]: <info>  [1759914048.9625] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  8 05:00:48 np0005475493 NetworkManager[858]: <info>  [1759914048.9632] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Oct  8 05:00:48 np0005475493 NetworkManager[858]: <info>  [1759914048.9657] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Oct  8 05:00:48 np0005475493 NetworkManager[858]: <info>  [1759914048.9660] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Oct  8 05:00:48 np0005475493 NetworkManager[858]: <info>  [1759914048.9667] device (lo): Activation: successful, device activated.
Oct  8 05:00:48 np0005475493 systemd[1]: Starting Network Manager Wait Online...
Oct  8 05:00:48 np0005475493 NetworkManager[858]: <info>  [1759914048.9696] dhcp4 (eth0): state changed new lease, address=38.102.83.224
Oct  8 05:00:48 np0005475493 NetworkManager[858]: <info>  [1759914048.9708] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Oct  8 05:00:48 np0005475493 systemd[1]: Starting GSSAPI Proxy Daemon...
Oct  8 05:00:48 np0005475493 NetworkManager[858]: <info>  [1759914048.9747] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  8 05:00:48 np0005475493 systemd[1]: Started Network Manager Script Dispatcher Service.
Oct  8 05:00:48 np0005475493 NetworkManager[858]: <info>  [1759914048.9776] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  8 05:00:48 np0005475493 NetworkManager[858]: <info>  [1759914048.9778] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  8 05:00:48 np0005475493 NetworkManager[858]: <info>  [1759914048.9785] manager: NetworkManager state is now CONNECTED_SITE
Oct  8 05:00:48 np0005475493 NetworkManager[858]: <info>  [1759914048.9796] device (eth0): Activation: successful, device activated.
Oct  8 05:00:48 np0005475493 NetworkManager[858]: <info>  [1759914048.9803] manager: NetworkManager state is now CONNECTED_GLOBAL
Oct  8 05:00:48 np0005475493 NetworkManager[858]: <info>  [1759914048.9808] manager: startup complete
Oct  8 05:00:49 np0005475493 systemd[1]: Finished Network Manager Wait Online.
Oct  8 05:00:49 np0005475493 systemd[1]: Started GSSAPI Proxy Daemon.
Oct  8 05:00:49 np0005475493 systemd[1]: Starting Cloud-init: Network Stage...
Oct  8 05:00:49 np0005475493 systemd[1]: RPC security service for NFS client and server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Oct  8 05:00:49 np0005475493 systemd[1]: Reached target NFS client services.
Oct  8 05:00:49 np0005475493 systemd[1]: Reached target Preparation for Remote File Systems.
Oct  8 05:00:49 np0005475493 systemd[1]: Reached target Remote File Systems.
Oct  8 05:00:49 np0005475493 systemd[1]: TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Oct  8 05:00:49 np0005475493 cloud-init[921]: Cloud-init v. 24.4-7.el9 running 'init' at Wed, 08 Oct 2025 09:00:49 +0000. Up 8.00 seconds.
Oct  8 05:00:49 np0005475493 cloud-init[921]: ci-info: +++++++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++++++
Oct  8 05:00:49 np0005475493 cloud-init[921]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Oct  8 05:00:49 np0005475493 cloud-init[921]: ci-info: | Device |  Up  |           Address            |      Mask     | Scope  |     Hw-Address    |
Oct  8 05:00:49 np0005475493 cloud-init[921]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Oct  8 05:00:49 np0005475493 cloud-init[921]: ci-info: |  eth0  | True |        38.102.83.224         | 255.255.255.0 | global | fa:16:3e:7c:7c:9b |
Oct  8 05:00:49 np0005475493 cloud-init[921]: ci-info: |  eth0  | True | fe80::f816:3eff:fe7c:7c9b/64 |       .       |  link  | fa:16:3e:7c:7c:9b |
Oct  8 05:00:49 np0005475493 cloud-init[921]: ci-info: |   lo   | True |          127.0.0.1           |   255.0.0.0   |  host  |         .         |
Oct  8 05:00:49 np0005475493 cloud-init[921]: ci-info: |   lo   | True |           ::1/128            |       .       |  host  |         .         |
Oct  8 05:00:49 np0005475493 cloud-init[921]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Oct  8 05:00:49 np0005475493 cloud-init[921]: ci-info: +++++++++++++++++++++++++++++++++Route IPv4 info+++++++++++++++++++++++++++++++++
Oct  8 05:00:49 np0005475493 cloud-init[921]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Oct  8 05:00:49 np0005475493 cloud-init[921]: ci-info: | Route |   Destination   |    Gateway    |     Genmask     | Interface | Flags |
Oct  8 05:00:49 np0005475493 cloud-init[921]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Oct  8 05:00:49 np0005475493 cloud-init[921]: ci-info: |   0   |     0.0.0.0     |  38.102.83.1  |     0.0.0.0     |    eth0   |   UG  |
Oct  8 05:00:49 np0005475493 cloud-init[921]: ci-info: |   1   |   38.102.83.0   |    0.0.0.0    |  255.255.255.0  |    eth0   |   U   |
Oct  8 05:00:49 np0005475493 cloud-init[921]: ci-info: |   2   | 169.254.169.254 | 38.102.83.126 | 255.255.255.255 |    eth0   |  UGH  |
Oct  8 05:00:49 np0005475493 cloud-init[921]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Oct  8 05:00:49 np0005475493 cloud-init[921]: ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++
Oct  8 05:00:49 np0005475493 cloud-init[921]: ci-info: +-------+-------------+---------+-----------+-------+
Oct  8 05:00:49 np0005475493 cloud-init[921]: ci-info: | Route | Destination | Gateway | Interface | Flags |
Oct  8 05:00:49 np0005475493 cloud-init[921]: ci-info: +-------+-------------+---------+-----------+-------+
Oct  8 05:00:49 np0005475493 cloud-init[921]: ci-info: |   1   |  fe80::/64  |    ::   |    eth0   |   U   |
Oct  8 05:00:49 np0005475493 cloud-init[921]: ci-info: |   3   |  multicast  |    ::   |    eth0   |   U   |
Oct  8 05:00:49 np0005475493 cloud-init[921]: ci-info: +-------+-------------+---------+-----------+-------+
Oct  8 05:00:50 np0005475493 cloud-init[921]: Generating public/private rsa key pair.
Oct  8 05:00:50 np0005475493 cloud-init[921]: Your identification has been saved in /etc/ssh/ssh_host_rsa_key
Oct  8 05:00:50 np0005475493 cloud-init[921]: Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub
Oct  8 05:00:50 np0005475493 cloud-init[921]: The key fingerprint is:
Oct  8 05:00:50 np0005475493 cloud-init[921]: SHA256:zkmyIan+dyRsgZ5A1mcR1AvqsuxRX5EYV9o9u87GRlc root@np0005475493.novalocal
Oct  8 05:00:50 np0005475493 cloud-init[921]: The key's randomart image is:
Oct  8 05:00:50 np0005475493 cloud-init[921]: +---[RSA 3072]----+
Oct  8 05:00:50 np0005475493 cloud-init[921]: |   . .=+...      |
Oct  8 05:00:50 np0005475493 cloud-init[921]: |  o . ++.+ .     |
Oct  8 05:00:50 np0005475493 cloud-init[921]: | o   =..+.. o    |
Oct  8 05:00:50 np0005475493 cloud-init[921]: |  . o.. ..   o  E|
Oct  8 05:00:50 np0005475493 cloud-init[921]: |   ++oo.S   .  . |
Oct  8 05:00:50 np0005475493 cloud-init[921]: |  .o+o+O..  ...  |
Oct  8 05:00:50 np0005475493 cloud-init[921]: | .oo .oo+  o..   |
Oct  8 05:00:50 np0005475493 cloud-init[921]: | .o.  . .  o+    |
Oct  8 05:00:50 np0005475493 cloud-init[921]: | .o... .   oo    |
Oct  8 05:00:50 np0005475493 cloud-init[921]: +----[SHA256]-----+
Oct  8 05:00:50 np0005475493 cloud-init[921]: Generating public/private ecdsa key pair.
Oct  8 05:00:50 np0005475493 cloud-init[921]: Your identification has been saved in /etc/ssh/ssh_host_ecdsa_key
Oct  8 05:00:50 np0005475493 cloud-init[921]: Your public key has been saved in /etc/ssh/ssh_host_ecdsa_key.pub
Oct  8 05:00:50 np0005475493 cloud-init[921]: The key fingerprint is:
Oct  8 05:00:50 np0005475493 cloud-init[921]: SHA256:tdkYIjRLpGVMB4n+nOfKFTscibJeulw5MdgfTkVt90k root@np0005475493.novalocal
Oct  8 05:00:50 np0005475493 cloud-init[921]: The key's randomart image is:
Oct  8 05:00:50 np0005475493 cloud-init[921]: +---[ECDSA 256]---+
Oct  8 05:00:50 np0005475493 cloud-init[921]: |     =Xo. ..     |
Oct  8 05:00:50 np0005475493 cloud-init[921]: |    .*o+ .  o .E |
Oct  8 05:00:50 np0005475493 cloud-init[921]: |   .. o . +. ....|
Oct  8 05:00:50 np0005475493 cloud-init[921]: |    .o o = *   ..|
Oct  8 05:00:50 np0005475493 cloud-init[921]: |    oo=.S + .    |
Oct  8 05:00:50 np0005475493 cloud-init[921]: |     o+O.=       |
Oct  8 05:00:50 np0005475493 cloud-init[921]: |    . =oB        |
Oct  8 05:00:50 np0005475493 cloud-init[921]: |   o = o..       |
Oct  8 05:00:50 np0005475493 cloud-init[921]: |    =.o.         |
Oct  8 05:00:50 np0005475493 cloud-init[921]: +----[SHA256]-----+
Oct  8 05:00:50 np0005475493 cloud-init[921]: Generating public/private ed25519 key pair.
Oct  8 05:00:50 np0005475493 cloud-init[921]: Your identification has been saved in /etc/ssh/ssh_host_ed25519_key
Oct  8 05:00:50 np0005475493 cloud-init[921]: Your public key has been saved in /etc/ssh/ssh_host_ed25519_key.pub
Oct  8 05:00:50 np0005475493 cloud-init[921]: The key fingerprint is:
Oct  8 05:00:50 np0005475493 cloud-init[921]: SHA256:tVYR/4SbaMQxTiQ1BXMWvmg17fFKyRWOinLym2kAuuE root@np0005475493.novalocal
Oct  8 05:00:50 np0005475493 cloud-init[921]: The key's randomart image is:
Oct  8 05:00:50 np0005475493 cloud-init[921]: +--[ED25519 256]--+
Oct  8 05:00:50 np0005475493 cloud-init[921]: |          .o%+=o |
Oct  8 05:00:50 np0005475493 cloud-init[921]: |           = @oo.|
Oct  8 05:00:50 np0005475493 cloud-init[921]: |          . =.*o+|
Oct  8 05:00:50 np0005475493 cloud-init[921]: |     .   ..+.= @o|
Oct  8 05:00:50 np0005475493 cloud-init[921]: |    . .oSoo.= B +|
Oct  8 05:00:50 np0005475493 cloud-init[921]: |   o   .=. o . . |
Oct  8 05:00:50 np0005475493 cloud-init[921]: |  . o   ..    .  |
Oct  8 05:00:50 np0005475493 cloud-init[921]: |   E     .+      |
Oct  8 05:00:50 np0005475493 cloud-init[921]: |        .+       |
Oct  8 05:00:50 np0005475493 cloud-init[921]: +----[SHA256]-----+
Oct  8 05:00:50 np0005475493 sm-notify[1004]: Version 2.5.4 starting
Oct  8 05:00:50 np0005475493 systemd[1]: Finished Cloud-init: Network Stage.
Oct  8 05:00:50 np0005475493 systemd[1]: Reached target Cloud-config availability.
Oct  8 05:00:50 np0005475493 systemd[1]: Reached target Network is Online.
Oct  8 05:00:50 np0005475493 systemd[1]: Starting Cloud-init: Config Stage...
Oct  8 05:00:50 np0005475493 systemd[1]: Starting Notify NFS peers of a restart...
Oct  8 05:00:50 np0005475493 systemd[1]: Starting System Logging Service...
Oct  8 05:00:50 np0005475493 systemd[1]: Starting OpenSSH server daemon...
Oct  8 05:00:50 np0005475493 systemd[1]: Starting Permit User Sessions...
Oct  8 05:00:50 np0005475493 systemd[1]: Started Notify NFS peers of a restart.
Oct  8 05:00:50 np0005475493 systemd[1]: Started OpenSSH server daemon.
Oct  8 05:00:50 np0005475493 systemd[1]: Finished Permit User Sessions.
Oct  8 05:00:50 np0005475493 systemd[1]: Started Command Scheduler.
Oct  8 05:00:50 np0005475493 systemd[1]: Started Getty on tty1.
Oct  8 05:00:50 np0005475493 systemd[1]: Started Serial Getty on ttyS0.
Oct  8 05:00:50 np0005475493 systemd[1]: Reached target Login Prompts.
Oct  8 05:00:50 np0005475493 systemd[1]: Started System Logging Service.
Oct  8 05:00:50 np0005475493 rsyslogd[1005]: [origin software="rsyslogd" swVersion="8.2506.0-2.el9" x-pid="1005" x-info="https://www.rsyslog.com"] start
Oct  8 05:00:50 np0005475493 rsyslogd[1005]: imjournal: No statefile exists, /var/lib/rsyslog/imjournal.state will be created (ignore if this is first run): No such file or directory [v8.2506.0-2.el9 try https://www.rsyslog.com/e/2040 ]
Oct  8 05:00:50 np0005475493 systemd[1]: Reached target Multi-User System.
Oct  8 05:00:50 np0005475493 systemd[1]: Starting Record Runlevel Change in UTMP...
Oct  8 05:00:51 np0005475493 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Oct  8 05:00:51 np0005475493 systemd[1]: Finished Record Runlevel Change in UTMP.
Oct  8 05:00:51 np0005475493 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  8 05:00:51 np0005475493 cloud-init[1035]: Cloud-init v. 24.4-7.el9 running 'modules:config' at Wed, 08 Oct 2025 09:00:51 +0000. Up 9.93 seconds.
Oct  8 05:00:51 np0005475493 systemd[1]: Finished Cloud-init: Config Stage.
Oct  8 05:00:51 np0005475493 systemd[1]: Starting Cloud-init: Final Stage...
Oct  8 05:00:51 np0005475493 cloud-init[1039]: Cloud-init v. 24.4-7.el9 running 'modules:final' at Wed, 08 Oct 2025 09:00:51 +0000. Up 10.37 seconds.
Oct  8 05:00:51 np0005475493 cloud-init[1041]: #############################################################
Oct  8 05:00:51 np0005475493 cloud-init[1042]: -----BEGIN SSH HOST KEY FINGERPRINTS-----
Oct  8 05:00:51 np0005475493 cloud-init[1044]: 256 SHA256:tdkYIjRLpGVMB4n+nOfKFTscibJeulw5MdgfTkVt90k root@np0005475493.novalocal (ECDSA)
Oct  8 05:00:51 np0005475493 cloud-init[1046]: 256 SHA256:tVYR/4SbaMQxTiQ1BXMWvmg17fFKyRWOinLym2kAuuE root@np0005475493.novalocal (ED25519)
Oct  8 05:00:51 np0005475493 cloud-init[1048]: 3072 SHA256:zkmyIan+dyRsgZ5A1mcR1AvqsuxRX5EYV9o9u87GRlc root@np0005475493.novalocal (RSA)
Oct  8 05:00:51 np0005475493 cloud-init[1049]: -----END SSH HOST KEY FINGERPRINTS-----
Oct  8 05:00:51 np0005475493 cloud-init[1050]: #############################################################
Oct  8 05:00:51 np0005475493 cloud-init[1039]: Cloud-init v. 24.4-7.el9 finished at Wed, 08 Oct 2025 09:00:51 +0000. Datasource DataSourceConfigDrive [net,ver=2][source=/dev/sr0].  Up 10.58 seconds
Oct  8 05:00:51 np0005475493 systemd[1]: Finished Cloud-init: Final Stage.
Oct  8 05:00:51 np0005475493 systemd[1]: Reached target Cloud-init target.
Oct  8 05:00:51 np0005475493 systemd[1]: Startup finished in 1.654s (kernel) + 2.795s (initrd) + 6.225s (userspace) = 10.674s.
Oct  8 05:00:53 np0005475493 chronyd[791]: Selected source 162.159.200.1 (2.centos.pool.ntp.org)
Oct  8 05:00:53 np0005475493 chronyd[791]: System clock TAI offset set to 37 seconds
Oct  8 05:00:58 np0005475493 irqbalance[792]: Cannot change IRQ 35 affinity: Operation not permitted
Oct  8 05:00:58 np0005475493 irqbalance[792]: IRQ 35 affinity is now unmanaged
Oct  8 05:00:58 np0005475493 irqbalance[792]: Cannot change IRQ 33 affinity: Operation not permitted
Oct  8 05:00:58 np0005475493 irqbalance[792]: IRQ 33 affinity is now unmanaged
Oct  8 05:00:58 np0005475493 irqbalance[792]: Cannot change IRQ 31 affinity: Operation not permitted
Oct  8 05:00:58 np0005475493 irqbalance[792]: IRQ 31 affinity is now unmanaged
Oct  8 05:00:58 np0005475493 irqbalance[792]: Cannot change IRQ 28 affinity: Operation not permitted
Oct  8 05:00:58 np0005475493 irqbalance[792]: IRQ 28 affinity is now unmanaged
Oct  8 05:00:58 np0005475493 irqbalance[792]: Cannot change IRQ 34 affinity: Operation not permitted
Oct  8 05:00:58 np0005475493 irqbalance[792]: IRQ 34 affinity is now unmanaged
Oct  8 05:00:58 np0005475493 irqbalance[792]: Cannot change IRQ 32 affinity: Operation not permitted
Oct  8 05:00:58 np0005475493 irqbalance[792]: IRQ 32 affinity is now unmanaged
Oct  8 05:00:58 np0005475493 irqbalance[792]: Cannot change IRQ 30 affinity: Operation not permitted
Oct  8 05:00:58 np0005475493 irqbalance[792]: IRQ 30 affinity is now unmanaged
Oct  8 05:00:58 np0005475493 irqbalance[792]: Cannot change IRQ 29 affinity: Operation not permitted
Oct  8 05:00:58 np0005475493 irqbalance[792]: IRQ 29 affinity is now unmanaged
Oct  8 05:00:59 np0005475493 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct  8 05:01:18 np0005475493 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Oct  8 05:06:41 np0005475493 systemd[1]: Created slice User Slice of UID 1000.
Oct  8 05:06:41 np0005475493 systemd[1]: Starting User Runtime Directory /run/user/1000...
Oct  8 05:06:41 np0005475493 systemd-logind[798]: New session 1 of user zuul.
Oct  8 05:06:41 np0005475493 systemd[1]: Finished User Runtime Directory /run/user/1000.
Oct  8 05:06:41 np0005475493 systemd[1]: Starting User Manager for UID 1000...
Oct  8 05:06:42 np0005475493 systemd[1076]: Queued start job for default target Main User Target.
Oct  8 05:06:42 np0005475493 systemd[1076]: Created slice User Application Slice.
Oct  8 05:06:42 np0005475493 systemd[1076]: Started Mark boot as successful after the user session has run 2 minutes.
Oct  8 05:06:42 np0005475493 systemd[1076]: Started Daily Cleanup of User's Temporary Directories.
Oct  8 05:06:42 np0005475493 systemd[1076]: Reached target Paths.
Oct  8 05:06:42 np0005475493 systemd[1076]: Reached target Timers.
Oct  8 05:06:42 np0005475493 systemd[1076]: Starting D-Bus User Message Bus Socket...
Oct  8 05:06:42 np0005475493 systemd[1076]: Starting Create User's Volatile Files and Directories...
Oct  8 05:06:42 np0005475493 systemd[1076]: Finished Create User's Volatile Files and Directories.
Oct  8 05:06:42 np0005475493 systemd[1076]: Listening on D-Bus User Message Bus Socket.
Oct  8 05:06:42 np0005475493 systemd[1076]: Reached target Sockets.
Oct  8 05:06:42 np0005475493 systemd[1076]: Reached target Basic System.
Oct  8 05:06:42 np0005475493 systemd[1076]: Reached target Main User Target.
Oct  8 05:06:42 np0005475493 systemd[1076]: Startup finished in 148ms.
Oct  8 05:06:42 np0005475493 systemd[1]: Started User Manager for UID 1000.
Oct  8 05:06:42 np0005475493 systemd[1]: Started Session 1 of User zuul.
Oct  8 05:06:42 np0005475493 python3[1160]: ansible-setup Invoked with gather_subset=['!all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  8 05:06:45 np0005475493 python3[1188]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  8 05:06:53 np0005475493 python3[1246]: ansible-setup Invoked with gather_subset=['network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  8 05:06:54 np0005475493 python3[1286]: ansible-zuul_console Invoked with path=/tmp/console-{log_uuid}.log port=19885 state=present
Oct  8 05:06:57 np0005475493 python3[1312]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDUwaJLzYFiNMxkHUdiBe5nX2QD24WnDKKnH7pPHAe2hO1x3tFKdJakzS4Bfn+9WwnlXOTdyqf0G299I1IneRKu3lN8N3LECCnsTdRIJRu5V7vlSuDb2oOMllH6OwZOlpzosOkxzyaiTlCJ8EBGkWNVPZaggh5EfmAxs8MtYtZinH3BlIW1J+SNhG3E7vCYVwtBNTBCCOf8U+pg16czZVFXrl0bKb2r5PiaOpdn2Fmlwaa1z9/bysG3rCSV5SLgUJ4R+62pk8UrzKC8r3ABILvLnkDelceMZJBXLm79ZmcSL6VZ3KKZAxM+X9gpoqi3TBSj9vB/OpdUAPz/mNonUWSU5fHkbF+UpPWYQGBgz1F1Iu3nTdgNFxA7yQ4NMbyeAA9ir1T0O18DVGRZp4xtPB6jkOSY8yzNk+VF8QSd1VWOet5cVrLOYsXfEOhgwwcl39ellVnP0jkHz6MPI3OcVtof5xX9oKTDZdRU+Fojahw6MKOJf06ThtnT07+ldpJXTG0= zuul-build-sshkey manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  8 05:06:57 np0005475493 python3[1336]: ansible-file Invoked with state=directory path=/home/zuul/.ssh mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:06:58 np0005475493 python3[1435]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  8 05:06:58 np0005475493 python3[1506]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759914418.142136-251-248779595669966/source dest=/home/zuul/.ssh/id_rsa mode=384 force=False _original_basename=0e07e78396794ac580c5f2d1d33f7e10_id_rsa follow=False checksum=bf7da7a5da71175c68fe99de2c0a4da4e66ecbd4 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:06:59 np0005475493 python3[1630]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa.pub follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  8 05:06:59 np0005475493 python3[1701]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759914419.106228-306-142776437604594/source dest=/home/zuul/.ssh/id_rsa.pub mode=420 force=False _original_basename=0e07e78396794ac580c5f2d1d33f7e10_id_rsa.pub follow=False checksum=7e2a4273ddd70a29398d6f290ff6fb3351190f55 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:07:01 np0005475493 python3[1749]: ansible-ping Invoked with data=pong
Oct  8 05:07:02 np0005475493 python3[1773]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  8 05:07:05 np0005475493 python3[1831]: ansible-zuul_debug_info Invoked with ipv4_route_required=False ipv6_route_required=False image_manifest_files=['/etc/dib-builddate.txt', '/etc/image-hostname.txt'] image_manifest=None traceroute_host=None
Oct  8 05:07:07 np0005475493 python3[1863]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:07:07 np0005475493 python3[1887]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:07:07 np0005475493 python3[1911]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:07:08 np0005475493 python3[1935]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:07:08 np0005475493 python3[1959]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:07:08 np0005475493 python3[1983]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:07:10 np0005475493 python3[2009]: ansible-file Invoked with path=/etc/ci state=directory owner=root group=root mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:07:11 np0005475493 python3[2087]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/mirror_info.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  8 05:07:12 np0005475493 python3[2160]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/mirror_info.sh owner=root group=root mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1759914431.0952034-31-198404192533432/source follow=False _original_basename=mirror_info.sh.j2 checksum=92d92a03afdddee82732741071f662c729080c35 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:07:12 np0005475493 python3[2208]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4Z/c9osaGGtU6X8fgELwfj/yayRurfcKA0HMFfdpPxev2dbwljysMuzoVp4OZmW1gvGtyYPSNRvnzgsaabPNKNo2ym5NToCP6UM+KSe93aln4BcM/24mXChYAbXJQ5Bqq/pIzsGs/pKetQN+vwvMxLOwTvpcsCJBXaa981RKML6xj9l/UZ7IIq1HSEKMvPLxZMWdu0Ut8DkCd5F4nOw9Wgml2uYpDCj5LLCrQQ9ChdOMz8hz6SighhNlRpPkvPaet3OXxr/ytFMu7j7vv06CaEnuMMiY2aTWN1Imin9eHAylIqFHta/3gFfQSWt9jXM7owkBLKL7ATzhaAn+fjNupw== arxcruz@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  8 05:07:13 np0005475493 python3[2232]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDS4Fn6k4deCnIlOtLWqZJyksbepjQt04j8Ed8CGx9EKkj0fKiAxiI4TadXQYPuNHMixZy4Nevjb6aDhL5Z906TfvNHKUrjrG7G26a0k8vdc61NEQ7FmcGMWRLwwc6ReDO7lFpzYKBMk4YqfWgBuGU/K6WLKiVW2cVvwIuGIaYrE1OiiX0iVUUk7KApXlDJMXn7qjSYynfO4mF629NIp8FJal38+Kv+HA+0QkE5Y2xXnzD4Lar5+keymiCHRntPppXHeLIRzbt0gxC7v3L72hpQ3BTBEzwHpeS8KY+SX1y5lRMN45thCHfJqGmARJREDjBvWG8JXOPmVIKQtZmVcD5b mandreou@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  8 05:07:13 np0005475493 python3[2256]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9MiLfy30deHA7xPOAlew5qUq3UP2gmRMYJi8PtkjFB20/DKeWwWNnkZPqP9AayruRoo51SIiVg870gbZE2jYl+Ncx/FYDe56JeC3ySZsXoAVkC9bP7gkOGqOmJjirvAgPMI7bogVz8i+66Q4Ar7OKTp3762G4IuWPPEg4ce4Y7lx9qWocZapHYq4cYKMxrOZ7SEbFSATBbe2bPZAPKTw8do/Eny+Hq/LkHFhIeyra6cqTFQYShr+zPln0Cr+ro/pDX3bB+1ubFgTpjpkkkQsLhDfR6cCdCWM2lgnS3BTtYj5Ct9/JRPR5YOphqZz+uB+OEu2IL68hmU9vNTth1KeX rlandy@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  8 05:07:13 np0005475493 python3[2280]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFCbgz8gdERiJlk2IKOtkjQxEXejrio6ZYMJAVJYpOIp raukadah@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  8 05:07:14 np0005475493 python3[2304]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBqb3Q/9uDf4LmihQ7xeJ9gA/STIQUFPSfyyV0m8AoQi bshewale@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  8 05:07:14 np0005475493 python3[2328]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0I8QqQx0Az2ysJt2JuffucLijhBqnsXKEIx5GyHwxVULROa8VtNFXUDH6ZKZavhiMcmfHB2+TBTda+lDP4FldYj06dGmzCY+IYGa+uDRdxHNGYjvCfLFcmLlzRK6fNbTcui+KlUFUdKe0fb9CRoGKyhlJD5GRkM1Dv+Yb6Bj+RNnmm1fVGYxzmrD2utvffYEb0SZGWxq2R9gefx1q/3wCGjeqvufEV+AskPhVGc5T7t9eyZ4qmslkLh1/nMuaIBFcr9AUACRajsvk6mXrAN1g3HlBf2gQlhi1UEyfbqIQvzzFtsbLDlSum/KmKjy818GzvWjERfQ0VkGzCd9bSLVL dviroel@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  8 05:07:14 np0005475493 python3[2352]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLOQd4ZLtkZXQGY6UwAr/06ppWQK4fDO3HaqxPk98csyOCBXsliSKK39Bso828+5srIXiW7aI6aC9P5mwi4mUZlGPfJlQbfrcGvY+b/SocuvaGK+1RrHLoJCT52LBhwgrzlXio2jeksZeein8iaTrhsPrOAs7KggIL/rB9hEiB3NaOPWhhoCP4vlW6MEMExGcqB/1FVxXFBPnLkEyW0Lk7ycVflZl2ocRxbfjZi0+tI1Wlinp8PvSQSc/WVrAcDgKjc/mB4ODPOyYy3G8FHgfMsrXSDEyjBKgLKMsdCrAUcqJQWjkqXleXSYOV4q3pzL+9umK+q/e3P/bIoSFQzmJKTU1eDfuvPXmow9F5H54fii/Da7ezlMJ+wPGHJrRAkmzvMbALy7xwswLhZMkOGNtRcPqaKYRmIBKpw3o6bCTtcNUHOtOQnzwY8JzrM2eBWJBXAANYw+9/ho80JIiwhg29CFNpVBuHbql2YxJQNrnl90guN65rYNpDxdIluweyUf8= anbanerj@kaermorhen manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  8 05:07:14 np0005475493 python3[2376]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3VwV8Im9kRm49lt3tM36hj4Zv27FxGo4C1Q/0jqhzFmHY7RHbmeRr8ObhwWoHjXSozKWg8FL5ER0z3hTwL0W6lez3sL7hUaCmSuZmG5Hnl3x4vTSxDI9JZ/Y65rtYiiWQo2fC5xJhU/4+0e5e/pseCm8cKRSu+SaxhO+sd6FDojA2x1BzOzKiQRDy/1zWGp/cZkxcEuB1wHI5LMzN03c67vmbu+fhZRAUO4dQkvcnj2LrhQtpa+ytvnSjr8icMDosf1OsbSffwZFyHB/hfWGAfe0eIeSA2XPraxiPknXxiPKx2MJsaUTYbsZcm3EjFdHBBMumw5rBI74zLrMRvCO9GwBEmGT4rFng1nP+yw5DB8sn2zqpOsPg1LYRwCPOUveC13P6pgsZZPh812e8v5EKnETct+5XI3dVpdw6CnNiLwAyVAF15DJvBGT/u1k0Myg/bQn+Gv9k2MSj6LvQmf6WbZu2Wgjm30z3FyCneBqTL7mLF19YXzeC0ufHz5pnO1E= dasm@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  8 05:07:15 np0005475493 python3[2400]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHUnwjB20UKmsSed9X73eGNV5AOEFccQ3NYrRW776pEk cjeanner manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  8 05:07:15 np0005475493 python3[2424]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDercCMGn8rW1C4P67tHgtflPdTeXlpyUJYH+6XDd2lR jgilaber@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  8 05:07:15 np0005475493 python3[2448]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAMI6kkg9Wg0sG7jIJmyZemEBwUn1yzNpQQd3gnulOmZ adrianfuscoarnejo@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  8 05:07:16 np0005475493 python3[2472]: ansible-authorized_key Invoked with user=zuul state=present key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPijwpQu/3jhhhBZInXNOLEH57DrknPc3PLbsRvYyJIFzwYjX+WD4a7+nGnMYS42MuZk6TJcVqgnqofVx4isoD4= ramishra@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  8 05:07:16 np0005475493 python3[2496]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGpU/BepK3qX0NRf5Np+dOBDqzQEefhNrw2DCZaH3uWW rebtoor@monolith manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  8 05:07:16 np0005475493 python3[2520]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDK0iKdi8jQTpQrDdLVH/AAgLVYyTXF7AQ1gjc/5uT3t ykarel@yatinkarel manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  8 05:07:16 np0005475493 python3[2544]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF/V/cLotA6LZeO32VL45Hd78skuA2lJA425Sm2LlQeZ fmount@horcrux manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  8 05:07:17 np0005475493 python3[2568]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDa7QCjuDMVmRPo1rREbGwzYeBCYVN+Ou/3WKXZEC6Sr manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  8 05:07:17 np0005475493 python3[2592]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCfNtF7NvKl915TGsGGoseUb06Hj8L/S4toWf0hExeY+F00woL6NvBlJD0nDct+P5a22I4EhvoQCRQ8reaPCm1lybR3uiRIJsj+8zkVvLwby9LXzfZorlNG9ofjd00FEmB09uW/YvTl6Q9XwwwX6tInzIOv3TMqTHHGOL74ibbj8J/FJR0cFEyj0z4WQRvtkh32xAHl83gbuINryMt0sqRI+clj2381NKL55DRLQrVw0gsfqqxiHAnXg21qWmc4J+b9e9kiuAFQjcjwTVkwJCcg3xbPwC/qokYRby/Y5S40UUd7/jEARGXT7RZgpzTuDd1oZiCVrnrqJNPaMNdVv5MLeFdf1B7iIe5aa/fGouX7AO4SdKhZUdnJmCFAGvjC6S3JMZ2wAcUl+OHnssfmdj7XL50cLo27vjuzMtLAgSqi6N99m92WCF2s8J9aVzszX7Xz9OKZCeGsiVJp3/NdABKzSEAyM9xBD/5Vho894Sav+otpySHe3p6RUTgbB5Zu8VyZRZ/UtB3ueXxyo764yrc6qWIDqrehm84Xm9g+/jpIBzGPl07NUNJpdt/6Sgf9RIKXw/7XypO5yZfUcuFNGTxLfqjTNrtgLZNcjfav6sSdVXVcMPL//XNuRdKmVFaO76eV/oGMQGr1fGcCD+N+CpI7+Q+fCNB6VFWG4nZFuI/Iuw== averdagu@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  8 05:07:17 np0005475493 python3[2616]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDq8l27xI+QlQVdS4djp9ogSoyrNE2+Ox6vKPdhSNL1J3PE5w+WCSvMz9A5gnNuH810zwbekEApbxTze/gLQJwBHA52CChfURpXrFaxY7ePXRElwKAL3mJfzBWY/c5jnNL9TCVmFJTGZkFZP3Nh+BMgZvL6xBkt3WKm6Uq18qzd9XeKcZusrA+O+uLv1fVeQnadY9RIqOCyeFYCzLWrUfTyE8x/XG0hAWIM7qpnF2cALQS2h9n4hW5ybiUN790H08wf9hFwEf5nxY9Z9dVkPFQiTSGKNBzmnCXU9skxS/xhpFjJ5duGSZdtAHe9O+nGZm9c67hxgtf8e5PDuqAdXEv2cf6e3VBAt+Bz8EKI3yosTj0oZHfwr42Yzb1l/SKy14Rggsrc9KAQlrGXan6+u2jcQqqx7l+SWmnpFiWTV9u5cWj2IgOhApOitmRBPYqk9rE2usfO0hLn/Pj/R/Nau4803e1/EikdLE7Ps95s9mX5jRDjAoUa2JwFF5RsVFyL910= ashigupt@ashigupt.remote.csb manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  8 05:07:18 np0005475493 python3[2640]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOKLl0NYKwoZ/JY5KeZU8VwRAggeOxqQJeoqp3dsAaY9 manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  8 05:07:18 np0005475493 python3[2664]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIASASQOH2BcOyLKuuDOdWZlPi2orcjcA8q4400T73DLH evallesp@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  8 05:07:18 np0005475493 python3[2688]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILeBWlamUph+jRKV2qrx1PGU7vWuGIt5+z9k96I8WehW amsinha@amsinha-mac manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  8 05:07:18 np0005475493 python3[2712]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIANvVgvJBlK3gb1yz5uef/JqIGq4HLEmY2dYA8e37swb morenod@redhat-laptop manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  8 05:07:19 np0005475493 python3[2736]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDZdI7t1cxYx65heVI24HTV4F7oQLW1zyfxHreL2TIJKxjyrUUKIFEUmTutcBlJRLNT2Eoix6x1sOw9YrchloCLcn//SGfTElr9mSc5jbjb7QXEU+zJMhtxyEJ1Po3CUGnj7ckiIXw7wcawZtrEOAQ9pH3ExYCJcEMiyNjRQZCxT3tPK+S4B95EWh5Fsrz9CkwpjNRPPH7LigCeQTM3Wc7r97utAslBUUvYceDSLA7rMgkitJE38b7rZBeYzsGQ8YYUBjTCtehqQXxCRjizbHWaaZkBU+N3zkKB6n/iCNGIO690NK7A/qb6msTijiz1PeuM8ThOsi9qXnbX5v0PoTpcFSojV7NHAQ71f0XXuS43FhZctT+Dcx44dT8Fb5vJu2cJGrk+qF8ZgJYNpRS7gPg0EG2EqjK7JMf9ULdjSu0r+KlqIAyLvtzT4eOnQipoKlb/WG5D/0ohKv7OMQ352ggfkBFIQsRXyyTCT98Ft9juqPuahi3CAQmP4H9dyE+7+Kz437PEtsxLmfm6naNmWi7Ee1DqWPwS8rEajsm4sNM4wW9gdBboJQtc0uZw0DfLj1I9r3Mc8Ol0jYtz0yNQDSzVLrGCaJlC311trU70tZ+ZkAVV6Mn8lOhSbj1cK0lvSr6ZK4dgqGl3I1eTZJJhbLNdg7UOVaiRx9543+C/p/As7w== brjackma@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  8 05:07:19 np0005475493 python3[2761]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKwedoZ0TWPJX/z/4TAbO/kKcDZOQVgRH0hAqrL5UCI1 vcastell@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  8 05:07:19 np0005475493 python3[2785]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEmv8sE8GCk6ZTPIqF0FQrttBdL3mq7rCm/IJy0xDFh7 michburk@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  8 05:07:20 np0005475493 python3[2809]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICy6GpGEtwevXEEn4mmLR5lmSLe23dGgAvzkB9DMNbkf rsafrono@rsafrono manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  8 05:07:22 np0005475493 python3[2835]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Oct  8 05:07:23 np0005475493 systemd[1]: Starting Time & Date Service...
Oct  8 05:07:23 np0005475493 systemd[1]: Started Time & Date Service.
Oct  8 05:07:23 np0005475493 systemd-timedated[2837]: Changed time zone to 'UTC' (UTC).
Oct  8 05:07:24 np0005475493 python3[2866]: ansible-file Invoked with path=/etc/nodepool state=directory mode=511 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:07:25 np0005475493 python3[2942]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  8 05:07:25 np0005475493 python3[3013]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes src=/home/zuul/.ansible/tmp/ansible-tmp-1759914444.7618375-251-214979393493625/source _original_basename=tmp8stxha5e follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:07:25 np0005475493 python3[3113]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  8 05:07:26 np0005475493 python3[3184]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes_private src=/home/zuul/.ansible/tmp/ansible-tmp-1759914445.6647723-301-271847227165989/source _original_basename=tmp0njx4hg5 follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:07:27 np0005475493 python3[3286]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/node_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  8 05:07:27 np0005475493 python3[3359]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/node_private src=/home/zuul/.ansible/tmp/ansible-tmp-1759914446.9575613-381-66282470237247/source _original_basename=tmpksyt5gjv follow=False checksum=332c94ac911d053598365a4ff7b72c4143f36dd6 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:07:28 np0005475493 python3[3407]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa /etc/nodepool/id_rsa zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  8 05:07:28 np0005475493 python3[3433]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa.pub /etc/nodepool/id_rsa.pub zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  8 05:07:29 np0005475493 python3[3513]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/zuul-sudo-grep follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  8 05:07:29 np0005475493 python3[3586]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/zuul-sudo-grep mode=288 src=/home/zuul/.ansible/tmp/ansible-tmp-1759914448.7362556-451-201566924643088/source _original_basename=tmpumu_mpiu follow=False checksum=bdca1a77493d00fb51567671791f4aa30f66c2f0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:07:30 np0005475493 python3[3637]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/visudo -c zuul_log_id=fa163ec2-ffbe-8cbd-24c0-00000000001f-1-compute0 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  8 05:07:30 np0005475493 python3[3665]: ansible-ansible.legacy.command Invoked with executable=/bin/bash _raw_params=env#012 _uses_shell=True zuul_log_id=fa163ec2-ffbe-8cbd-24c0-000000000020-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None creates=None removes=None stdin=None
Oct  8 05:07:32 np0005475493 python3[3693]: ansible-file Invoked with path=/home/zuul/workspace state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:07:49 np0005475493 python3[3719]: ansible-ansible.builtin.file Invoked with path=/etc/ci/env state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:07:54 np0005475493 systemd[1]: systemd-timedated.service: Deactivated successfully.
Oct  8 05:08:28 np0005475493 kernel: pci 0000:00:07.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Oct  8 05:08:28 np0005475493 kernel: pci 0000:00:07.0: BAR 0 [io  0x0000-0x003f]
Oct  8 05:08:28 np0005475493 kernel: pci 0000:00:07.0: BAR 1 [mem 0x00000000-0x00000fff]
Oct  8 05:08:28 np0005475493 kernel: pci 0000:00:07.0: BAR 4 [mem 0x00000000-0x00003fff 64bit pref]
Oct  8 05:08:28 np0005475493 kernel: pci 0000:00:07.0: ROM [mem 0x00000000-0x0007ffff pref]
Oct  8 05:08:28 np0005475493 kernel: pci 0000:00:07.0: ROM [mem 0xc0000000-0xc007ffff pref]: assigned
Oct  8 05:08:28 np0005475493 kernel: pci 0000:00:07.0: BAR 4 [mem 0x240000000-0x240003fff 64bit pref]: assigned
Oct  8 05:08:28 np0005475493 kernel: pci 0000:00:07.0: BAR 1 [mem 0xc0080000-0xc0080fff]: assigned
Oct  8 05:08:28 np0005475493 kernel: pci 0000:00:07.0: BAR 0 [io  0x1000-0x103f]: assigned
Oct  8 05:08:28 np0005475493 kernel: virtio-pci 0000:00:07.0: enabling device (0000 -> 0003)
Oct  8 05:08:28 np0005475493 NetworkManager[858]: <info>  [1759914508.9499] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Oct  8 05:08:28 np0005475493 systemd-udevd[3722]: Network interface NamePolicy= disabled on kernel command line.
Oct  8 05:08:28 np0005475493 NetworkManager[858]: <info>  [1759914508.9745] device (eth1): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  8 05:08:28 np0005475493 NetworkManager[858]: <info>  [1759914508.9769] settings: (eth1): created default wired connection 'Wired connection 1'
Oct  8 05:08:28 np0005475493 NetworkManager[858]: <info>  [1759914508.9773] device (eth1): carrier: link connected
Oct  8 05:08:28 np0005475493 NetworkManager[858]: <info>  [1759914508.9774] device (eth1): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Oct  8 05:08:28 np0005475493 NetworkManager[858]: <info>  [1759914508.9779] policy: auto-activating connection 'Wired connection 1' (aa7d912d-605e-338f-afad-61058792d4cf)
Oct  8 05:08:28 np0005475493 NetworkManager[858]: <info>  [1759914508.9783] device (eth1): Activation: starting connection 'Wired connection 1' (aa7d912d-605e-338f-afad-61058792d4cf)
Oct  8 05:08:28 np0005475493 NetworkManager[858]: <info>  [1759914508.9784] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  8 05:08:28 np0005475493 NetworkManager[858]: <info>  [1759914508.9787] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  8 05:08:28 np0005475493 NetworkManager[858]: <info>  [1759914508.9792] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  8 05:08:28 np0005475493 NetworkManager[858]: <info>  [1759914508.9796] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Oct  8 05:08:29 np0005475493 python3[3749]: ansible-ansible.legacy.command Invoked with _raw_params=ip -j link zuul_log_id=fa163ec2-ffbe-9636-9f2e-000000000128-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  8 05:08:39 np0005475493 python3[3830]: ansible-ansible.legacy.stat Invoked with path=/etc/NetworkManager/system-connections/ci-private-network.nmconnection follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  8 05:08:40 np0005475493 python3[3903]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759914519.61463-104-26712893709965/source dest=/etc/NetworkManager/system-connections/ci-private-network.nmconnection mode=0600 owner=root group=root follow=False _original_basename=bootstrap-ci-network-nm-connection.nmconnection.j2 checksum=b44b64f1176e3f41f137901c4d0c65fc49f732d5 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:08:41 np0005475493 python3[3953]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  8 05:08:41 np0005475493 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Oct  8 05:08:41 np0005475493 systemd[1]: Stopped Network Manager Wait Online.
Oct  8 05:08:41 np0005475493 systemd[1]: Stopping Network Manager Wait Online...
Oct  8 05:08:41 np0005475493 systemd[1]: Stopping Network Manager...
Oct  8 05:08:41 np0005475493 NetworkManager[858]: <info>  [1759914521.2358] caught SIGTERM, shutting down normally.
Oct  8 05:08:41 np0005475493 NetworkManager[858]: <info>  [1759914521.2368] dhcp4 (eth0): canceled DHCP transaction
Oct  8 05:08:41 np0005475493 NetworkManager[858]: <info>  [1759914521.2368] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Oct  8 05:08:41 np0005475493 NetworkManager[858]: <info>  [1759914521.2368] dhcp4 (eth0): state changed no lease
Oct  8 05:08:41 np0005475493 NetworkManager[858]: <info>  [1759914521.2370] manager: NetworkManager state is now CONNECTING
Oct  8 05:08:41 np0005475493 NetworkManager[858]: <info>  [1759914521.2560] dhcp4 (eth1): canceled DHCP transaction
Oct  8 05:08:41 np0005475493 NetworkManager[858]: <info>  [1759914521.2561] dhcp4 (eth1): state changed no lease
Oct  8 05:08:41 np0005475493 systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct  8 05:08:41 np0005475493 NetworkManager[858]: <info>  [1759914521.2609] exiting (success)
Oct  8 05:08:41 np0005475493 systemd[1]: Started Network Manager Script Dispatcher Service.
Oct  8 05:08:41 np0005475493 systemd[1]: NetworkManager.service: Deactivated successfully.
Oct  8 05:08:41 np0005475493 systemd[1]: Stopped Network Manager.
Oct  8 05:08:41 np0005475493 systemd[1]: NetworkManager.service: Consumed 2.818s CPU time, 10.0M memory peak.
Oct  8 05:08:41 np0005475493 systemd[1]: Starting Network Manager...
Oct  8 05:08:41 np0005475493 NetworkManager[3964]: <info>  [1759914521.3406] NetworkManager (version 1.54.1-1.el9) is starting... (after a restart, boot:82191aaa-5b9a-46b2-ace7-0656efb209fc)
Oct  8 05:08:41 np0005475493 NetworkManager[3964]: <info>  [1759914521.3409] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Oct  8 05:08:41 np0005475493 NetworkManager[3964]: <info>  [1759914521.3480] manager[0x556130e34070]: monitoring kernel firmware directory '/lib/firmware'.
Oct  8 05:08:41 np0005475493 systemd[1]: Starting Hostname Service...
Oct  8 05:08:41 np0005475493 systemd[1]: Started Hostname Service.
Oct  8 05:08:41 np0005475493 NetworkManager[3964]: <info>  [1759914521.4499] hostname: hostname: using hostnamed
Oct  8 05:08:41 np0005475493 NetworkManager[3964]: <info>  [1759914521.4502] hostname: static hostname changed from (none) to "np0005475493.novalocal"
Oct  8 05:08:41 np0005475493 NetworkManager[3964]: <info>  [1759914521.4507] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Oct  8 05:08:41 np0005475493 NetworkManager[3964]: <info>  [1759914521.4512] manager[0x556130e34070]: rfkill: Wi-Fi hardware radio set enabled
Oct  8 05:08:41 np0005475493 NetworkManager[3964]: <info>  [1759914521.4512] manager[0x556130e34070]: rfkill: WWAN hardware radio set enabled
Oct  8 05:08:41 np0005475493 NetworkManager[3964]: <info>  [1759914521.4539] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Oct  8 05:08:41 np0005475493 NetworkManager[3964]: <info>  [1759914521.4539] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Oct  8 05:08:41 np0005475493 NetworkManager[3964]: <info>  [1759914521.4540] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Oct  8 05:08:41 np0005475493 NetworkManager[3964]: <info>  [1759914521.4541] manager: Networking is enabled by state file
Oct  8 05:08:41 np0005475493 NetworkManager[3964]: <info>  [1759914521.4543] settings: Loaded settings plugin: keyfile (internal)
Oct  8 05:08:41 np0005475493 NetworkManager[3964]: <info>  [1759914521.4547] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Oct  8 05:08:41 np0005475493 NetworkManager[3964]: <info>  [1759914521.4572] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Oct  8 05:08:41 np0005475493 NetworkManager[3964]: <info>  [1759914521.4581] dhcp: init: Using DHCP client 'internal'
Oct  8 05:08:41 np0005475493 NetworkManager[3964]: <info>  [1759914521.4583] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Oct  8 05:08:41 np0005475493 NetworkManager[3964]: <info>  [1759914521.4586] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  8 05:08:41 np0005475493 NetworkManager[3964]: <info>  [1759914521.4590] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Oct  8 05:08:41 np0005475493 NetworkManager[3964]: <info>  [1759914521.4596] device (lo): Activation: starting connection 'lo' (04954bd0-4d1f-4562-9334-15a987bf371b)
Oct  8 05:08:41 np0005475493 NetworkManager[3964]: <info>  [1759914521.4601] device (eth0): carrier: link connected
Oct  8 05:08:41 np0005475493 NetworkManager[3964]: <info>  [1759914521.4604] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Oct  8 05:08:41 np0005475493 NetworkManager[3964]: <info>  [1759914521.4607] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Oct  8 05:08:41 np0005475493 NetworkManager[3964]: <info>  [1759914521.4607] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Oct  8 05:08:41 np0005475493 NetworkManager[3964]: <info>  [1759914521.4611] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Oct  8 05:08:41 np0005475493 NetworkManager[3964]: <info>  [1759914521.4616] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Oct  8 05:08:41 np0005475493 NetworkManager[3964]: <info>  [1759914521.4621] device (eth1): carrier: link connected
Oct  8 05:08:41 np0005475493 NetworkManager[3964]: <info>  [1759914521.4624] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Oct  8 05:08:41 np0005475493 NetworkManager[3964]: <info>  [1759914521.4627] manager: (eth1): assume: will attempt to assume matching connection 'Wired connection 1' (aa7d912d-605e-338f-afad-61058792d4cf) (indicated)
Oct  8 05:08:41 np0005475493 NetworkManager[3964]: <info>  [1759914521.4627] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Oct  8 05:08:41 np0005475493 NetworkManager[3964]: <info>  [1759914521.4631] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Oct  8 05:08:41 np0005475493 NetworkManager[3964]: <info>  [1759914521.4635] device (eth1): Activation: starting connection 'Wired connection 1' (aa7d912d-605e-338f-afad-61058792d4cf)
Oct  8 05:08:41 np0005475493 systemd[1]: Started Network Manager.
Oct  8 05:08:41 np0005475493 NetworkManager[3964]: <info>  [1759914521.4640] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Oct  8 05:08:41 np0005475493 NetworkManager[3964]: <info>  [1759914521.4643] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Oct  8 05:08:41 np0005475493 NetworkManager[3964]: <info>  [1759914521.4645] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Oct  8 05:08:41 np0005475493 NetworkManager[3964]: <info>  [1759914521.4646] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Oct  8 05:08:41 np0005475493 NetworkManager[3964]: <info>  [1759914521.4647] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Oct  8 05:08:41 np0005475493 NetworkManager[3964]: <info>  [1759914521.4649] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Oct  8 05:08:41 np0005475493 NetworkManager[3964]: <info>  [1759914521.4651] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Oct  8 05:08:41 np0005475493 NetworkManager[3964]: <info>  [1759914521.4653] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Oct  8 05:08:41 np0005475493 NetworkManager[3964]: <info>  [1759914521.4655] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Oct  8 05:08:41 np0005475493 NetworkManager[3964]: <info>  [1759914521.4669] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Oct  8 05:08:41 np0005475493 NetworkManager[3964]: <info>  [1759914521.4671] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Oct  8 05:08:41 np0005475493 NetworkManager[3964]: <info>  [1759914521.4684] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Oct  8 05:08:41 np0005475493 NetworkManager[3964]: <info>  [1759914521.4688] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Oct  8 05:08:41 np0005475493 NetworkManager[3964]: <info>  [1759914521.4710] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Oct  8 05:08:41 np0005475493 NetworkManager[3964]: <info>  [1759914521.4712] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Oct  8 05:08:41 np0005475493 NetworkManager[3964]: <info>  [1759914521.4717] device (lo): Activation: successful, device activated.
Oct  8 05:08:41 np0005475493 NetworkManager[3964]: <info>  [1759914521.4725] dhcp4 (eth0): state changed new lease, address=38.102.83.224
Oct  8 05:08:41 np0005475493 NetworkManager[3964]: <info>  [1759914521.4732] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Oct  8 05:08:41 np0005475493 systemd[1]: Starting Network Manager Wait Online...
Oct  8 05:08:41 np0005475493 NetworkManager[3964]: <info>  [1759914521.4900] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Oct  8 05:08:41 np0005475493 NetworkManager[3964]: <info>  [1759914521.4940] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Oct  8 05:08:41 np0005475493 NetworkManager[3964]: <info>  [1759914521.4942] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Oct  8 05:08:41 np0005475493 NetworkManager[3964]: <info>  [1759914521.4947] manager: NetworkManager state is now CONNECTED_SITE
Oct  8 05:08:41 np0005475493 NetworkManager[3964]: <info>  [1759914521.4952] device (eth0): Activation: successful, device activated.
Oct  8 05:08:41 np0005475493 NetworkManager[3964]: <info>  [1759914521.4959] manager: NetworkManager state is now CONNECTED_GLOBAL
Oct  8 05:08:41 np0005475493 python3[4039]: ansible-ansible.legacy.command Invoked with _raw_params=ip route zuul_log_id=fa163ec2-ffbe-9636-9f2e-0000000000bd-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  8 05:08:51 np0005475493 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct  8 05:09:11 np0005475493 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Oct  8 05:09:26 np0005475493 NetworkManager[3964]: <info>  [1759914566.3104] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Oct  8 05:09:26 np0005475493 systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct  8 05:09:26 np0005475493 systemd[1]: Started Network Manager Script Dispatcher Service.
Oct  8 05:09:26 np0005475493 NetworkManager[3964]: <info>  [1759914566.3379] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Oct  8 05:09:26 np0005475493 NetworkManager[3964]: <info>  [1759914566.3382] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Oct  8 05:09:26 np0005475493 NetworkManager[3964]: <info>  [1759914566.3394] device (eth1): Activation: successful, device activated.
Oct  8 05:09:26 np0005475493 NetworkManager[3964]: <info>  [1759914566.3405] manager: startup complete
Oct  8 05:09:26 np0005475493 NetworkManager[3964]: <info>  [1759914566.3408] device (eth1): state change: activated -> failed (reason 'ip-config-unavailable', managed-type: 'full')
Oct  8 05:09:26 np0005475493 NetworkManager[3964]: <warn>  [1759914566.3418] device (eth1): Activation: failed for connection 'Wired connection 1'
Oct  8 05:09:26 np0005475493 NetworkManager[3964]: <info>  [1759914566.3428] device (eth1): state change: failed -> disconnected (reason 'none', managed-type: 'full')
Oct  8 05:09:26 np0005475493 systemd[1]: Finished Network Manager Wait Online.
Oct  8 05:09:26 np0005475493 NetworkManager[3964]: <info>  [1759914566.3541] dhcp4 (eth1): canceled DHCP transaction
Oct  8 05:09:26 np0005475493 NetworkManager[3964]: <info>  [1759914566.3541] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Oct  8 05:09:26 np0005475493 NetworkManager[3964]: <info>  [1759914566.3542] dhcp4 (eth1): state changed no lease
Oct  8 05:09:26 np0005475493 NetworkManager[3964]: <info>  [1759914566.3568] policy: auto-activating connection 'ci-private-network' (f3e90ac0-ed6a-5434-b062-a53261128ad5)
Oct  8 05:09:26 np0005475493 NetworkManager[3964]: <info>  [1759914566.3576] device (eth1): Activation: starting connection 'ci-private-network' (f3e90ac0-ed6a-5434-b062-a53261128ad5)
Oct  8 05:09:26 np0005475493 NetworkManager[3964]: <info>  [1759914566.3577] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  8 05:09:26 np0005475493 NetworkManager[3964]: <info>  [1759914566.3582] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  8 05:09:26 np0005475493 NetworkManager[3964]: <info>  [1759914566.3595] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  8 05:09:26 np0005475493 NetworkManager[3964]: <info>  [1759914566.3607] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  8 05:09:26 np0005475493 NetworkManager[3964]: <info>  [1759914566.4211] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  8 05:09:26 np0005475493 NetworkManager[3964]: <info>  [1759914566.4214] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  8 05:09:26 np0005475493 NetworkManager[3964]: <info>  [1759914566.4222] device (eth1): Activation: successful, device activated.
Oct  8 05:09:28 np0005475493 systemd[1076]: Starting Mark boot as successful...
Oct  8 05:09:28 np0005475493 systemd[1076]: Finished Mark boot as successful.
Oct  8 05:09:36 np0005475493 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct  8 05:09:41 np0005475493 systemd-logind[798]: Session 1 logged out. Waiting for processes to exit.
Oct  8 05:10:45 np0005475493 systemd-logind[798]: New session 3 of user zuul.
Oct  8 05:10:45 np0005475493 systemd[1]: Started Session 3 of User zuul.
Oct  8 05:10:45 np0005475493 python3[4150]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/env/networking-info.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  8 05:10:46 np0005475493 python3[4223]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/env/networking-info.yml owner=root group=root mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759914645.4919796-373-274740485622830/source _original_basename=tmpzy5f1wjf follow=False checksum=12754a60c85d51e037de99da2edf9af2b613c919 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:10:50 np0005475493 systemd[1]: session-3.scope: Deactivated successfully.
Oct  8 05:10:50 np0005475493 systemd-logind[798]: Session 3 logged out. Waiting for processes to exit.
Oct  8 05:10:50 np0005475493 systemd-logind[798]: Removed session 3.
Oct  8 05:12:28 np0005475493 systemd[1076]: Created slice User Background Tasks Slice.
Oct  8 05:12:28 np0005475493 systemd[1076]: Starting Cleanup of User's Temporary Files and Directories...
Oct  8 05:12:28 np0005475493 systemd[1076]: Finished Cleanup of User's Temporary Files and Directories.
Oct  8 05:15:55 np0005475493 systemd[1]: Starting Cleanup of Temporary Directories...
Oct  8 05:15:55 np0005475493 systemd-logind[798]: New session 4 of user zuul.
Oct  8 05:15:55 np0005475493 systemd[1]: Started Session 4 of User zuul.
Oct  8 05:15:55 np0005475493 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Oct  8 05:15:55 np0005475493 systemd[1]: Finished Cleanup of Temporary Directories.
Oct  8 05:15:55 np0005475493 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully.
Oct  8 05:15:56 np0005475493 python3[4289]: ansible-ansible.legacy.command Invoked with _raw_params=lsblk -nd -o MAJ:MIN /dev/vda#012 _uses_shell=True zuul_log_id=fa163ec2-ffbe-1895-3e92-000000001cfa-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  8 05:15:56 np0005475493 python3[4317]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/init.scope state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:15:56 np0005475493 python3[4344]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/machine.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:15:57 np0005475493 python3[4370]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/system.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:15:57 np0005475493 python3[4396]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/user.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:15:57 np0005475493 python3[4422]: ansible-ansible.builtin.lineinfile Invoked with path=/etc/systemd/system.conf regexp=^#DefaultIOAccounting=no line=DefaultIOAccounting=yes state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:15:57 np0005475493 python3[4422]: ansible-ansible.builtin.lineinfile [WARNING] Module remote_tmp /root/.ansible/tmp did not exist and was created with a mode of 0700, this may cause issues when running as another user. To avoid this, create the remote_tmp dir with the correct permissions manually
Oct  8 05:15:59 np0005475493 python3[4448]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct  8 05:15:59 np0005475493 systemd[1]: Reloading.
Oct  8 05:15:59 np0005475493 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  8 05:15:59 np0005475493 systemd[1]: Starting dnf makecache...
Oct  8 05:15:59 np0005475493 dnf[4479]: Failed determining last makecache time.
Oct  8 05:16:00 np0005475493 dnf[4479]: CentOS Stream 9 - BaseOS                         24 kB/s | 6.7 kB     00:00
Oct  8 05:16:00 np0005475493 dnf[4479]: CentOS Stream 9 - AppStream                      63 kB/s | 6.8 kB     00:00
Oct  8 05:16:00 np0005475493 dnf[4479]: CentOS Stream 9 - CRB                            75 kB/s | 6.6 kB     00:00
Oct  8 05:16:00 np0005475493 dnf[4479]: CentOS Stream 9 - Extras packages                74 kB/s | 8.0 kB     00:00
Oct  8 05:16:00 np0005475493 python3[4511]: ansible-ansible.builtin.wait_for Invoked with path=/sys/fs/cgroup/system.slice/io.max state=present timeout=30 host=127.0.0.1 connect_timeout=5 delay=0 active_connection_states=['ESTABLISHED', 'FIN_WAIT1', 'FIN_WAIT2', 'SYN_RECV', 'SYN_SENT', 'TIME_WAIT'] sleep=1 port=None search_regex=None exclude_hosts=None msg=None
Oct  8 05:16:01 np0005475493 dnf[4479]: Metadata cache created.
Oct  8 05:16:01 np0005475493 systemd[1]: dnf-makecache.service: Deactivated successfully.
Oct  8 05:16:01 np0005475493 systemd[1]: Finished dnf makecache.
Oct  8 05:16:01 np0005475493 python3[4538]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/init.scope/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  8 05:16:01 np0005475493 python3[4566]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/machine.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  8 05:16:01 np0005475493 python3[4594]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/system.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  8 05:16:02 np0005475493 python3[4622]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/user.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  8 05:16:02 np0005475493 python3[4649]: ansible-ansible.legacy.command Invoked with _raw_params=echo "init";    cat /sys/fs/cgroup/init.scope/io.max; echo "machine"; cat /sys/fs/cgroup/machine.slice/io.max; echo "system";  cat /sys/fs/cgroup/system.slice/io.max; echo "user";    cat /sys/fs/cgroup/user.slice/io.max;#012 _uses_shell=True zuul_log_id=fa163ec2-ffbe-1895-3e92-000000001d00-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  8 05:16:03 np0005475493 python3[4679]: ansible-ansible.builtin.stat Invoked with path=/sys/fs/cgroup/kubepods.slice/io.max follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  8 05:16:05 np0005475493 systemd[1]: session-4.scope: Deactivated successfully.
Oct  8 05:16:05 np0005475493 systemd[1]: session-4.scope: Consumed 3.638s CPU time.
Oct  8 05:16:05 np0005475493 systemd-logind[798]: Session 4 logged out. Waiting for processes to exit.
Oct  8 05:16:05 np0005475493 systemd-logind[798]: Removed session 4.
Oct  8 05:16:07 np0005475493 systemd-logind[798]: New session 5 of user zuul.
Oct  8 05:16:07 np0005475493 systemd[1]: Started Session 5 of User zuul.
Oct  8 05:16:07 np0005475493 python3[4714]: ansible-ansible.legacy.dnf Invoked with name=['podman', 'buildah'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Oct  8 05:16:21 np0005475493 kernel: SELinux:  Converting 365 SID table entries...
Oct  8 05:16:21 np0005475493 kernel: SELinux:  policy capability network_peer_controls=1
Oct  8 05:16:21 np0005475493 kernel: SELinux:  policy capability open_perms=1
Oct  8 05:16:21 np0005475493 kernel: SELinux:  policy capability extended_socket_class=1
Oct  8 05:16:21 np0005475493 kernel: SELinux:  policy capability always_check_network=0
Oct  8 05:16:21 np0005475493 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct  8 05:16:21 np0005475493 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct  8 05:16:21 np0005475493 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct  8 05:16:30 np0005475493 kernel: SELinux:  Converting 365 SID table entries...
Oct  8 05:16:30 np0005475493 kernel: SELinux:  policy capability network_peer_controls=1
Oct  8 05:16:30 np0005475493 kernel: SELinux:  policy capability open_perms=1
Oct  8 05:16:30 np0005475493 kernel: SELinux:  policy capability extended_socket_class=1
Oct  8 05:16:30 np0005475493 kernel: SELinux:  policy capability always_check_network=0
Oct  8 05:16:30 np0005475493 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct  8 05:16:30 np0005475493 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct  8 05:16:30 np0005475493 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct  8 05:16:39 np0005475493 kernel: SELinux:  Converting 365 SID table entries...
Oct  8 05:16:39 np0005475493 kernel: SELinux:  policy capability network_peer_controls=1
Oct  8 05:16:39 np0005475493 kernel: SELinux:  policy capability open_perms=1
Oct  8 05:16:39 np0005475493 kernel: SELinux:  policy capability extended_socket_class=1
Oct  8 05:16:39 np0005475493 kernel: SELinux:  policy capability always_check_network=0
Oct  8 05:16:39 np0005475493 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct  8 05:16:39 np0005475493 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct  8 05:16:39 np0005475493 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct  8 05:16:40 np0005475493 setsebool[4774]: The virt_use_nfs policy boolean was changed to 1 by root
Oct  8 05:16:40 np0005475493 setsebool[4774]: The virt_sandbox_use_all_caps policy boolean was changed to 1 by root
Oct  8 05:16:50 np0005475493 kernel: SELinux:  Converting 368 SID table entries...
Oct  8 05:16:50 np0005475493 kernel: SELinux:  policy capability network_peer_controls=1
Oct  8 05:16:50 np0005475493 kernel: SELinux:  policy capability open_perms=1
Oct  8 05:16:50 np0005475493 kernel: SELinux:  policy capability extended_socket_class=1
Oct  8 05:16:50 np0005475493 kernel: SELinux:  policy capability always_check_network=0
Oct  8 05:16:50 np0005475493 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct  8 05:16:50 np0005475493 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct  8 05:16:50 np0005475493 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct  8 05:17:08 np0005475493 dbus-broker-launch[775]: avc:  op=load_policy lsm=selinux seqno=6 res=1
Oct  8 05:17:08 np0005475493 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct  8 05:17:08 np0005475493 systemd[1]: Starting man-db-cache-update.service...
Oct  8 05:17:08 np0005475493 systemd[1]: Reloading.
Oct  8 05:17:08 np0005475493 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  8 05:17:08 np0005475493 systemd[1]: Queuing reload/restart jobs for marked units…
Oct  8 05:17:09 np0005475493 systemd[1]: Starting PackageKit Daemon...
Oct  8 05:17:09 np0005475493 systemd[1]: Starting Authorization Manager...
Oct  8 05:17:09 np0005475493 polkitd[6524]: Started polkitd version 0.117
Oct  8 05:17:09 np0005475493 systemd[1]: Started Authorization Manager.
Oct  8 05:17:09 np0005475493 systemd[1]: Started PackageKit Daemon.
Oct  8 05:17:19 np0005475493 python3[12623]: ansible-ansible.legacy.command Invoked with _raw_params=echo "openstack-k8s-operators+cirobot"#012 _uses_shell=True zuul_log_id=fa163ec2-ffbe-de16-2c75-00000000000c-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  8 05:17:20 np0005475493 kernel: evm: overlay not supported
Oct  8 05:17:20 np0005475493 systemd[1076]: Starting D-Bus User Message Bus...
Oct  8 05:17:20 np0005475493 dbus-broker-launch[13083]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored
Oct  8 05:17:20 np0005475493 dbus-broker-launch[13083]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored
Oct  8 05:17:20 np0005475493 systemd[1076]: Started D-Bus User Message Bus.
Oct  8 05:17:20 np0005475493 dbus-broker-lau[13083]: Ready
Oct  8 05:17:20 np0005475493 systemd[1076]: selinux: avc:  op=load_policy lsm=selinux seqno=6 res=1
Oct  8 05:17:20 np0005475493 systemd[1076]: Created slice Slice /user.
Oct  8 05:17:20 np0005475493 systemd[1076]: podman-13017.scope: unit configures an IP firewall, but not running as root.
Oct  8 05:17:20 np0005475493 systemd[1076]: (This warning is only shown for the first unit using IP firewalling.)
Oct  8 05:17:20 np0005475493 systemd[1076]: Started podman-13017.scope.
Oct  8 05:17:20 np0005475493 systemd[1076]: Started podman-pause-6c5a7e9b.scope.
Oct  8 05:17:21 np0005475493 systemd-logind[798]: Session 5 logged out. Waiting for processes to exit.
Oct  8 05:17:21 np0005475493 systemd[1]: session-5.scope: Deactivated successfully.
Oct  8 05:17:21 np0005475493 systemd[1]: session-5.scope: Consumed 58.328s CPU time.
Oct  8 05:17:21 np0005475493 systemd-logind[798]: Removed session 5.
Oct  8 05:17:41 np0005475493 systemd-logind[798]: New session 6 of user zuul.
Oct  8 05:17:41 np0005475493 systemd[1]: Started Session 6 of User zuul.
Oct  8 05:17:41 np0005475493 python3[21853]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFow9zNj0F2oq3a4hO/hQaH1lByiJoA0MoTlM589f3ghYSo6Jcv/wEhMSCUcvqB63vjWwEbrK0sbWxkmWWzauzE= zuul@np0005475492.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  8 05:17:42 np0005475493 python3[22080]: ansible-ansible.posix.authorized_key Invoked with user=root key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFow9zNj0F2oq3a4hO/hQaH1lByiJoA0MoTlM589f3ghYSo6Jcv/wEhMSCUcvqB63vjWwEbrK0sbWxkmWWzauzE= zuul@np0005475492.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  8 05:17:43 np0005475493 python3[22545]: ansible-ansible.builtin.user Invoked with name=cloud-admin shell=/bin/bash state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005475493.novalocal update_password=always uid=None group=None groups=None comment=None home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Oct  8 05:17:43 np0005475493 python3[22812]: ansible-ansible.posix.authorized_key Invoked with user=cloud-admin key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFow9zNj0F2oq3a4hO/hQaH1lByiJoA0MoTlM589f3ghYSo6Jcv/wEhMSCUcvqB63vjWwEbrK0sbWxkmWWzauzE= zuul@np0005475492.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  8 05:17:44 np0005475493 python3[23115]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/cloud-admin follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  8 05:17:44 np0005475493 python3[23370]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/cloud-admin mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1759915063.821309-150-172706630808486/source _original_basename=tmph4l_6nk3 follow=False checksum=e7614e5ad3ab06eaae55b8efaa2ed81b63ea5634 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:17:45 np0005475493 python3[23783]: ansible-ansible.builtin.hostname Invoked with name=compute-0 use=systemd
Oct  8 05:17:45 np0005475493 systemd[1]: Starting Hostname Service...
Oct  8 05:17:45 np0005475493 systemd[1]: Started Hostname Service.
Oct  8 05:17:45 np0005475493 systemd-hostnamed[23904]: Changed pretty hostname to 'compute-0'
Oct  8 05:17:45 np0005475493 systemd-hostnamed[23904]: Hostname set to <compute-0> (static)
Oct  8 05:17:45 np0005475493 NetworkManager[3964]: <info>  [1759915065.5703] hostname: static hostname changed from "np0005475493.novalocal" to "compute-0"
Oct  8 05:17:45 np0005475493 systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct  8 05:17:45 np0005475493 systemd[1]: Started Network Manager Script Dispatcher Service.
Oct  8 05:17:46 np0005475493 systemd[1]: session-6.scope: Deactivated successfully.
Oct  8 05:17:46 np0005475493 systemd[1]: session-6.scope: Consumed 2.031s CPU time.
Oct  8 05:17:46 np0005475493 systemd-logind[798]: Session 6 logged out. Waiting for processes to exit.
Oct  8 05:17:46 np0005475493 systemd-logind[798]: Removed session 6.
Oct  8 05:17:52 np0005475493 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct  8 05:17:52 np0005475493 systemd[1]: Finished man-db-cache-update.service.
Oct  8 05:17:52 np0005475493 systemd[1]: man-db-cache-update.service: Consumed 53.006s CPU time.
Oct  8 05:17:52 np0005475493 systemd[1]: run-r332a0f7ba49b44cf913cf9270793d67b.service: Deactivated successfully.
Oct  8 05:17:55 np0005475493 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct  8 05:18:15 np0005475493 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Oct  8 05:21:15 np0005475493 systemd-logind[798]: New session 7 of user zuul.
Oct  8 05:21:15 np0005475493 systemd[1]: Started Session 7 of User zuul.
Oct  8 05:21:16 np0005475493 python3[26650]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  8 05:21:18 np0005475493 python3[26766]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  8 05:21:18 np0005475493 python3[26839]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1759915278.0419018-30577-222677633616895/source mode=0755 _original_basename=delorean.repo follow=False checksum=c02c26d38f431b15f6463fc53c3d93ed5138ff07 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:21:19 np0005475493 python3[26865]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean-antelope-testing.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  8 05:21:19 np0005475493 python3[26938]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1759915278.0419018-30577-222677633616895/source mode=0755 _original_basename=delorean-antelope-testing.repo follow=False checksum=0bdbb813b840548359ae77c28d76ca272ccaf31b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:21:19 np0005475493 python3[26964]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-highavailability.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  8 05:21:20 np0005475493 python3[27037]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1759915278.0419018-30577-222677633616895/source mode=0755 _original_basename=repo-setup-centos-highavailability.repo follow=False checksum=55d0f695fd0d8f47cbc3044ce0dcf5f88862490f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:21:20 np0005475493 python3[27063]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-powertools.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  8 05:21:20 np0005475493 python3[27136]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1759915278.0419018-30577-222677633616895/source mode=0755 _original_basename=repo-setup-centos-powertools.repo follow=False checksum=4b0cf99aa89c5c5be0151545863a7a7568f67568 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:21:20 np0005475493 python3[27162]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-appstream.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  8 05:21:21 np0005475493 python3[27235]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1759915278.0419018-30577-222677633616895/source mode=0755 _original_basename=repo-setup-centos-appstream.repo follow=False checksum=e89244d2503b2996429dda1857290c1e91e393a1 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:21:21 np0005475493 python3[27261]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-baseos.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  8 05:21:21 np0005475493 python3[27334]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1759915278.0419018-30577-222677633616895/source mode=0755 _original_basename=repo-setup-centos-baseos.repo follow=False checksum=36d926db23a40dbfa5c84b5e4d43eac6fa2301d6 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:21:22 np0005475493 python3[27360]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo.md5 follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  8 05:21:22 np0005475493 python3[27433]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1759915278.0419018-30577-222677633616895/source mode=0755 _original_basename=delorean.repo.md5 follow=False checksum=75ca8f9fe9a538824fd094f239c30e8ce8652e8a backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:21:34 np0005475493 python3[27491]: ansible-ansible.legacy.command Invoked with _raw_params=hostname _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  8 05:22:15 np0005475493 systemd[1]: packagekit.service: Deactivated successfully.
Oct  8 05:26:34 np0005475493 systemd[1]: session-7.scope: Deactivated successfully.
Oct  8 05:26:34 np0005475493 systemd-logind[798]: Session 7 logged out. Waiting for processes to exit.
Oct  8 05:26:34 np0005475493 systemd[1]: session-7.scope: Consumed 4.760s CPU time.
Oct  8 05:26:34 np0005475493 systemd-logind[798]: Removed session 7.
Oct  8 05:33:21 np0005475493 systemd-logind[798]: New session 8 of user zuul.
Oct  8 05:33:21 np0005475493 systemd[1]: Started Session 8 of User zuul.
Oct  8 05:33:22 np0005475493 python3.9[27656]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  8 05:33:23 np0005475493 python3.9[27837]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail#012pushd /var/tmp#012curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz#012pushd repo-setup-main#012python3 -m venv ./venv#012PBR_VERSION=0.0.0 ./venv/bin/pip install ./#012./venv/bin/repo-setup current-podified -b antelope#012popd#012rm -rf repo-setup-main#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  8 05:33:31 np0005475493 systemd[1]: session-8.scope: Deactivated successfully.
Oct  8 05:33:31 np0005475493 systemd[1]: session-8.scope: Consumed 7.599s CPU time.
Oct  8 05:33:31 np0005475493 systemd-logind[798]: Session 8 logged out. Waiting for processes to exit.
Oct  8 05:33:31 np0005475493 systemd-logind[798]: Removed session 8.
Oct  8 05:33:47 np0005475493 systemd-logind[798]: New session 9 of user zuul.
Oct  8 05:33:47 np0005475493 systemd[1]: Started Session 9 of User zuul.
Oct  8 05:33:48 np0005475493 python3.9[28051]: ansible-ansible.legacy.ping Invoked with data=pong
Oct  8 05:33:49 np0005475493 python3.9[28225]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  8 05:33:50 np0005475493 python3.9[28377]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  8 05:33:51 np0005475493 python3.9[28530]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  8 05:33:52 np0005475493 python3.9[28682]: ansible-ansible.builtin.file Invoked with mode=755 path=/etc/ansible/facts.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:33:53 np0005475493 python3.9[28834]: ansible-ansible.legacy.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  8 05:33:54 np0005475493 python3.9[28957]: ansible-ansible.legacy.copy Invoked with dest=/etc/ansible/facts.d/bootc.fact mode=755 src=/home/zuul/.ansible/tmp/ansible-tmp-1759916032.9931462-177-150272107467709/.source.fact _original_basename=bootc.fact follow=False checksum=eb4122ce7fc50a38407beb511c4ff8c178005b12 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:33:54 np0005475493 python3.9[29109]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  8 05:33:55 np0005475493 python3.9[29265]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  8 05:33:56 np0005475493 python3.9[29415]: ansible-ansible.builtin.service_facts Invoked
Oct  8 05:34:00 np0005475493 python3.9[29670]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:34:01 np0005475493 python3.9[29820]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  8 05:34:02 np0005475493 python3.9[29974]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  8 05:34:03 np0005475493 python3.9[30132]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  8 05:34:04 np0005475493 python3.9[30216]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  8 05:34:47 np0005475493 systemd[1]: Reloading.
Oct  8 05:34:47 np0005475493 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  8 05:34:48 np0005475493 systemd[1]: Listening on Device-mapper event daemon FIFOs.
Oct  8 05:34:48 np0005475493 systemd[1]: Reloading.
Oct  8 05:34:48 np0005475493 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  8 05:34:48 np0005475493 systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Oct  8 05:34:48 np0005475493 systemd[1]: Finished Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Oct  8 05:34:48 np0005475493 systemd[1]: Reloading.
Oct  8 05:34:48 np0005475493 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  8 05:34:48 np0005475493 systemd[1]: Listening on LVM2 poll daemon socket.
Oct  8 05:34:49 np0005475493 dbus-broker-launch[754]: Noticed file-system modification, trigger reload.
Oct  8 05:34:49 np0005475493 dbus-broker-launch[754]: Noticed file-system modification, trigger reload.
Oct  8 05:34:49 np0005475493 dbus-broker-launch[754]: Noticed file-system modification, trigger reload.
Oct  8 05:35:48 np0005475493 kernel: SELinux:  Converting 2714 SID table entries...
Oct  8 05:35:48 np0005475493 kernel: SELinux:  policy capability network_peer_controls=1
Oct  8 05:35:48 np0005475493 kernel: SELinux:  policy capability open_perms=1
Oct  8 05:35:48 np0005475493 kernel: SELinux:  policy capability extended_socket_class=1
Oct  8 05:35:48 np0005475493 kernel: SELinux:  policy capability always_check_network=0
Oct  8 05:35:48 np0005475493 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct  8 05:35:48 np0005475493 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct  8 05:35:48 np0005475493 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct  8 05:35:48 np0005475493 dbus-broker-launch[775]: avc:  op=load_policy lsm=selinux seqno=8 res=1
Oct  8 05:35:48 np0005475493 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct  8 05:35:48 np0005475493 systemd[1]: Starting man-db-cache-update.service...
Oct  8 05:35:48 np0005475493 systemd[1]: Reloading.
Oct  8 05:35:48 np0005475493 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  8 05:35:49 np0005475493 systemd[1]: Queuing reload/restart jobs for marked units…
Oct  8 05:35:49 np0005475493 systemd[1]: Starting PackageKit Daemon...
Oct  8 05:35:49 np0005475493 systemd[1]: Started PackageKit Daemon.
Oct  8 05:35:49 np0005475493 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct  8 05:35:49 np0005475493 systemd[1]: Finished man-db-cache-update.service.
Oct  8 05:35:49 np0005475493 systemd[1]: man-db-cache-update.service: Consumed 1.187s CPU time.
Oct  8 05:35:49 np0005475493 systemd[1]: run-r8b2586d03d284a82b696869aad06d2e0.service: Deactivated successfully.
Oct  8 05:36:02 np0005475493 python3.9[31723]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  8 05:36:05 np0005475493 python3.9[32004]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Oct  8 05:36:06 np0005475493 python3.9[32156]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Oct  8 05:36:08 np0005475493 python3.9[32310]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:36:11 np0005475493 python3.9[32462]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Oct  8 05:36:14 np0005475493 python3.9[32614]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  8 05:36:19 np0005475493 python3.9[32767]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  8 05:36:19 np0005475493 python3.9[32890]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759916178.975777-639-81543345732600/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=9b1ec9ef1baf0871d11fb19dd2fc6e37ec07cf31 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:36:22 np0005475493 python3.9[33042]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Oct  8 05:36:23 np0005475493 python3.9[33195]: ansible-ansible.builtin.group Invoked with gid=107 name=qemu state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Oct  8 05:36:24 np0005475493 python3.9[33353]: ansible-ansible.builtin.user Invoked with comment=qemu user group=qemu groups=[''] name=qemu shell=/sbin/nologin state=present uid=107 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Oct  8 05:36:24 np0005475493 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  8 05:36:24 np0005475493 python3.9[33514]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Oct  8 05:36:25 np0005475493 python3.9[33667]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Oct  8 05:36:26 np0005475493 python3.9[33825]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Oct  8 05:36:27 np0005475493 python3.9[33977]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  8 05:36:30 np0005475493 python3.9[34130]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  8 05:36:30 np0005475493 python3.9[34282]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  8 05:36:31 np0005475493 python3.9[34405]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759916190.3191996-924-181292877450218/.source.conf follow=False _original_basename=edpm-modprobe.conf.j2 checksum=8021efe01721d8fa8cab46b95c00ec1be6dbb9d0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct  8 05:36:32 np0005475493 python3.9[34557]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  8 05:36:32 np0005475493 systemd[1]: Starting Load Kernel Modules...
Oct  8 05:36:32 np0005475493 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Oct  8 05:36:32 np0005475493 kernel: Bridge firewalling registered
Oct  8 05:36:32 np0005475493 systemd-modules-load[34561]: Inserted module 'br_netfilter'
Oct  8 05:36:32 np0005475493 systemd[1]: Finished Load Kernel Modules.
Oct  8 05:36:33 np0005475493 python3.9[34718]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  8 05:36:34 np0005475493 python3.9[34841]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysctl.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759916193.0597684-993-82201197601429/.source.conf follow=False _original_basename=edpm-sysctl.conf.j2 checksum=2a366439721b855adcfe4d7f152babb68596a007 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct  8 05:36:35 np0005475493 python3.9[34993]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  8 05:36:38 np0005475493 dbus-broker-launch[754]: Noticed file-system modification, trigger reload.
Oct  8 05:36:38 np0005475493 dbus-broker-launch[754]: Noticed file-system modification, trigger reload.
Oct  8 05:36:38 np0005475493 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct  8 05:36:38 np0005475493 systemd[1]: Starting man-db-cache-update.service...
Oct  8 05:36:38 np0005475493 systemd[1]: Reloading.
Oct  8 05:36:38 np0005475493 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  8 05:36:38 np0005475493 systemd[1]: Queuing reload/restart jobs for marked units…
Oct  8 05:36:40 np0005475493 python3.9[37077]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  8 05:36:41 np0005475493 python3.9[38197]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Oct  8 05:36:42 np0005475493 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct  8 05:36:42 np0005475493 systemd[1]: Finished man-db-cache-update.service.
Oct  8 05:36:42 np0005475493 systemd[1]: man-db-cache-update.service: Consumed 4.415s CPU time.
Oct  8 05:36:42 np0005475493 systemd[1]: run-r64aa309c0fe649c490af704593ff1ca8.service: Deactivated successfully.
Oct  8 05:36:42 np0005475493 python3.9[39004]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  8 05:36:43 np0005475493 python3.9[39157]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/tuned-adm profile throughput-performance _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  8 05:36:43 np0005475493 systemd[1]: Starting Dynamic System Tuning Daemon...
Oct  8 05:36:43 np0005475493 systemd[1]: Started Dynamic System Tuning Daemon.
Oct  8 05:36:44 np0005475493 python3.9[39530]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  8 05:36:44 np0005475493 systemd[1]: Stopping Dynamic System Tuning Daemon...
Oct  8 05:36:45 np0005475493 systemd[1]: tuned.service: Deactivated successfully.
Oct  8 05:36:45 np0005475493 systemd[1]: Stopped Dynamic System Tuning Daemon.
Oct  8 05:36:45 np0005475493 systemd[1]: Starting Dynamic System Tuning Daemon...
Oct  8 05:36:45 np0005475493 systemd[1]: Started Dynamic System Tuning Daemon.
Oct  8 05:36:45 np0005475493 python3.9[39691]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Oct  8 05:36:50 np0005475493 python3.9[39843]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  8 05:36:50 np0005475493 systemd[1]: Reloading.
Oct  8 05:36:50 np0005475493 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  8 05:36:50 np0005475493 python3.9[40032]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  8 05:36:51 np0005475493 systemd[1]: Reloading.
Oct  8 05:36:51 np0005475493 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  8 05:36:52 np0005475493 python3.9[40220]: ansible-ansible.legacy.command Invoked with _raw_params=mkswap "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  8 05:36:53 np0005475493 python3.9[40373]: ansible-ansible.legacy.command Invoked with _raw_params=swapon "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  8 05:36:53 np0005475493 kernel: Adding 1048572k swap on /swap.  Priority:-2 extents:1 across:1048572k 
Oct  8 05:36:53 np0005475493 python3.9[40526]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/update-ca-trust _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  8 05:36:55 np0005475493 python3.9[40688]: ansible-ansible.legacy.command Invoked with _raw_params=echo 2 >/sys/kernel/mm/ksm/run _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  8 05:36:57 np0005475493 python3.9[40841]: ansible-ansible.builtin.systemd Invoked with name=systemd-sysctl.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  8 05:36:57 np0005475493 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Oct  8 05:36:57 np0005475493 systemd[1]: Stopped Apply Kernel Variables.
Oct  8 05:36:57 np0005475493 systemd[1]: Stopping Apply Kernel Variables...
Oct  8 05:36:57 np0005475493 systemd[1]: Starting Apply Kernel Variables...
Oct  8 05:36:57 np0005475493 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Oct  8 05:36:57 np0005475493 systemd[1]: Finished Apply Kernel Variables.
Oct  8 05:36:57 np0005475493 systemd[1]: session-9.scope: Deactivated successfully.
Oct  8 05:36:57 np0005475493 systemd-logind[798]: Session 9 logged out. Waiting for processes to exit.
Oct  8 05:36:57 np0005475493 systemd[1]: session-9.scope: Consumed 2min 7.380s CPU time.
Oct  8 05:36:57 np0005475493 systemd-logind[798]: Removed session 9.
Oct  8 05:37:04 np0005475493 systemd-logind[798]: New session 10 of user zuul.
Oct  8 05:37:04 np0005475493 systemd[1]: Started Session 10 of User zuul.
Oct  8 05:37:05 np0005475493 python3.9[41024]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  8 05:37:06 np0005475493 python3.9[41180]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Oct  8 05:37:07 np0005475493 python3.9[41333]: ansible-ansible.builtin.group Invoked with gid=42476 name=openvswitch state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Oct  8 05:37:08 np0005475493 python3.9[41491]: ansible-ansible.builtin.user Invoked with comment=openvswitch user group=openvswitch groups=['hugetlbfs'] name=openvswitch shell=/sbin/nologin state=present uid=42476 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Oct  8 05:37:09 np0005475493 python3.9[41651]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  8 05:37:10 np0005475493 python3.9[41735]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Oct  8 05:37:13 np0005475493 python3.9[41899]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  8 05:37:25 np0005475493 kernel: SELinux:  Converting 2724 SID table entries...
Oct  8 05:37:25 np0005475493 kernel: SELinux:  policy capability network_peer_controls=1
Oct  8 05:37:25 np0005475493 kernel: SELinux:  policy capability open_perms=1
Oct  8 05:37:25 np0005475493 kernel: SELinux:  policy capability extended_socket_class=1
Oct  8 05:37:25 np0005475493 kernel: SELinux:  policy capability always_check_network=0
Oct  8 05:37:25 np0005475493 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct  8 05:37:25 np0005475493 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct  8 05:37:25 np0005475493 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct  8 05:37:25 np0005475493 dbus-broker-launch[775]: avc:  op=load_policy lsm=selinux seqno=9 res=1
Oct  8 05:37:25 np0005475493 systemd[1]: Started daily update of the root trust anchor for DNSSEC.
Oct  8 05:37:26 np0005475493 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct  8 05:37:26 np0005475493 systemd[1]: Starting man-db-cache-update.service...
Oct  8 05:37:26 np0005475493 systemd[1]: Reloading.
Oct  8 05:37:26 np0005475493 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  8 05:37:26 np0005475493 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  8 05:37:26 np0005475493 systemd[1]: Queuing reload/restart jobs for marked units…
Oct  8 05:37:27 np0005475493 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct  8 05:37:27 np0005475493 systemd[1]: Finished man-db-cache-update.service.
Oct  8 05:37:27 np0005475493 systemd[1]: run-r1d590a782f0d4d958d645f584de39c78.service: Deactivated successfully.
Oct  8 05:37:29 np0005475493 python3.9[43001]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct  8 05:37:29 np0005475493 systemd[1]: Reloading.
Oct  8 05:37:29 np0005475493 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  8 05:37:29 np0005475493 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  8 05:37:29 np0005475493 systemd[1]: Starting Open vSwitch Database Unit...
Oct  8 05:37:29 np0005475493 chown[43043]: /usr/bin/chown: cannot access '/run/openvswitch': No such file or directory
Oct  8 05:37:29 np0005475493 ovs-ctl[43048]: /etc/openvswitch/conf.db does not exist ... (warning).
Oct  8 05:37:29 np0005475493 ovs-ctl[43048]: Creating empty database /etc/openvswitch/conf.db [  OK  ]
Oct  8 05:37:29 np0005475493 ovs-ctl[43048]: Starting ovsdb-server [  OK  ]
Oct  8 05:37:29 np0005475493 ovs-vsctl[43097]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=8.5.1
Oct  8 05:37:30 np0005475493 ovs-vsctl[43116]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=3.3.5-115.el9s "external-ids:system-id=\"26869918-b723-425c-a2e1-0d697f3d0fec\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"centos\"" "system-version=\"9\""
Oct  8 05:37:30 np0005475493 ovs-ctl[43048]: Configuring Open vSwitch system IDs [  OK  ]
Oct  8 05:37:30 np0005475493 ovs-vsctl[43122]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Oct  8 05:37:30 np0005475493 ovs-ctl[43048]: Enabling remote OVSDB managers [  OK  ]
Oct  8 05:37:30 np0005475493 systemd[1]: Started Open vSwitch Database Unit.
Oct  8 05:37:30 np0005475493 systemd[1]: Starting Open vSwitch Delete Transient Ports...
Oct  8 05:37:30 np0005475493 systemd[1]: Finished Open vSwitch Delete Transient Ports.
Oct  8 05:37:30 np0005475493 systemd[1]: Starting Open vSwitch Forwarding Unit...
Oct  8 05:37:30 np0005475493 kernel: openvswitch: Open vSwitch switching datapath
Oct  8 05:37:30 np0005475493 ovs-ctl[43166]: Inserting openvswitch module [  OK  ]
Oct  8 05:37:30 np0005475493 ovs-ctl[43135]: Starting ovs-vswitchd [  OK  ]
Oct  8 05:37:30 np0005475493 ovs-vsctl[43184]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Oct  8 05:37:30 np0005475493 ovs-ctl[43135]: Enabling remote OVSDB managers [  OK  ]
Oct  8 05:37:30 np0005475493 systemd[1]: Started Open vSwitch Forwarding Unit.
Oct  8 05:37:30 np0005475493 systemd[1]: Starting Open vSwitch...
Oct  8 05:37:30 np0005475493 systemd[1]: Finished Open vSwitch.
Oct  8 05:37:31 np0005475493 python3.9[43335]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  8 05:37:32 np0005475493 python3.9[43487]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Oct  8 05:37:33 np0005475493 kernel: SELinux:  Converting 2738 SID table entries...
Oct  8 05:37:33 np0005475493 kernel: SELinux:  policy capability network_peer_controls=1
Oct  8 05:37:33 np0005475493 kernel: SELinux:  policy capability open_perms=1
Oct  8 05:37:33 np0005475493 kernel: SELinux:  policy capability extended_socket_class=1
Oct  8 05:37:33 np0005475493 kernel: SELinux:  policy capability always_check_network=0
Oct  8 05:37:33 np0005475493 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct  8 05:37:33 np0005475493 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct  8 05:37:33 np0005475493 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct  8 05:37:35 np0005475493 python3.9[43643]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  8 05:37:36 np0005475493 dbus-broker-launch[775]: avc:  op=load_policy lsm=selinux seqno=10 res=1
Oct  8 05:37:36 np0005475493 python3.9[43801]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  8 05:37:38 np0005475493 python3.9[43954]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  8 05:37:40 np0005475493 python3.9[44241]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Oct  8 05:37:41 np0005475493 python3.9[44391]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  8 05:37:41 np0005475493 python3.9[44545]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  8 05:37:43 np0005475493 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct  8 05:37:43 np0005475493 systemd[1]: Starting man-db-cache-update.service...
Oct  8 05:37:43 np0005475493 systemd[1]: Reloading.
Oct  8 05:37:43 np0005475493 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  8 05:37:43 np0005475493 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  8 05:37:43 np0005475493 systemd[1]: Queuing reload/restart jobs for marked units…
Oct  8 05:37:44 np0005475493 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct  8 05:37:44 np0005475493 systemd[1]: Finished man-db-cache-update.service.
Oct  8 05:37:44 np0005475493 systemd[1]: run-rb05b86f4cf4040ecba4eae06f91c9fc0.service: Deactivated successfully.
Oct  8 05:37:45 np0005475493 python3.9[44862]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  8 05:37:45 np0005475493 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Oct  8 05:37:45 np0005475493 systemd[1]: Stopped Network Manager Wait Online.
Oct  8 05:37:45 np0005475493 systemd[1]: Stopping Network Manager Wait Online...
Oct  8 05:37:45 np0005475493 systemd[1]: Stopping Network Manager...
Oct  8 05:37:45 np0005475493 NetworkManager[3964]: <info>  [1759916265.4880] caught SIGTERM, shutting down normally.
Oct  8 05:37:45 np0005475493 NetworkManager[3964]: <info>  [1759916265.4892] dhcp4 (eth0): canceled DHCP transaction
Oct  8 05:37:45 np0005475493 NetworkManager[3964]: <info>  [1759916265.4892] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Oct  8 05:37:45 np0005475493 NetworkManager[3964]: <info>  [1759916265.4892] dhcp4 (eth0): state changed no lease
Oct  8 05:37:45 np0005475493 NetworkManager[3964]: <info>  [1759916265.4894] manager: NetworkManager state is now CONNECTED_SITE
Oct  8 05:37:45 np0005475493 NetworkManager[3964]: <info>  [1759916265.4952] exiting (success)
Oct  8 05:37:45 np0005475493 systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct  8 05:37:45 np0005475493 systemd[1]: Started Network Manager Script Dispatcher Service.
Oct  8 05:37:45 np0005475493 systemd[1]: NetworkManager.service: Deactivated successfully.
Oct  8 05:37:45 np0005475493 systemd[1]: Stopped Network Manager.
Oct  8 05:37:45 np0005475493 systemd[1]: NetworkManager.service: Consumed 8.594s CPU time, 4.3M memory peak, read 0B from disk, written 15.0K to disk.
Oct  8 05:37:45 np0005475493 systemd[1]: Starting Network Manager...
Oct  8 05:37:45 np0005475493 NetworkManager[44872]: <info>  [1759916265.5412] NetworkManager (version 1.54.1-1.el9) is starting... (after a restart, boot:82191aaa-5b9a-46b2-ace7-0656efb209fc)
Oct  8 05:37:45 np0005475493 NetworkManager[44872]: <info>  [1759916265.5414] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Oct  8 05:37:45 np0005475493 NetworkManager[44872]: <info>  [1759916265.5469] manager[0x5577884d6090]: monitoring kernel firmware directory '/lib/firmware'.
Oct  8 05:37:45 np0005475493 systemd[1]: Starting Hostname Service...
Oct  8 05:37:45 np0005475493 systemd[1]: Started Hostname Service.
Oct  8 05:37:45 np0005475493 NetworkManager[44872]: <info>  [1759916265.6252] hostname: hostname: using hostnamed
Oct  8 05:37:45 np0005475493 NetworkManager[44872]: <info>  [1759916265.6253] hostname: static hostname changed from (none) to "compute-0"
Oct  8 05:37:45 np0005475493 NetworkManager[44872]: <info>  [1759916265.6257] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Oct  8 05:37:45 np0005475493 NetworkManager[44872]: <info>  [1759916265.6262] manager[0x5577884d6090]: rfkill: Wi-Fi hardware radio set enabled
Oct  8 05:37:45 np0005475493 NetworkManager[44872]: <info>  [1759916265.6263] manager[0x5577884d6090]: rfkill: WWAN hardware radio set enabled
Oct  8 05:37:45 np0005475493 NetworkManager[44872]: <info>  [1759916265.6285] Loaded device plugin: NMOvsFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-ovs.so)
Oct  8 05:37:45 np0005475493 NetworkManager[44872]: <info>  [1759916265.6294] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Oct  8 05:37:45 np0005475493 NetworkManager[44872]: <info>  [1759916265.6295] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Oct  8 05:37:45 np0005475493 NetworkManager[44872]: <info>  [1759916265.6295] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Oct  8 05:37:45 np0005475493 NetworkManager[44872]: <info>  [1759916265.6296] manager: Networking is enabled by state file
Oct  8 05:37:45 np0005475493 NetworkManager[44872]: <info>  [1759916265.6298] settings: Loaded settings plugin: keyfile (internal)
Oct  8 05:37:45 np0005475493 NetworkManager[44872]: <info>  [1759916265.6301] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Oct  8 05:37:45 np0005475493 NetworkManager[44872]: <info>  [1759916265.6321] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Oct  8 05:37:45 np0005475493 NetworkManager[44872]: <info>  [1759916265.6329] dhcp: init: Using DHCP client 'internal'
Oct  8 05:37:45 np0005475493 NetworkManager[44872]: <info>  [1759916265.6331] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Oct  8 05:37:45 np0005475493 NetworkManager[44872]: <info>  [1759916265.6335] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  8 05:37:45 np0005475493 NetworkManager[44872]: <info>  [1759916265.6339] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Oct  8 05:37:45 np0005475493 NetworkManager[44872]: <info>  [1759916265.6346] device (lo): Activation: starting connection 'lo' (04954bd0-4d1f-4562-9334-15a987bf371b)
Oct  8 05:37:45 np0005475493 NetworkManager[44872]: <info>  [1759916265.6351] device (eth0): carrier: link connected
Oct  8 05:37:45 np0005475493 NetworkManager[44872]: <info>  [1759916265.6354] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Oct  8 05:37:45 np0005475493 NetworkManager[44872]: <info>  [1759916265.6358] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Oct  8 05:37:45 np0005475493 NetworkManager[44872]: <info>  [1759916265.6358] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Oct  8 05:37:45 np0005475493 NetworkManager[44872]: <info>  [1759916265.6364] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Oct  8 05:37:45 np0005475493 NetworkManager[44872]: <info>  [1759916265.6368] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Oct  8 05:37:45 np0005475493 NetworkManager[44872]: <info>  [1759916265.6373] device (eth1): carrier: link connected
Oct  8 05:37:45 np0005475493 NetworkManager[44872]: <info>  [1759916265.6376] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Oct  8 05:37:45 np0005475493 NetworkManager[44872]: <info>  [1759916265.6380] manager: (eth1): assume: will attempt to assume matching connection 'ci-private-network' (f3e90ac0-ed6a-5434-b062-a53261128ad5) (indicated)
Oct  8 05:37:45 np0005475493 NetworkManager[44872]: <info>  [1759916265.6381] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Oct  8 05:37:45 np0005475493 NetworkManager[44872]: <info>  [1759916265.6385] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Oct  8 05:37:45 np0005475493 NetworkManager[44872]: <info>  [1759916265.6390] device (eth1): Activation: starting connection 'ci-private-network' (f3e90ac0-ed6a-5434-b062-a53261128ad5)
Oct  8 05:37:45 np0005475493 systemd[1]: Started Network Manager.
Oct  8 05:37:45 np0005475493 NetworkManager[44872]: <info>  [1759916265.6401] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Oct  8 05:37:45 np0005475493 NetworkManager[44872]: <info>  [1759916265.6943] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Oct  8 05:37:45 np0005475493 NetworkManager[44872]: <info>  [1759916265.6946] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Oct  8 05:37:45 np0005475493 NetworkManager[44872]: <info>  [1759916265.6948] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Oct  8 05:37:45 np0005475493 NetworkManager[44872]: <info>  [1759916265.6950] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Oct  8 05:37:45 np0005475493 NetworkManager[44872]: <info>  [1759916265.6953] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Oct  8 05:37:45 np0005475493 NetworkManager[44872]: <info>  [1759916265.6955] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Oct  8 05:37:45 np0005475493 NetworkManager[44872]: <info>  [1759916265.6958] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Oct  8 05:37:45 np0005475493 NetworkManager[44872]: <info>  [1759916265.6964] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Oct  8 05:37:45 np0005475493 NetworkManager[44872]: <info>  [1759916265.6970] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Oct  8 05:37:45 np0005475493 NetworkManager[44872]: <info>  [1759916265.6972] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Oct  8 05:37:45 np0005475493 NetworkManager[44872]: <info>  [1759916265.6982] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Oct  8 05:37:45 np0005475493 NetworkManager[44872]: <info>  [1759916265.6994] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Oct  8 05:37:45 np0005475493 NetworkManager[44872]: <info>  [1759916265.7005] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Oct  8 05:37:45 np0005475493 NetworkManager[44872]: <info>  [1759916265.7006] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Oct  8 05:37:45 np0005475493 NetworkManager[44872]: <info>  [1759916265.7011] device (lo): Activation: successful, device activated.
Oct  8 05:37:45 np0005475493 NetworkManager[44872]: <info>  [1759916265.7017] dhcp4 (eth0): state changed new lease, address=38.102.83.224
Oct  8 05:37:45 np0005475493 NetworkManager[44872]: <info>  [1759916265.7021] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Oct  8 05:37:45 np0005475493 systemd[1]: Starting Network Manager Wait Online...
Oct  8 05:37:45 np0005475493 NetworkManager[44872]: <info>  [1759916265.7076] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Oct  8 05:37:45 np0005475493 NetworkManager[44872]: <info>  [1759916265.7083] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Oct  8 05:37:45 np0005475493 NetworkManager[44872]: <info>  [1759916265.7087] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Oct  8 05:37:45 np0005475493 NetworkManager[44872]: <info>  [1759916265.7090] manager: NetworkManager state is now CONNECTED_LOCAL
Oct  8 05:37:45 np0005475493 NetworkManager[44872]: <info>  [1759916265.7095] device (eth1): Activation: successful, device activated.
Oct  8 05:37:45 np0005475493 NetworkManager[44872]: <info>  [1759916265.7105] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Oct  8 05:37:45 np0005475493 NetworkManager[44872]: <info>  [1759916265.7107] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Oct  8 05:37:45 np0005475493 NetworkManager[44872]: <info>  [1759916265.7110] manager: NetworkManager state is now CONNECTED_SITE
Oct  8 05:37:45 np0005475493 NetworkManager[44872]: <info>  [1759916265.7115] device (eth0): Activation: successful, device activated.
Oct  8 05:37:45 np0005475493 NetworkManager[44872]: <info>  [1759916265.7119] manager: NetworkManager state is now CONNECTED_GLOBAL
Oct  8 05:37:45 np0005475493 NetworkManager[44872]: <info>  [1759916265.7124] manager: startup complete
Oct  8 05:37:45 np0005475493 systemd[1]: Finished Network Manager Wait Online.
Oct  8 05:37:46 np0005475493 python3.9[45089]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  8 05:37:51 np0005475493 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct  8 05:37:51 np0005475493 systemd[1]: Starting man-db-cache-update.service...
Oct  8 05:37:51 np0005475493 systemd[1]: Reloading.
Oct  8 05:37:51 np0005475493 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  8 05:37:51 np0005475493 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  8 05:37:51 np0005475493 systemd[1]: Queuing reload/restart jobs for marked units…
Oct  8 05:37:52 np0005475493 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct  8 05:37:52 np0005475493 systemd[1]: Finished man-db-cache-update.service.
Oct  8 05:37:52 np0005475493 systemd[1]: run-rb2ae27d1dfe74f7a9ee49228583760ae.service: Deactivated successfully.
Oct  8 05:37:55 np0005475493 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct  8 05:37:56 np0005475493 python3.9[45552]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  8 05:37:57 np0005475493 python3.9[45704]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=no-auto-default path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=* exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:37:58 np0005475493 python3.9[45858]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:37:59 np0005475493 python3.9[46010]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:38:00 np0005475493 python3.9[46162]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:38:00 np0005475493 python3.9[46314]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:38:02 np0005475493 python3.9[46466]: ansible-ansible.legacy.stat Invoked with path=/etc/dhcp/dhclient-enter-hooks follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  8 05:38:02 np0005475493 python3.9[46590]: ansible-ansible.legacy.copy Invoked with dest=/etc/dhcp/dhclient-enter-hooks mode=0755 src=/home/zuul/.ansible/tmp/ansible-tmp-1759916281.5762358-647-140299354159452/.source _original_basename=.6atzymsv follow=False checksum=f6278a40de79a9841f6ed1fc584538225566990c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:38:03 np0005475493 python3.9[46742]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/os-net-config state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:38:04 np0005475493 python3.9[46894]: ansible-edpm_os_net_config_mappings Invoked with net_config_data_lookup={}
Oct  8 05:38:05 np0005475493 python3.9[47046]: ansible-ansible.builtin.file Invoked with path=/var/lib/edpm-config/scripts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:38:07 np0005475493 python3.9[47473]: ansible-ansible.builtin.slurp Invoked with path=/etc/os-net-config/config.yaml src=/etc/os-net-config/config.yaml
Oct  8 05:38:09 np0005475493 ansible-async_wrapper.py[47648]: Invoked with j124972591745 300 /home/zuul/.ansible/tmp/ansible-tmp-1759916288.202535-845-261026550580536/AnsiballZ_edpm_os_net_config.py _
Oct  8 05:38:09 np0005475493 ansible-async_wrapper.py[47651]: Starting module and watcher
Oct  8 05:38:09 np0005475493 ansible-async_wrapper.py[47651]: Start watching 47652 (300)
Oct  8 05:38:09 np0005475493 ansible-async_wrapper.py[47652]: Start module (47652)
Oct  8 05:38:09 np0005475493 ansible-async_wrapper.py[47648]: Return async_wrapper task started.
Oct  8 05:38:09 np0005475493 python3.9[47653]: ansible-edpm_os_net_config Invoked with cleanup=True config_file=/etc/os-net-config/config.yaml debug=True detailed_exit_codes=True safe_defaults=False use_nmstate=True
Oct  8 05:38:09 np0005475493 kernel: cfg80211: Loading compiled-in X.509 certificates for regulatory database
Oct  8 05:38:09 np0005475493 kernel: Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'
Oct  8 05:38:09 np0005475493 kernel: Loaded X.509 cert 'wens: 61c038651aabdcf94bd0ac7ff06c7248db18c600'
Oct  8 05:38:09 np0005475493 kernel: platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
Oct  8 05:38:09 np0005475493 kernel: cfg80211: failed to load regulatory.db
Oct  8 05:38:10 np0005475493 NetworkManager[44872]: <info>  [1759916290.8136] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=47654 uid=0 result="success"
Oct  8 05:38:10 np0005475493 NetworkManager[44872]: <info>  [1759916290.8157] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=47654 uid=0 result="success"
Oct  8 05:38:10 np0005475493 NetworkManager[44872]: <info>  [1759916290.8642] manager: (br-ex): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/4)
Oct  8 05:38:10 np0005475493 NetworkManager[44872]: <info>  [1759916290.8644] audit: op="connection-add" uuid="d1ef9515-d92f-45d1-94ba-eab87c3ebbc3" name="br-ex-br" pid=47654 uid=0 result="success"
Oct  8 05:38:10 np0005475493 NetworkManager[44872]: <info>  [1759916290.8660] manager: (br-ex): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/5)
Oct  8 05:38:10 np0005475493 NetworkManager[44872]: <info>  [1759916290.8662] audit: op="connection-add" uuid="ee7778aa-9726-4f40-b3e1-89de1d61b1e9" name="br-ex-port" pid=47654 uid=0 result="success"
Oct  8 05:38:10 np0005475493 NetworkManager[44872]: <info>  [1759916290.8673] manager: (eth1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/6)
Oct  8 05:38:10 np0005475493 NetworkManager[44872]: <info>  [1759916290.8675] audit: op="connection-add" uuid="05a658b1-434f-4d26-b5c3-25062d421ffd" name="eth1-port" pid=47654 uid=0 result="success"
Oct  8 05:38:10 np0005475493 NetworkManager[44872]: <info>  [1759916290.8686] manager: (vlan20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/7)
Oct  8 05:38:10 np0005475493 NetworkManager[44872]: <info>  [1759916290.8687] audit: op="connection-add" uuid="0aec10a1-4bab-4b88-b026-e73e6cbe621b" name="vlan20-port" pid=47654 uid=0 result="success"
Oct  8 05:38:10 np0005475493 NetworkManager[44872]: <info>  [1759916290.8698] manager: (vlan21): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/8)
Oct  8 05:38:10 np0005475493 NetworkManager[44872]: <info>  [1759916290.8700] audit: op="connection-add" uuid="ec7377c4-9b96-44a5-b55f-39624ce8ce0f" name="vlan21-port" pid=47654 uid=0 result="success"
Oct  8 05:38:10 np0005475493 NetworkManager[44872]: <info>  [1759916290.8711] manager: (vlan22): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/9)
Oct  8 05:38:10 np0005475493 NetworkManager[44872]: <info>  [1759916290.8713] audit: op="connection-add" uuid="adfe5585-e7bb-479a-a1a4-3f6af82efe8d" name="vlan22-port" pid=47654 uid=0 result="success"
Oct  8 05:38:10 np0005475493 NetworkManager[44872]: <info>  [1759916290.8723] manager: (vlan23): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/10)
Oct  8 05:38:10 np0005475493 NetworkManager[44872]: <info>  [1759916290.8725] audit: op="connection-add" uuid="481b305d-7d8a-4521-b8ec-5eeaa72834b0" name="vlan23-port" pid=47654 uid=0 result="success"
Oct  8 05:38:10 np0005475493 NetworkManager[44872]: <info>  [1759916290.8743] audit: op="connection-update" uuid="5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03" name="System eth0" args="802-3-ethernet.mtu,ipv4.dhcp-client-id,ipv4.dhcp-timeout,connection.autoconnect-priority,connection.timestamp,ipv6.method,ipv6.addr-gen-mode,ipv6.dhcp-timeout" pid=47654 uid=0 result="success"
Oct  8 05:38:10 np0005475493 NetworkManager[44872]: <info>  [1759916290.8759] manager: (br-ex): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/11)
Oct  8 05:38:10 np0005475493 NetworkManager[44872]: <info>  [1759916290.8761] audit: op="connection-add" uuid="2303ad94-5cc0-4641-9983-0a2eee400b01" name="br-ex-if" pid=47654 uid=0 result="success"
Oct  8 05:38:10 np0005475493 NetworkManager[44872]: <info>  [1759916290.8790] audit: op="connection-update" uuid="f3e90ac0-ed6a-5434-b062-a53261128ad5" name="ci-private-network" args="ovs-interface.type,ipv4.method,ipv4.dns,ipv4.routing-rules,ipv4.routes,ipv4.addresses,ipv4.never-default,connection.slave-type,connection.timestamp,connection.controller,connection.master,connection.port-type,ipv6.method,ipv6.dns,ipv6.routing-rules,ipv6.routes,ipv6.addr-gen-mode,ipv6.addresses,ovs-external-ids.data" pid=47654 uid=0 result="success"
Oct  8 05:38:10 np0005475493 NetworkManager[44872]: <info>  [1759916290.8805] manager: (vlan20): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/12)
Oct  8 05:38:10 np0005475493 NetworkManager[44872]: <info>  [1759916290.8807] audit: op="connection-add" uuid="855c26f1-c03b-4b2e-827d-6aebda727c18" name="vlan20-if" pid=47654 uid=0 result="success"
Oct  8 05:38:10 np0005475493 NetworkManager[44872]: <info>  [1759916290.8821] manager: (vlan21): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/13)
Oct  8 05:38:10 np0005475493 NetworkManager[44872]: <info>  [1759916290.8823] audit: op="connection-add" uuid="832b4d99-c665-4b2d-8400-188b1077c45a" name="vlan21-if" pid=47654 uid=0 result="success"
Oct  8 05:38:10 np0005475493 NetworkManager[44872]: <info>  [1759916290.8837] manager: (vlan22): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/14)
Oct  8 05:38:10 np0005475493 NetworkManager[44872]: <info>  [1759916290.8839] audit: op="connection-add" uuid="18043826-267e-49c3-9d2c-5885a3457256" name="vlan22-if" pid=47654 uid=0 result="success"
Oct  8 05:38:10 np0005475493 NetworkManager[44872]: <info>  [1759916290.8854] manager: (vlan23): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/15)
Oct  8 05:38:10 np0005475493 NetworkManager[44872]: <info>  [1759916290.8856] audit: op="connection-add" uuid="dbd7dc91-45d8-4d7a-9896-ebb9c31fadaa" name="vlan23-if" pid=47654 uid=0 result="success"
Oct  8 05:38:10 np0005475493 NetworkManager[44872]: <info>  [1759916290.8866] audit: op="connection-delete" uuid="aa7d912d-605e-338f-afad-61058792d4cf" name="Wired connection 1" pid=47654 uid=0 result="success"
Oct  8 05:38:10 np0005475493 NetworkManager[44872]: <info>  [1759916290.8877] device (br-ex)[Open vSwitch Bridge]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  8 05:38:10 np0005475493 NetworkManager[44872]: <info>  [1759916290.8887] device (br-ex)[Open vSwitch Bridge]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct  8 05:38:10 np0005475493 NetworkManager[44872]: <info>  [1759916290.8892] device (br-ex)[Open vSwitch Bridge]: Activation: starting connection 'br-ex-br' (d1ef9515-d92f-45d1-94ba-eab87c3ebbc3)
Oct  8 05:38:10 np0005475493 NetworkManager[44872]: <info>  [1759916290.8893] audit: op="connection-activate" uuid="d1ef9515-d92f-45d1-94ba-eab87c3ebbc3" name="br-ex-br" pid=47654 uid=0 result="success"
Oct  8 05:38:10 np0005475493 NetworkManager[44872]: <info>  [1759916290.8895] device (br-ex)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  8 05:38:10 np0005475493 NetworkManager[44872]: <info>  [1759916290.8903] device (br-ex)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct  8 05:38:10 np0005475493 NetworkManager[44872]: <info>  [1759916290.8907] device (br-ex)[Open vSwitch Port]: Activation: starting connection 'br-ex-port' (ee7778aa-9726-4f40-b3e1-89de1d61b1e9)
Oct  8 05:38:10 np0005475493 NetworkManager[44872]: <info>  [1759916290.8909] device (eth1)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  8 05:38:10 np0005475493 NetworkManager[44872]: <info>  [1759916290.8915] device (eth1)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct  8 05:38:10 np0005475493 NetworkManager[44872]: <info>  [1759916290.8919] device (eth1)[Open vSwitch Port]: Activation: starting connection 'eth1-port' (05a658b1-434f-4d26-b5c3-25062d421ffd)
Oct  8 05:38:10 np0005475493 NetworkManager[44872]: <info>  [1759916290.8921] device (vlan20)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  8 05:38:10 np0005475493 NetworkManager[44872]: <info>  [1759916290.8927] device (vlan20)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct  8 05:38:10 np0005475493 NetworkManager[44872]: <info>  [1759916290.8931] device (vlan20)[Open vSwitch Port]: Activation: starting connection 'vlan20-port' (0aec10a1-4bab-4b88-b026-e73e6cbe621b)
Oct  8 05:38:10 np0005475493 NetworkManager[44872]: <info>  [1759916290.8933] device (vlan21)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  8 05:38:10 np0005475493 NetworkManager[44872]: <info>  [1759916290.8939] device (vlan21)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct  8 05:38:10 np0005475493 NetworkManager[44872]: <info>  [1759916290.8944] device (vlan21)[Open vSwitch Port]: Activation: starting connection 'vlan21-port' (ec7377c4-9b96-44a5-b55f-39624ce8ce0f)
Oct  8 05:38:10 np0005475493 NetworkManager[44872]: <info>  [1759916290.8946] device (vlan22)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  8 05:38:10 np0005475493 NetworkManager[44872]: <info>  [1759916290.8954] device (vlan22)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct  8 05:38:10 np0005475493 NetworkManager[44872]: <info>  [1759916290.8958] device (vlan22)[Open vSwitch Port]: Activation: starting connection 'vlan22-port' (adfe5585-e7bb-479a-a1a4-3f6af82efe8d)
Oct  8 05:38:10 np0005475493 NetworkManager[44872]: <info>  [1759916290.8960] device (vlan23)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  8 05:38:10 np0005475493 NetworkManager[44872]: <info>  [1759916290.8966] device (vlan23)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct  8 05:38:10 np0005475493 NetworkManager[44872]: <info>  [1759916290.8971] device (vlan23)[Open vSwitch Port]: Activation: starting connection 'vlan23-port' (481b305d-7d8a-4521-b8ec-5eeaa72834b0)
Oct  8 05:38:10 np0005475493 NetworkManager[44872]: <info>  [1759916290.8972] device (br-ex)[Open vSwitch Bridge]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  8 05:38:10 np0005475493 NetworkManager[44872]: <info>  [1759916290.8974] device (br-ex)[Open vSwitch Bridge]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  8 05:38:10 np0005475493 NetworkManager[44872]: <info>  [1759916290.8976] device (br-ex)[Open vSwitch Bridge]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  8 05:38:10 np0005475493 NetworkManager[44872]: <info>  [1759916290.8982] device (br-ex)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  8 05:38:10 np0005475493 NetworkManager[44872]: <info>  [1759916290.8986] device (br-ex)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct  8 05:38:10 np0005475493 NetworkManager[44872]: <info>  [1759916290.8991] device (br-ex)[Open vSwitch Interface]: Activation: starting connection 'br-ex-if' (2303ad94-5cc0-4641-9983-0a2eee400b01)
Oct  8 05:38:10 np0005475493 NetworkManager[44872]: <info>  [1759916290.8992] device (br-ex)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  8 05:38:10 np0005475493 NetworkManager[44872]: <info>  [1759916290.8996] device (br-ex)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  8 05:38:10 np0005475493 NetworkManager[44872]: <info>  [1759916290.8998] device (br-ex)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  8 05:38:10 np0005475493 NetworkManager[44872]: <info>  [1759916290.8999] device (br-ex)[Open vSwitch Port]: Activation: connection 'br-ex-port' attached as port, continuing activation
Oct  8 05:38:10 np0005475493 NetworkManager[44872]: <info>  [1759916290.9001] device (eth1): state change: activated -> deactivating (reason 'new-activation', managed-type: 'full')
Oct  8 05:38:10 np0005475493 NetworkManager[44872]: <info>  [1759916290.9011] device (eth1): disconnecting for new activation request.
Oct  8 05:38:10 np0005475493 NetworkManager[44872]: <info>  [1759916290.9012] device (eth1)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  8 05:38:10 np0005475493 NetworkManager[44872]: <info>  [1759916290.9023] device (eth1)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  8 05:38:10 np0005475493 NetworkManager[44872]: <info>  [1759916290.9025] device (eth1)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  8 05:38:10 np0005475493 NetworkManager[44872]: <info>  [1759916290.9026] device (eth1)[Open vSwitch Port]: Activation: connection 'eth1-port' attached as port, continuing activation
Oct  8 05:38:10 np0005475493 NetworkManager[44872]: <info>  [1759916290.9028] device (vlan20)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  8 05:38:10 np0005475493 NetworkManager[44872]: <info>  [1759916290.9037] device (vlan20)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct  8 05:38:10 np0005475493 NetworkManager[44872]: <info>  [1759916290.9039] device (vlan20)[Open vSwitch Interface]: Activation: starting connection 'vlan20-if' (855c26f1-c03b-4b2e-827d-6aebda727c18)
Oct  8 05:38:10 np0005475493 NetworkManager[44872]: <info>  [1759916290.9040] device (vlan20)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  8 05:38:10 np0005475493 NetworkManager[44872]: <info>  [1759916290.9043] device (vlan20)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  8 05:38:10 np0005475493 NetworkManager[44872]: <info>  [1759916290.9045] device (vlan20)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  8 05:38:10 np0005475493 NetworkManager[44872]: <info>  [1759916290.9046] device (vlan20)[Open vSwitch Port]: Activation: connection 'vlan20-port' attached as port, continuing activation
Oct  8 05:38:10 np0005475493 NetworkManager[44872]: <info>  [1759916290.9048] device (vlan21)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  8 05:38:10 np0005475493 NetworkManager[44872]: <info>  [1759916290.9052] device (vlan21)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct  8 05:38:10 np0005475493 NetworkManager[44872]: <info>  [1759916290.9055] device (vlan21)[Open vSwitch Interface]: Activation: starting connection 'vlan21-if' (832b4d99-c665-4b2d-8400-188b1077c45a)
Oct  8 05:38:10 np0005475493 NetworkManager[44872]: <info>  [1759916290.9056] device (vlan21)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  8 05:38:10 np0005475493 NetworkManager[44872]: <info>  [1759916290.9059] device (vlan21)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  8 05:38:10 np0005475493 NetworkManager[44872]: <info>  [1759916290.9061] device (vlan21)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  8 05:38:10 np0005475493 NetworkManager[44872]: <info>  [1759916290.9062] device (vlan21)[Open vSwitch Port]: Activation: connection 'vlan21-port' attached as port, continuing activation
Oct  8 05:38:10 np0005475493 NetworkManager[44872]: <info>  [1759916290.9065] device (vlan22)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  8 05:38:10 np0005475493 NetworkManager[44872]: <info>  [1759916290.9069] device (vlan22)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct  8 05:38:10 np0005475493 NetworkManager[44872]: <info>  [1759916290.9073] device (vlan22)[Open vSwitch Interface]: Activation: starting connection 'vlan22-if' (18043826-267e-49c3-9d2c-5885a3457256)
Oct  8 05:38:10 np0005475493 NetworkManager[44872]: <info>  [1759916290.9074] device (vlan22)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  8 05:38:10 np0005475493 NetworkManager[44872]: <info>  [1759916290.9077] device (vlan22)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  8 05:38:10 np0005475493 NetworkManager[44872]: <info>  [1759916290.9078] device (vlan22)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  8 05:38:10 np0005475493 NetworkManager[44872]: <info>  [1759916290.9079] device (vlan22)[Open vSwitch Port]: Activation: connection 'vlan22-port' attached as port, continuing activation
Oct  8 05:38:10 np0005475493 NetworkManager[44872]: <info>  [1759916290.9082] device (vlan23)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  8 05:38:10 np0005475493 NetworkManager[44872]: <info>  [1759916290.9087] device (vlan23)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct  8 05:38:10 np0005475493 NetworkManager[44872]: <info>  [1759916290.9091] device (vlan23)[Open vSwitch Interface]: Activation: starting connection 'vlan23-if' (dbd7dc91-45d8-4d7a-9896-ebb9c31fadaa)
Oct  8 05:38:10 np0005475493 NetworkManager[44872]: <info>  [1759916290.9092] device (vlan23)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  8 05:38:10 np0005475493 NetworkManager[44872]: <info>  [1759916290.9094] device (vlan23)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  8 05:38:10 np0005475493 NetworkManager[44872]: <info>  [1759916290.9096] device (vlan23)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  8 05:38:10 np0005475493 NetworkManager[44872]: <info>  [1759916290.9097] device (vlan23)[Open vSwitch Port]: Activation: connection 'vlan23-port' attached as port, continuing activation
Oct  8 05:38:10 np0005475493 NetworkManager[44872]: <info>  [1759916290.9099] device (br-ex)[Open vSwitch Bridge]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  8 05:38:10 np0005475493 NetworkManager[44872]: <info>  [1759916290.9109] audit: op="device-reapply" interface="eth0" ifindex=2 args="802-3-ethernet.mtu,ipv4.dhcp-client-id,ipv4.dhcp-timeout,connection.autoconnect-priority,ipv6.method,ipv6.addr-gen-mode" pid=47654 uid=0 result="success"
Oct  8 05:38:10 np0005475493 NetworkManager[44872]: <info>  [1759916290.9111] device (br-ex)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  8 05:38:10 np0005475493 NetworkManager[44872]: <info>  [1759916290.9115] device (br-ex)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  8 05:38:10 np0005475493 NetworkManager[44872]: <info>  [1759916290.9116] device (br-ex)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  8 05:38:10 np0005475493 NetworkManager[44872]: <info>  [1759916290.9122] device (br-ex)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  8 05:38:10 np0005475493 NetworkManager[44872]: <info>  [1759916290.9126] device (eth1)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  8 05:38:10 np0005475493 NetworkManager[44872]: <info>  [1759916290.9129] device (vlan20)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  8 05:38:10 np0005475493 NetworkManager[44872]: <info>  [1759916290.9132] device (vlan20)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  8 05:38:10 np0005475493 NetworkManager[44872]: <info>  [1759916290.9134] device (vlan20)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  8 05:38:10 np0005475493 NetworkManager[44872]: <info>  [1759916290.9139] device (vlan20)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  8 05:38:10 np0005475493 kernel: ovs-system: entered promiscuous mode
Oct  8 05:38:10 np0005475493 NetworkManager[44872]: <info>  [1759916290.9143] device (vlan21)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  8 05:38:10 np0005475493 NetworkManager[44872]: <info>  [1759916290.9146] device (vlan21)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  8 05:38:10 np0005475493 NetworkManager[44872]: <info>  [1759916290.9148] device (vlan21)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  8 05:38:10 np0005475493 systemd-udevd[47658]: Network interface NamePolicy= disabled on kernel command line.
Oct  8 05:38:10 np0005475493 kernel: Timeout policy base is empty
Oct  8 05:38:10 np0005475493 NetworkManager[44872]: <info>  [1759916290.9177] device (vlan21)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  8 05:38:10 np0005475493 NetworkManager[44872]: <info>  [1759916290.9181] device (vlan22)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  8 05:38:10 np0005475493 NetworkManager[44872]: <info>  [1759916290.9185] device (vlan22)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  8 05:38:10 np0005475493 NetworkManager[44872]: <info>  [1759916290.9186] device (vlan22)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  8 05:38:10 np0005475493 systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct  8 05:38:10 np0005475493 NetworkManager[44872]: <info>  [1759916290.9192] device (vlan22)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  8 05:38:10 np0005475493 NetworkManager[44872]: <info>  [1759916290.9195] device (vlan23)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  8 05:38:10 np0005475493 NetworkManager[44872]: <info>  [1759916290.9198] device (vlan23)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  8 05:38:10 np0005475493 NetworkManager[44872]: <info>  [1759916290.9200] device (vlan23)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  8 05:38:10 np0005475493 NetworkManager[44872]: <info>  [1759916290.9206] device (vlan23)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  8 05:38:10 np0005475493 NetworkManager[44872]: <info>  [1759916290.9212] dhcp4 (eth0): canceled DHCP transaction
Oct  8 05:38:10 np0005475493 NetworkManager[44872]: <info>  [1759916290.9213] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Oct  8 05:38:10 np0005475493 NetworkManager[44872]: <info>  [1759916290.9213] dhcp4 (eth0): state changed no lease
Oct  8 05:38:10 np0005475493 NetworkManager[44872]: <info>  [1759916290.9216] dhcp4 (eth0): activation: beginning transaction (no timeout)
Oct  8 05:38:10 np0005475493 NetworkManager[44872]: <info>  [1759916290.9234] device (br-ex)[Open vSwitch Interface]: Activation: connection 'br-ex-if' attached as port, continuing activation
Oct  8 05:38:10 np0005475493 NetworkManager[44872]: <info>  [1759916290.9238] audit: op="device-reapply" interface="eth1" ifindex=3 pid=47654 uid=0 result="fail" reason="Device is not activated"
Oct  8 05:38:10 np0005475493 NetworkManager[44872]: <info>  [1759916290.9245] device (vlan20)[Open vSwitch Interface]: Activation: connection 'vlan20-if' attached as port, continuing activation
Oct  8 05:38:10 np0005475493 NetworkManager[44872]: <info>  [1759916290.9279] device (vlan21)[Open vSwitch Interface]: Activation: connection 'vlan21-if' attached as port, continuing activation
Oct  8 05:38:10 np0005475493 NetworkManager[44872]: <info>  [1759916290.9284] dhcp4 (eth0): state changed new lease, address=38.102.83.224
Oct  8 05:38:10 np0005475493 NetworkManager[44872]: <info>  [1759916290.9288] device (vlan22)[Open vSwitch Interface]: Activation: connection 'vlan22-if' attached as port, continuing activation
Oct  8 05:38:10 np0005475493 NetworkManager[44872]: <info>  [1759916290.9320] device (vlan23)[Open vSwitch Interface]: Activation: connection 'vlan23-if' attached as port, continuing activation
Oct  8 05:38:10 np0005475493 NetworkManager[44872]: <info>  [1759916290.9327] device (eth1): disconnecting for new activation request.
Oct  8 05:38:10 np0005475493 NetworkManager[44872]: <info>  [1759916290.9328] audit: op="connection-activate" uuid="f3e90ac0-ed6a-5434-b062-a53261128ad5" name="ci-private-network" pid=47654 uid=0 result="success"
Oct  8 05:38:10 np0005475493 systemd[1]: Started Network Manager Script Dispatcher Service.
Oct  8 05:38:10 np0005475493 NetworkManager[44872]: <info>  [1759916290.9343] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Oct  8 05:38:10 np0005475493 NetworkManager[44872]: <info>  [1759916290.9466] device (eth1): Activation: starting connection 'ci-private-network' (f3e90ac0-ed6a-5434-b062-a53261128ad5)
Oct  8 05:38:10 np0005475493 NetworkManager[44872]: <info>  [1759916290.9482] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  8 05:38:10 np0005475493 NetworkManager[44872]: <info>  [1759916290.9486] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  8 05:38:10 np0005475493 NetworkManager[44872]: <info>  [1759916290.9491] device (br-ex)[Open vSwitch Bridge]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  8 05:38:10 np0005475493 NetworkManager[44872]: <info>  [1759916290.9493] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=47654 uid=0 result="success"
Oct  8 05:38:10 np0005475493 NetworkManager[44872]: <info>  [1759916290.9494] device (br-ex)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  8 05:38:10 np0005475493 NetworkManager[44872]: <info>  [1759916290.9496] device (eth1)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  8 05:38:10 np0005475493 NetworkManager[44872]: <info>  [1759916290.9497] device (vlan20)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  8 05:38:10 np0005475493 NetworkManager[44872]: <info>  [1759916290.9499] device (vlan21)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  8 05:38:10 np0005475493 NetworkManager[44872]: <info>  [1759916290.9501] device (vlan22)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  8 05:38:10 np0005475493 NetworkManager[44872]: <info>  [1759916290.9502] device (vlan23)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  8 05:38:10 np0005475493 NetworkManager[44872]: <info>  [1759916290.9506] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  8 05:38:10 np0005475493 NetworkManager[44872]: <info>  [1759916290.9513] device (br-ex)[Open vSwitch Bridge]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  8 05:38:10 np0005475493 NetworkManager[44872]: <info>  [1759916290.9518] device (br-ex)[Open vSwitch Bridge]: Activation: successful, device activated.
Oct  8 05:38:10 np0005475493 NetworkManager[44872]: <info>  [1759916290.9524] device (br-ex)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  8 05:38:10 np0005475493 NetworkManager[44872]: <info>  [1759916290.9530] device (br-ex)[Open vSwitch Port]: Activation: successful, device activated.
Oct  8 05:38:10 np0005475493 NetworkManager[44872]: <info>  [1759916290.9534] device (eth1)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  8 05:38:10 np0005475493 NetworkManager[44872]: <info>  [1759916290.9540] device (eth1)[Open vSwitch Port]: Activation: successful, device activated.
Oct  8 05:38:10 np0005475493 NetworkManager[44872]: <info>  [1759916290.9544] device (vlan20)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  8 05:38:10 np0005475493 NetworkManager[44872]: <info>  [1759916290.9549] device (vlan20)[Open vSwitch Port]: Activation: successful, device activated.
Oct  8 05:38:10 np0005475493 kernel: br-ex: entered promiscuous mode
Oct  8 05:38:10 np0005475493 NetworkManager[44872]: <info>  [1759916290.9554] device (vlan21)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  8 05:38:10 np0005475493 NetworkManager[44872]: <info>  [1759916290.9559] device (vlan21)[Open vSwitch Port]: Activation: successful, device activated.
Oct  8 05:38:10 np0005475493 NetworkManager[44872]: <info>  [1759916290.9564] device (vlan22)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  8 05:38:10 np0005475493 NetworkManager[44872]: <info>  [1759916290.9570] device (vlan22)[Open vSwitch Port]: Activation: successful, device activated.
Oct  8 05:38:10 np0005475493 NetworkManager[44872]: <info>  [1759916290.9574] device (vlan23)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  8 05:38:10 np0005475493 NetworkManager[44872]: <info>  [1759916290.9578] device (vlan23)[Open vSwitch Port]: Activation: successful, device activated.
Oct  8 05:38:10 np0005475493 kernel: virtio_net virtio5 eth1: entered promiscuous mode
Oct  8 05:38:10 np0005475493 NetworkManager[44872]: <info>  [1759916290.9588] device (eth1): Activation: connection 'ci-private-network' attached as port, continuing activation
Oct  8 05:38:10 np0005475493 NetworkManager[44872]: <info>  [1759916290.9592] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  8 05:38:10 np0005475493 NetworkManager[44872]: <info>  [1759916290.9649] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  8 05:38:10 np0005475493 NetworkManager[44872]: <info>  [1759916290.9650] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  8 05:38:10 np0005475493 NetworkManager[44872]: <info>  [1759916290.9656] device (eth1): Activation: successful, device activated.
Oct  8 05:38:10 np0005475493 NetworkManager[44872]: <info>  [1759916290.9669] device (br-ex)[Open vSwitch Interface]: carrier: link connected
Oct  8 05:38:10 np0005475493 kernel: vlan22: entered promiscuous mode
Oct  8 05:38:10 np0005475493 NetworkManager[44872]: <info>  [1759916290.9692] device (br-ex)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  8 05:38:10 np0005475493 systemd-udevd[47660]: Network interface NamePolicy= disabled on kernel command line.
Oct  8 05:38:10 np0005475493 NetworkManager[44872]: <info>  [1759916290.9725] device (br-ex)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  8 05:38:10 np0005475493 NetworkManager[44872]: <info>  [1759916290.9727] device (br-ex)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  8 05:38:10 np0005475493 NetworkManager[44872]: <info>  [1759916290.9731] device (br-ex)[Open vSwitch Interface]: Activation: successful, device activated.
Oct  8 05:38:10 np0005475493 kernel: vlan23: entered promiscuous mode
Oct  8 05:38:10 np0005475493 NetworkManager[44872]: <info>  [1759916290.9810] device (vlan22)[Open vSwitch Interface]: carrier: link connected
Oct  8 05:38:10 np0005475493 kernel: vlan20: entered promiscuous mode
Oct  8 05:38:10 np0005475493 NetworkManager[44872]: <info>  [1759916290.9832] device (vlan22)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  8 05:38:10 np0005475493 systemd-udevd[47767]: Network interface NamePolicy= disabled on kernel command line.
Oct  8 05:38:10 np0005475493 NetworkManager[44872]: <info>  [1759916290.9849] device (vlan22)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  8 05:38:10 np0005475493 NetworkManager[44872]: <info>  [1759916290.9854] device (vlan22)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  8 05:38:10 np0005475493 NetworkManager[44872]: <info>  [1759916290.9866] device (vlan22)[Open vSwitch Interface]: Activation: successful, device activated.
Oct  8 05:38:10 np0005475493 kernel: vlan21: entered promiscuous mode
Oct  8 05:38:10 np0005475493 systemd-udevd[47659]: Network interface NamePolicy= disabled on kernel command line.
Oct  8 05:38:10 np0005475493 NetworkManager[44872]: <info>  [1759916290.9895] device (vlan23)[Open vSwitch Interface]: carrier: link connected
Oct  8 05:38:10 np0005475493 NetworkManager[44872]: <info>  [1759916290.9919] device (vlan23)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  8 05:38:10 np0005475493 NetworkManager[44872]: <info>  [1759916290.9996] device (vlan20)[Open vSwitch Interface]: carrier: link connected
Oct  8 05:38:10 np0005475493 NetworkManager[44872]: <info>  [1759916290.9997] device (vlan23)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  8 05:38:11 np0005475493 NetworkManager[44872]: <info>  [1759916291.0004] device (vlan21)[Open vSwitch Interface]: carrier: link connected
Oct  8 05:38:11 np0005475493 NetworkManager[44872]: <info>  [1759916291.0008] device (vlan23)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  8 05:38:11 np0005475493 NetworkManager[44872]: <info>  [1759916291.0013] device (vlan23)[Open vSwitch Interface]: Activation: successful, device activated.
Oct  8 05:38:11 np0005475493 NetworkManager[44872]: <info>  [1759916291.0044] device (vlan20)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  8 05:38:11 np0005475493 NetworkManager[44872]: <info>  [1759916291.0051] device (vlan21)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  8 05:38:11 np0005475493 NetworkManager[44872]: <info>  [1759916291.0090] device (vlan20)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  8 05:38:11 np0005475493 NetworkManager[44872]: <info>  [1759916291.0092] device (vlan21)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  8 05:38:11 np0005475493 NetworkManager[44872]: <info>  [1759916291.0094] device (vlan20)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  8 05:38:11 np0005475493 NetworkManager[44872]: <info>  [1759916291.0100] device (vlan20)[Open vSwitch Interface]: Activation: successful, device activated.
Oct  8 05:38:11 np0005475493 NetworkManager[44872]: <info>  [1759916291.0105] device (vlan21)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  8 05:38:11 np0005475493 NetworkManager[44872]: <info>  [1759916291.0113] device (vlan21)[Open vSwitch Interface]: Activation: successful, device activated.
Oct  8 05:38:12 np0005475493 NetworkManager[44872]: <info>  [1759916292.1195] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=47654 uid=0 result="success"
Oct  8 05:38:12 np0005475493 NetworkManager[44872]: <info>  [1759916292.2780] checkpoint[0x5577884ab950]: destroy /org/freedesktop/NetworkManager/Checkpoint/1
Oct  8 05:38:12 np0005475493 NetworkManager[44872]: <info>  [1759916292.2782] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=47654 uid=0 result="success"
Oct  8 05:38:12 np0005475493 NetworkManager[44872]: <info>  [1759916292.5729] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=47654 uid=0 result="success"
Oct  8 05:38:12 np0005475493 NetworkManager[44872]: <info>  [1759916292.5738] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=47654 uid=0 result="success"
Oct  8 05:38:12 np0005475493 NetworkManager[44872]: <info>  [1759916292.7873] audit: op="networking-control" arg="global-dns-configuration" pid=47654 uid=0 result="success"
Oct  8 05:38:12 np0005475493 NetworkManager[44872]: <info>  [1759916292.7901] config: signal: SET_VALUES,values,values-intern,global-dns-config (/etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf)
Oct  8 05:38:12 np0005475493 NetworkManager[44872]: <info>  [1759916292.7934] audit: op="networking-control" arg="global-dns-configuration" pid=47654 uid=0 result="success"
Oct  8 05:38:12 np0005475493 NetworkManager[44872]: <info>  [1759916292.7965] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=47654 uid=0 result="success"
Oct  8 05:38:12 np0005475493 NetworkManager[44872]: <info>  [1759916292.9147] checkpoint[0x5577884aba20]: destroy /org/freedesktop/NetworkManager/Checkpoint/2
Oct  8 05:38:12 np0005475493 NetworkManager[44872]: <info>  [1759916292.9152] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=47654 uid=0 result="success"
Oct  8 05:38:12 np0005475493 python3.9[48014]: ansible-ansible.legacy.async_status Invoked with jid=j124972591745.47648 mode=status _async_dir=/root/.ansible_async
Oct  8 05:38:12 np0005475493 ansible-async_wrapper.py[47652]: Module complete (47652)
Oct  8 05:38:14 np0005475493 ansible-async_wrapper.py[47651]: Done in kid B.
Oct  8 05:38:15 np0005475493 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Oct  8 05:38:16 np0005475493 python3.9[48121]: ansible-ansible.legacy.async_status Invoked with jid=j124972591745.47648 mode=status _async_dir=/root/.ansible_async
Oct  8 05:38:17 np0005475493 python3.9[48220]: ansible-ansible.legacy.async_status Invoked with jid=j124972591745.47648 mode=cleanup _async_dir=/root/.ansible_async
Oct  8 05:38:18 np0005475493 python3.9[48372]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  8 05:38:18 np0005475493 python3.9[48495]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/os-net-config.returncode mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759916297.5545144-926-136862017873719/.source.returncode _original_basename=.2k2s838c follow=False checksum=b6589fc6ab0dc82cf12099d1c2d40ab994e8410c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:38:19 np0005475493 python3.9[48647]: ansible-ansible.legacy.stat Invoked with path=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  8 05:38:19 np0005475493 python3.9[48771]: ansible-ansible.legacy.copy Invoked with dest=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759916299.0006137-974-21997374334111/.source.cfg _original_basename=.zjz3s1nd follow=False checksum=f3c5952a9cd4c6c31b314b25eb897168971cc86e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:38:20 np0005475493 python3.9[48923]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  8 05:38:20 np0005475493 systemd[1]: Reloading Network Manager...
Oct  8 05:38:20 np0005475493 NetworkManager[44872]: <info>  [1759916300.9157] audit: op="reload" arg="0" pid=48927 uid=0 result="success"
Oct  8 05:38:20 np0005475493 NetworkManager[44872]: <info>  [1759916300.9163] config: signal: SIGHUP,config-files,values,values-user,no-auto-default (/etc/NetworkManager/NetworkManager.conf, /usr/lib/NetworkManager/conf.d/00-server.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /var/lib/NetworkManager/NetworkManager-intern.conf)
Oct  8 05:38:20 np0005475493 systemd[1]: Reloaded Network Manager.
Oct  8 05:38:21 np0005475493 systemd[1]: session-10.scope: Deactivated successfully.
Oct  8 05:38:21 np0005475493 systemd[1]: session-10.scope: Consumed 47.223s CPU time.
Oct  8 05:38:21 np0005475493 systemd-logind[798]: Session 10 logged out. Waiting for processes to exit.
Oct  8 05:38:21 np0005475493 systemd-logind[798]: Removed session 10.
Oct  8 05:38:27 np0005475493 systemd-logind[798]: New session 11 of user zuul.
Oct  8 05:38:27 np0005475493 systemd[1]: Started Session 11 of User zuul.
Oct  8 05:38:28 np0005475493 python3.9[49111]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  8 05:38:29 np0005475493 python3.9[49266]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  8 05:38:30 np0005475493 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct  8 05:38:31 np0005475493 python3.9[49459]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  8 05:38:31 np0005475493 systemd[1]: session-11.scope: Deactivated successfully.
Oct  8 05:38:31 np0005475493 systemd[1]: session-11.scope: Consumed 2.304s CPU time.
Oct  8 05:38:31 np0005475493 systemd-logind[798]: Session 11 logged out. Waiting for processes to exit.
Oct  8 05:38:31 np0005475493 systemd-logind[798]: Removed session 11.
Oct  8 05:38:36 np0005475493 systemd-logind[798]: New session 12 of user zuul.
Oct  8 05:38:36 np0005475493 systemd[1]: Started Session 12 of User zuul.
Oct  8 05:38:37 np0005475493 python3.9[49641]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  8 05:38:38 np0005475493 python3.9[49795]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  8 05:38:39 np0005475493 python3.9[49952]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  8 05:38:40 np0005475493 python3.9[50036]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  8 05:38:42 np0005475493 python3.9[50190]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  8 05:38:43 np0005475493 python3.9[50385]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:38:44 np0005475493 python3.9[50537]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  8 05:38:44 np0005475493 systemd[1]: var-lib-containers-storage-overlay-metacopy\x2dcheck1012587662-merged.mount: Deactivated successfully.
Oct  8 05:38:44 np0005475493 podman[50538]: 2025-10-08 09:38:44.890967057 +0000 UTC m=+0.045581011 system refresh
Oct  8 05:38:45 np0005475493 python3.9[50700]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  8 05:38:45 np0005475493 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct  8 05:38:46 np0005475493 python3.9[50823]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/networks/podman.json group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759916325.1675496-197-112371516454510/.source.json follow=False _original_basename=podman_network_config.j2 checksum=51cae438ebb1fc11044e40e0585a1b8c3a148f17 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:38:47 np0005475493 python3.9[50975]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  8 05:38:47 np0005475493 python3.9[51098]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759916326.82971-242-185420978338250/.source.conf follow=False _original_basename=registries.conf.j2 checksum=804a0d01b832e60d20f779a331306df708c87b02 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct  8 05:38:48 np0005475493 python3.9[51250]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct  8 05:38:49 np0005475493 python3.9[51402]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct  8 05:38:49 np0005475493 python3.9[51554]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct  8 05:38:50 np0005475493 python3.9[51706]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct  8 05:38:51 np0005475493 python3.9[51858]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  8 05:38:53 np0005475493 python3.9[52011]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  8 05:38:54 np0005475493 python3.9[52165]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  8 05:38:55 np0005475493 python3.9[52317]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  8 05:38:56 np0005475493 python3.9[52469]: ansible-service_facts Invoked
Oct  8 05:38:56 np0005475493 network[52486]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct  8 05:38:56 np0005475493 network[52487]: 'network-scripts' will be removed from distribution in near future.
Oct  8 05:38:56 np0005475493 network[52488]: It is advised to switch to 'NetworkManager' instead for network management.
Oct  8 05:39:03 np0005475493 python3.9[52942]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  8 05:39:06 np0005475493 python3.9[53095]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Oct  8 05:39:07 np0005475493 python3.9[53247]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  8 05:39:08 np0005475493 python3.9[53372]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/chrony.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759916347.4793897-638-67779334586111/.source.conf follow=False _original_basename=chrony.conf.j2 checksum=cfb003e56d02d0d2c65555452eb1a05073fecdad force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:39:09 np0005475493 python3.9[53526]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  8 05:39:10 np0005475493 python3.9[53651]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/sysconfig/chronyd mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759916348.9954884-683-59412474072056/.source follow=False _original_basename=chronyd.sysconfig.j2 checksum=dd196b1ff1f915b23eebc37ec77405b5dd3df76c force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:39:11 np0005475493 python3.9[53805]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:39:13 np0005475493 python3.9[53959]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  8 05:39:14 np0005475493 python3.9[54043]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  8 05:39:16 np0005475493 python3.9[54197]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  8 05:39:17 np0005475493 python3.9[54281]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  8 05:39:17 np0005475493 systemd[1]: Stopping NTP client/server...
Oct  8 05:39:17 np0005475493 chronyd[791]: chronyd exiting
Oct  8 05:39:17 np0005475493 systemd[1]: chronyd.service: Deactivated successfully.
Oct  8 05:39:17 np0005475493 systemd[1]: Stopped NTP client/server.
Oct  8 05:39:17 np0005475493 systemd[1]: Starting NTP client/server...
Oct  8 05:39:17 np0005475493 chronyd[54290]: chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +ASYNCDNS +NTS +SECHASH +IPV6 +DEBUG)
Oct  8 05:39:17 np0005475493 chronyd[54290]: Frequency -32.293 +/- 0.081 ppm read from /var/lib/chrony/drift
Oct  8 05:39:17 np0005475493 chronyd[54290]: Loaded seccomp filter (level 2)
Oct  8 05:39:17 np0005475493 systemd[1]: Started NTP client/server.
Oct  8 05:39:18 np0005475493 systemd[1]: session-12.scope: Deactivated successfully.
Oct  8 05:39:18 np0005475493 systemd[1]: session-12.scope: Consumed 24.121s CPU time.
Oct  8 05:39:18 np0005475493 systemd-logind[798]: Session 12 logged out. Waiting for processes to exit.
Oct  8 05:39:18 np0005475493 systemd-logind[798]: Removed session 12.
Oct  8 05:39:24 np0005475493 systemd-logind[798]: New session 13 of user zuul.
Oct  8 05:39:24 np0005475493 systemd[1]: Started Session 13 of User zuul.
Oct  8 05:39:25 np0005475493 python3.9[54471]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:39:26 np0005475493 python3.9[54623]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  8 05:39:26 np0005475493 python3.9[54746]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/ceph-networks.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759916365.3358562-62-96549051267412/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=729ea8396013e3343245d6e934e0dcef55029ad2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:39:27 np0005475493 systemd[1]: session-13.scope: Deactivated successfully.
Oct  8 05:39:27 np0005475493 systemd[1]: session-13.scope: Consumed 1.687s CPU time.
Oct  8 05:39:27 np0005475493 systemd-logind[798]: Session 13 logged out. Waiting for processes to exit.
Oct  8 05:39:27 np0005475493 systemd-logind[798]: Removed session 13.
Oct  8 05:39:32 np0005475493 systemd-logind[798]: New session 14 of user zuul.
Oct  8 05:39:32 np0005475493 systemd[1]: Started Session 14 of User zuul.
Oct  8 05:39:33 np0005475493 python3.9[54924]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  8 05:39:34 np0005475493 python3.9[55080]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:39:35 np0005475493 python3.9[55255]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  8 05:39:36 np0005475493 python3.9[55378]: ansible-ansible.legacy.copy Invoked with dest=/root/.config/containers/auth.json group=zuul mode=0660 owner=zuul src=/home/zuul/.ansible/tmp/ansible-tmp-1759916374.8710663-83-186075391440293/.source.json _original_basename=.sld731j1 follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:39:37 np0005475493 python3.9[55530]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  8 05:39:37 np0005475493 python3.9[55653]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysconfig/podman_drop_in mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759916376.782785-152-264008644138162/.source _original_basename=.rzdvx5mn follow=False checksum=125299ce8dea7711a76292961206447f0043248b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:39:38 np0005475493 python3.9[55805]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  8 05:39:39 np0005475493 python3.9[55957]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  8 05:39:39 np0005475493 python3.9[56080]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-container-shutdown group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759916378.905387-224-261350587608458/.source _original_basename=edpm-container-shutdown follow=False checksum=632c3792eb3dce4288b33ae7b265b71950d69f13 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct  8 05:39:40 np0005475493 python3.9[56232]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  8 05:39:41 np0005475493 python3.9[56355]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-start-podman-container group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759916379.988106-224-194189057684414/.source _original_basename=edpm-start-podman-container follow=False checksum=b963c569d75a655c0ccae95d9bb4a2a9a4df27d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct  8 05:39:41 np0005475493 python3.9[56507]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:39:42 np0005475493 python3.9[56659]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  8 05:39:42 np0005475493 python3.9[56782]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm-container-shutdown.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759916382.0254693-335-64124465464662/.source.service _original_basename=edpm-container-shutdown-service follow=False checksum=6336835cb0f888670cc99de31e19c8c071444d33 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:39:43 np0005475493 python3.9[56934]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  8 05:39:44 np0005475493 python3.9[57057]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759916383.30562-380-239234759531957/.source.preset _original_basename=91-edpm-container-shutdown-preset follow=False checksum=b275e4375287528cb63464dd32f622c4f142a915 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:39:45 np0005475493 python3.9[57209]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  8 05:39:45 np0005475493 systemd[1]: Reloading.
Oct  8 05:39:45 np0005475493 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  8 05:39:45 np0005475493 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  8 05:39:45 np0005475493 systemd[1]: Reloading.
Oct  8 05:39:46 np0005475493 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  8 05:39:46 np0005475493 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  8 05:39:46 np0005475493 systemd[1]: Starting EDPM Container Shutdown...
Oct  8 05:39:46 np0005475493 systemd[1]: Finished EDPM Container Shutdown.
Oct  8 05:39:46 np0005475493 python3.9[57437]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  8 05:39:47 np0005475493 python3.9[57560]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/netns-placeholder.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759916386.4359822-449-78234202941183/.source.service _original_basename=netns-placeholder-service follow=False checksum=b61b1b5918c20c877b8b226fbf34ff89a082d972 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:39:48 np0005475493 python3.9[57712]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  8 05:39:48 np0005475493 python3.9[57835]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-netns-placeholder.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759916387.6816137-494-77405419769508/.source.preset _original_basename=91-netns-placeholder-preset follow=False checksum=28b7b9aa893525d134a1eeda8a0a48fb25b736b9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:39:49 np0005475493 python3.9[57987]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  8 05:39:49 np0005475493 systemd[1]: Reloading.
Oct  8 05:39:49 np0005475493 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  8 05:39:49 np0005475493 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  8 05:39:49 np0005475493 systemd[1]: Reloading.
Oct  8 05:39:49 np0005475493 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  8 05:39:49 np0005475493 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  8 05:39:49 np0005475493 systemd[1]: Starting Create netns directory...
Oct  8 05:39:49 np0005475493 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct  8 05:39:49 np0005475493 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct  8 05:39:49 np0005475493 systemd[1]: Finished Create netns directory.
Oct  8 05:39:50 np0005475493 python3.9[58213]: ansible-ansible.builtin.service_facts Invoked
Oct  8 05:39:50 np0005475493 network[58230]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct  8 05:39:50 np0005475493 network[58231]: 'network-scripts' will be removed from distribution in near future.
Oct  8 05:39:50 np0005475493 network[58232]: It is advised to switch to 'NetworkManager' instead for network management.
Oct  8 05:39:54 np0005475493 python3.9[58496]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iptables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  8 05:39:54 np0005475493 systemd[1]: Reloading.
Oct  8 05:39:54 np0005475493 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  8 05:39:54 np0005475493 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  8 05:39:54 np0005475493 systemd[1]: Stopping IPv4 firewall with iptables...
Oct  8 05:39:55 np0005475493 iptables.init[58536]: iptables: Setting chains to policy ACCEPT: raw mangle filter nat [  OK  ]
Oct  8 05:39:55 np0005475493 iptables.init[58536]: iptables: Flushing firewall rules: [  OK  ]
Oct  8 05:39:55 np0005475493 systemd[1]: iptables.service: Deactivated successfully.
Oct  8 05:39:55 np0005475493 systemd[1]: Stopped IPv4 firewall with iptables.
Oct  8 05:39:55 np0005475493 python3.9[58733]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ip6tables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  8 05:39:57 np0005475493 python3.9[58887]: ansible-ansible.builtin.systemd Invoked with enabled=True name=nftables state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  8 05:39:57 np0005475493 systemd[1]: Reloading.
Oct  8 05:39:57 np0005475493 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  8 05:39:57 np0005475493 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  8 05:39:57 np0005475493 systemd[1]: Starting Netfilter Tables...
Oct  8 05:39:57 np0005475493 systemd[1]: Finished Netfilter Tables.
Oct  8 05:39:58 np0005475493 python3.9[59079]: ansible-ansible.legacy.command Invoked with _raw_params=nft flush ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  8 05:39:59 np0005475493 python3.9[59232]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  8 05:39:59 np0005475493 python3.9[59357]: ansible-ansible.legacy.copy Invoked with dest=/etc/ssh/sshd_config mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1759916398.9007385-701-11858153407798/.source validate=/usr/sbin/sshd -T -f %s follow=False _original_basename=sshd_config_block.j2 checksum=4729b6ffc5b555fa142bf0b6e6dc15609cb89a22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:40:00 np0005475493 python3.9[59508]: ansible-ansible.builtin.systemd Invoked with name=sshd state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  8 05:40:26 np0005475493 systemd[1]: session-14.scope: Deactivated successfully.
Oct  8 05:40:26 np0005475493 systemd[1]: session-14.scope: Consumed 18.957s CPU time.
Oct  8 05:40:26 np0005475493 systemd-logind[798]: Session 14 logged out. Waiting for processes to exit.
Oct  8 05:40:26 np0005475493 systemd-logind[798]: Removed session 14.
Oct  8 05:40:38 np0005475493 systemd-logind[798]: New session 15 of user zuul.
Oct  8 05:40:38 np0005475493 systemd[1]: Started Session 15 of User zuul.
Oct  8 05:40:39 np0005475493 python3.9[59703]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  8 05:40:41 np0005475493 python3.9[59859]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:40:42 np0005475493 python3.9[60034]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  8 05:40:42 np0005475493 python3.9[60112]: ansible-ansible.legacy.file Invoked with group=zuul mode=0660 owner=zuul dest=/root/.config/containers/auth.json _original_basename=.q5bnxeta recurse=False state=file path=/root/.config/containers/auth.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:40:43 np0005475493 python3.9[60264]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  8 05:40:44 np0005475493 python3.9[60342]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/podman_drop_in _original_basename=.s9oazx91 recurse=False state=file path=/etc/sysconfig/podman_drop_in force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:40:44 np0005475493 python3.9[60494]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  8 05:40:45 np0005475493 python3.9[60646]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  8 05:40:45 np0005475493 python3.9[60724]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  8 05:40:46 np0005475493 python3.9[60876]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  8 05:40:47 np0005475493 python3.9[60954]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  8 05:40:47 np0005475493 python3.9[61106]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:40:48 np0005475493 python3.9[61258]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  8 05:40:49 np0005475493 python3.9[61336]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:40:49 np0005475493 python3.9[61488]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  8 05:40:50 np0005475493 python3.9[61566]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:40:51 np0005475493 python3.9[61718]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  8 05:40:51 np0005475493 systemd[1]: Reloading.
Oct  8 05:40:51 np0005475493 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  8 05:40:51 np0005475493 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  8 05:40:52 np0005475493 python3.9[61908]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  8 05:40:53 np0005475493 python3.9[61986]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:40:53 np0005475493 python3.9[62138]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  8 05:40:54 np0005475493 python3.9[62216]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:40:55 np0005475493 python3.9[62368]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  8 05:40:55 np0005475493 systemd[1]: Reloading.
Oct  8 05:40:55 np0005475493 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  8 05:40:55 np0005475493 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  8 05:40:55 np0005475493 systemd[1]: Starting Create netns directory...
Oct  8 05:40:55 np0005475493 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct  8 05:40:55 np0005475493 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct  8 05:40:55 np0005475493 systemd[1]: Finished Create netns directory.
Oct  8 05:40:56 np0005475493 python3.9[62559]: ansible-ansible.builtin.service_facts Invoked
Oct  8 05:40:56 np0005475493 network[62576]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct  8 05:40:56 np0005475493 network[62577]: 'network-scripts' will be removed from distribution in near future.
Oct  8 05:40:56 np0005475493 network[62578]: It is advised to switch to 'NetworkManager' instead for network management.
Oct  8 05:41:00 np0005475493 python3.9[62841]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  8 05:41:00 np0005475493 python3.9[62919]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/etc/ssh/sshd_config _original_basename=sshd_config_block.j2 recurse=False state=file path=/etc/ssh/sshd_config force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:41:01 np0005475493 python3.9[63071]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:41:02 np0005475493 python3.9[63223]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  8 05:41:03 np0005475493 python3.9[63346]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/sshd-networks.yaml group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759916462.0393167-608-67828803085351/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=0bfc8440fd8f39002ab90252479fb794f51b5ae8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:41:04 np0005475493 python3.9[63498]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Oct  8 05:41:04 np0005475493 systemd[1]: Starting Time & Date Service...
Oct  8 05:41:04 np0005475493 systemd[1]: Started Time & Date Service.
Oct  8 05:41:05 np0005475493 python3.9[63654]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:41:06 np0005475493 python3.9[63806]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  8 05:41:06 np0005475493 python3.9[63929]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759916465.686154-713-67116703037697/.source.yaml follow=False _original_basename=base-rules.yaml.j2 checksum=450456afcafded6d4bdecceec7a02e806eebd8b3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:41:07 np0005475493 python3.9[64081]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  8 05:41:07 np0005475493 python3.9[64204]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759916467.0128467-758-123189238895354/.source.yaml _original_basename=.462ryte_ follow=False checksum=97d170e1550eee4afc0af065b78cda302a97674c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:41:08 np0005475493 python3.9[64356]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  8 05:41:09 np0005475493 python3.9[64479]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/iptables.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759916468.2791557-803-89898809109499/.source.nft _original_basename=iptables.nft follow=False checksum=3e02df08f1f3ab4a513e94056dbd390e3d38fe30 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:41:10 np0005475493 python3.9[64631]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/iptables.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  8 05:41:10 np0005475493 python3.9[64784]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  8 05:41:11 np0005475493 python3[64937]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Oct  8 05:41:12 np0005475493 python3.9[65089]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  8 05:41:13 np0005475493 python3.9[65212]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759916472.1359742-920-84569827939591/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:41:14 np0005475493 python3.9[65364]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  8 05:41:14 np0005475493 python3.9[65487]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759916473.4992251-965-236898675351959/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:41:15 np0005475493 python3.9[65639]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  8 05:41:15 np0005475493 python3.9[65762]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759916474.8748808-1010-4277159368343/.source.nft follow=False _original_basename=flush-chain.j2 checksum=d16337256a56373421842284fe09e4e6c7df417e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:41:16 np0005475493 python3.9[65914]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  8 05:41:17 np0005475493 python3.9[66038]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759916476.2035341-1055-215348333513182/.source.nft follow=False _original_basename=chains.j2 checksum=2079f3b60590a165d1d502e763170876fc8e2984 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:41:18 np0005475493 python3.9[66190]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  8 05:41:19 np0005475493 python3.9[66313]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759916477.8239245-1100-237685167320183/.source.nft follow=False _original_basename=ruleset.j2 checksum=693377dc03e5b6b24713cb537b18b88774724e35 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:41:19 np0005475493 python3.9[66465]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:41:20 np0005475493 python3.9[66617]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  8 05:41:21 np0005475493 python3.9[66776]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
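The two tasks above are the core of the edpm-nftables flow: the rendered fragments are concatenated in a fixed order and syntax-checked with `nft -c -f -` before anything touches the kernel, then the persistent includes are dropped into /etc/sysconfig/nftables.conf. A minimal sketch of the check step, using the fragment paths from this log; the helper only *builds* the pipeline string so it can run unprivileged, and you would pipe its output to `sh` on a real host:

```shell
#!/bin/sh
# Build the same validation pipeline the playbook runs. Fragment order
# matters: chains must be defined before the rules and jumps that refer
# to them.
build_check_cmd() {
    dir=$1; shift
    cmd="cat"
    for f in "$@"; do
        cmd="$cmd $dir/$f"
    done
    # -c makes nft parse and validate the ruleset without committing it
    printf '%s | nft -c -f -\n' "$cmd"
}

build_check_cmd /etc/nftables \
    edpm-chains.nft edpm-flushes.nft edpm-rules.nft \
    edpm-update-jumps.nft edpm-jumps.nft
```

Note that the persisted block in /etc/sysconfig/nftables.conf includes only iptables.nft, edpm-chains.nft, edpm-rules.nft and edpm-jumps.nft: the flush and update-jump fragments are transient, used when re-applying rules on a live system, not at boot.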
Oct  8 05:41:22 np0005475493 python3.9[66929]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:41:23 np0005475493 python3.9[67081]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:41:24 np0005475493 python3.9[67233]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Oct  8 05:41:24 np0005475493 python3.9[67386]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
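The two ansible.posix.mount tasks above mount hugetlbfs instances with explicit page sizes on the directories created just before. A rough shell equivalent (mount only; the module also persists the entries in /etc/fstab). The helper just prints the command, since actually mounting requires root and kernel hugepage support:

```shell
#!/bin/sh
# Print the mount command matching one ansible.posix.mount task:
# fstype=hugetlbfs, src=none, opts=pagesize=<size>.
hugepage_mount_cmd() {
    # $1 = page size (e.g. 1G or 2M), $2 = mount point
    printf 'mount -t hugetlbfs -o pagesize=%s none %s\n' "$1" "$2"
}

hugepage_mount_cmd 1G /dev/hugepages1G
hugepage_mount_cmd 2M /dev/hugepages2M
```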
Oct  8 05:41:25 np0005475493 systemd[1]: session-15.scope: Deactivated successfully.
Oct  8 05:41:25 np0005475493 systemd[1]: session-15.scope: Consumed 30.783s CPU time.
Oct  8 05:41:25 np0005475493 systemd-logind[798]: Session 15 logged out. Waiting for processes to exit.
Oct  8 05:41:25 np0005475493 systemd-logind[798]: Removed session 15.
Oct  8 05:41:27 np0005475493 chronyd[54290]: Selected source 23.133.168.247 (pool.ntp.org)
Oct  8 05:41:30 np0005475493 systemd-logind[798]: New session 16 of user zuul.
Oct  8 05:41:30 np0005475493 systemd[1]: Started Session 16 of User zuul.
Oct  8 05:41:31 np0005475493 python3.9[67567]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Oct  8 05:41:32 np0005475493 python3.9[67719]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  8 05:41:33 np0005475493 python3.9[67871]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  8 05:41:34 np0005475493 python3.9[68023]: ansible-ansible.builtin.blockinfile Invoked with block=compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCYQPNjF86l7L2Hj2/ras4UwWV1W/v43YSKx2wyuHDdieMiPaKbrXfDkjmyzUBERrbiTo1QPGQAMAmA2ykBglPN8r/+0SzTmZFPysM5MwJdoYFoZLOFzs9ldQJxEusbWvZnvF+I9UgftR9Kc0etIrQ6xgLbAtGZNGqj5b2kDFCC3J7RJB10JjuqkZ7faqGp+JLC/txEe9rDOAOpOpa885Sx+ZK+5P8OmEbpqHH3vL1O9we9lyRIs2Y/RpIrncEKyaA84WKimjvp832GDFqVGlFklY8lsH31+AUKXfk65cwhnczZO7DTB1/+0QUWhiy+uUUKLdJ1C3AFfHNBBH0WWHolNsPiYjSaNrUIgxXyRLkGtLeTAtEa9LNniw8KKCXI/jptXVVqyfHGOFIzo11NDDSTeCPpVG2MrjX9vJZknGeShJLavvHzVmc1N/zNpgq0Rr0FEyFZL384e8WgnmTY1lBf7tAPdMyIaNEJgEE4MobwqVDSwMmgWKmKoOeY5jsWNlM=#012compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHzclsFPuApUw4nYRrZrI5lJm2aKty4lBzS+387uCINA#012compute-2.ctlplane.example.com,192.168.122.102,compute-2* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCmuS8ms5fq9IWCpSG062zv6KqUIHSk9g+RlcFiU/nKSB1OMQ56HhCeuGAOEbiyfVsMqC143W9W+Q6X1JDoRkcg=#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCH7J4/vrAjqY7b3+xDoxlOrkvqhtdMtNCRu8feksOJjh2Lg2Yk5a4TpRFHHcUew6Or+BSrCAe5KLIJookdMX3AnHBTeYgFVrph2Ke0jsZhtIDdYFPya4HaYgVScxezyYjpFJsOgHIasA47X1Ai7KtSHamdGUMHvyRPFaMroDQGOH5uNA58Pr0jAvA9/p32JhzVhvFTNhdp5AZuuf53LCOoAJPpvxAfhZJVwv0zpQu1qJ2MQ4F6PjmLmpJe9IFedhTbswP4+A8raCmSvJK/X3zbL6A5C78i72YF0dVlX4E5Jgq2BymgfJXA2vRrB7WzfFXN/KCT+A6KjshRy8vEZTlewfHk3bMt+IjAgRaPsvV2gwOQb0lhzfUX2RkPxHTTunUAUf1PJwBTKah0plZAQoGQce+8MWTqKP842KIoZPO7/LQQZR21apoIRIEt1OtR3pITkULZqmoYaZKqVzPCyoagXj2v0W4E//8slRvaC4n2qfMRwvp2VR0mSv9qwMeqnm0=#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMt6YRNNCvMAUwHQzPKNq18k03sF+qAP+8fg1vdKmMsQ#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBN1LMOBquYaNyOmBNhqWyrm3Ot0C+prylWlOCYwa7IIp3WZH4GHwVhjD6VAwSa/KvI01xKiiJwO/WJ4zgAnMAiM=#012compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCp3Vp6dX4ruCK781x4GIhtAtcJdT75tsPxH3O/YwMPa1JuQj17BT+IZbu0qvi56CLtWm5GwO9cF5N1u+ZpYWIwNbEJlz4q4LeJud7OFwwvwDTdM2fZylZt2dEtwqbmDJUsJxwcLQshtmSxpRR5Z53dCJAMTZiKGF/MiJrVkc7A2PfxMnLH568W9poUGj9jUYetHoRmwKl9hes+OQRljbjUi8gLpseivGxW9IAewXRhJi0ybLNDnQM0iSkdQqaTVD7laQKxpynfO1a0b7U6oyFRdyTqMJqyDKe8Vx+D1esV9oZKn7UEtj+WGUAv3StaLzrk3fjhi4XePCs0Ao1s/B1MPZCcM0Po5BdHAHhf4CbUSRS+oaAS7KaaWkWTKLTKEDWS6DjX6KUR9hUyLQ54IMYu17UP6JclJnH5c9FmUQls07pus/CkhX0IIgOTinLYeOJSdBsKA9JUrnQzXKMAwzjKL18kG8OZ+Yaf7msme1EVikR9ljtRB88k+DtapF5wub8=#012compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBDnMNJEcPeKIHMEAdXUabsWNwdNGhiYyZLatE1eeBqY#012compute-1.ctlplane.example.com,192.168.122.101,compute-1* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLDW7MDD+6+vPlFKWCI8yHUVjDpLwcAatqV8Xhxm53MJMkyP9vCai5lIMwJluZIDUkA83WhSi06EgMc1afHFONA=#012 create=True mode=0644 path=/tmp/ansible.d0hsaq01 state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:41:34 np0005475493 systemd[1]: systemd-timedated.service: Deactivated successfully.
Oct  8 05:41:35 np0005475493 python3.9[68177]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.d0hsaq01' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  8 05:41:36 np0005475493 python3.9[68331]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.d0hsaq01 state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
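The known_hosts sequence above follows a tempfile pattern: blockinfile assembles the gathered host keys in a scratch file, a single `cat` overwrites /etc/ssh/ssh_known_hosts, and the scratch file is removed. A sketch of the same pattern; DEST defaults to a throwaway file here so it runs unprivileged, and the host entry is a placeholder, not a key from this deployment:

```shell
#!/bin/sh
# Assemble a managed block in a tempfile, then replace the target in one
# write. Real target on the hosts above: /etc/ssh/ssh_known_hosts.
DEST="${DEST:-$(mktemp)}"
tmp=$(mktemp /tmp/ansible.XXXXXX)
printf '%s\n' \
    '# BEGIN ANSIBLE MANAGED BLOCK' \
    'host.example.com,192.0.2.10 ssh-ed25519 AAAAC3...placeholder' \
    '# END ANSIBLE MANAGED BLOCK' > "$tmp"
cat "$tmp" > "$DEST"   # overwrite the target in a single shot
rm -f "$tmp"           # matches the final state=absent cleanup task
```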
Oct  8 05:41:36 np0005475493 systemd[1]: session-16.scope: Deactivated successfully.
Oct  8 05:41:36 np0005475493 systemd[1]: session-16.scope: Consumed 3.652s CPU time.
Oct  8 05:41:36 np0005475493 systemd-logind[798]: Session 16 logged out. Waiting for processes to exit.
Oct  8 05:41:36 np0005475493 systemd-logind[798]: Removed session 16.
Oct  8 05:41:41 np0005475493 systemd-logind[798]: New session 17 of user zuul.
Oct  8 05:41:41 np0005475493 systemd[1]: Started Session 17 of User zuul.
Oct  8 05:41:42 np0005475493 python3.9[68509]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  8 05:41:43 np0005475493 python3.9[68665]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Oct  8 05:41:44 np0005475493 python3.9[68819]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  8 05:41:45 np0005475493 python3.9[68972]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  8 05:41:46 np0005475493 python3.9[69125]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  8 05:41:47 np0005475493 python3.9[69279]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  8 05:41:48 np0005475493 python3.9[69434]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:41:48 np0005475493 systemd[1]: session-17.scope: Deactivated successfully.
Oct  8 05:41:48 np0005475493 systemd[1]: session-17.scope: Consumed 4.334s CPU time.
Oct  8 05:41:48 np0005475493 systemd-logind[798]: Session 17 logged out. Waiting for processes to exit.
Oct  8 05:41:48 np0005475493 systemd-logind[798]: Removed session 17.
Oct  8 05:41:53 np0005475493 systemd-logind[798]: New session 18 of user zuul.
Oct  8 05:41:53 np0005475493 systemd[1]: Started Session 18 of User zuul.
Oct  8 05:41:55 np0005475493 python3.9[69613]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  8 05:41:56 np0005475493 python3.9[69769]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  8 05:41:57 np0005475493 python3.9[69853]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Oct  8 05:41:59 np0005475493 python3.9[70004]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  8 05:42:00 np0005475493 python3.9[70155]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct  8 05:42:01 np0005475493 python3.9[70305]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  8 05:42:01 np0005475493 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  8 05:42:02 np0005475493 python3.9[70456]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  8 05:42:02 np0005475493 systemd[1]: session-18.scope: Deactivated successfully.
Oct  8 05:42:02 np0005475493 systemd[1]: session-18.scope: Consumed 5.856s CPU time.
Oct  8 05:42:02 np0005475493 systemd-logind[798]: Session 18 logged out. Waiting for processes to exit.
Oct  8 05:42:02 np0005475493 systemd-logind[798]: Removed session 18.
Oct  8 05:42:11 np0005475493 systemd-logind[798]: New session 19 of user zuul.
Oct  8 05:42:11 np0005475493 systemd[1]: Started Session 19 of User zuul.
Oct  8 05:42:17 np0005475493 python3[71222]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  8 05:42:19 np0005475493 python3[71317]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Oct  8 05:42:20 np0005475493 python3[71344]: ansible-ansible.builtin.stat Invoked with path=/dev/loop3 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  8 05:42:21 np0005475493 python3[71370]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=20G#012losetup /dev/loop3 /var/lib/ceph-osd-0.img#012lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  8 05:42:21 np0005475493 kernel: loop: module loaded
Oct  8 05:42:21 np0005475493 kernel: loop3: detected capacity change from 0 to 41943040
Oct  8 05:42:21 np0005475493 python3[71405]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop3#012vgcreate ceph_vg0 /dev/loop3#012lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0#012lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  8 05:42:21 np0005475493 lvm[71408]: PV /dev/loop3 not used.
Oct  8 05:42:21 np0005475493 lvm[71417]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct  8 05:42:21 np0005475493 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg0.
Oct  8 05:42:21 np0005475493 lvm[71419]:  1 logical volume(s) in volume group "ceph_vg0" now active
Oct  8 05:42:21 np0005475493 systemd[1]: lvm-activate-ceph_vg0.service: Deactivated successfully.
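The loop-backed OSD volume built above, reconstructed as plain shell: `dd ... count=0 seek=20G` merely extends the image file to 20G without writing data (a sparse file, which is why the kernel reports the capacity change instantly), losetup exposes it as a block device, and LVM stacks a PV/VG/LV on top. These commands are destructive and root-only, so this helper only prints the plan; pipe it to `sh` on a disposable host to execute it:

```shell
#!/bin/sh
# Emit the loop-device + LVM setup seen in the log as an executable plan.
osd_loop_plan() {
    img=$1; loop=$2; vg=$3; lv=$4
    cat <<EOF
dd if=/dev/zero of=$img bs=1 count=0 seek=20G
losetup $loop $img
pvcreate $loop
vgcreate $vg $loop
lvcreate -n $lv -l +100%FREE $vg
EOF
}

osd_loop_plan /var/lib/ceph-osd-0.img /dev/loop3 ceph_vg0 ceph_lv0
```

Because the loop attachment does not survive a reboot, the playbook also installs ceph-osd-losetup-0.service (seen below) to re-run losetup at boot before LVM autoactivation finds the VG.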
Oct  8 05:42:22 np0005475493 python3[71497]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-0.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  8 05:42:22 np0005475493 python3[71570]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759916542.1615882-33332-68038896034014/source dest=/etc/systemd/system/ceph-osd-losetup-0.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=427b1db064a970126b729b07acf99fa7d0eecb9c backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:42:23 np0005475493 python3[71620]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-0.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  8 05:42:23 np0005475493 systemd[1]: Reloading.
Oct  8 05:42:23 np0005475493 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  8 05:42:23 np0005475493 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  8 05:42:24 np0005475493 systemd[1]: Starting Ceph OSD losetup...
Oct  8 05:42:24 np0005475493 bash[71661]: /dev/loop3: [64513]:4349020 (/var/lib/ceph-osd-0.img)
Oct  8 05:42:24 np0005475493 systemd[1]: Finished Ceph OSD losetup.
Oct  8 05:42:24 np0005475493 lvm[71662]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct  8 05:42:24 np0005475493 lvm[71662]: VG ceph_vg0 finished
Oct  8 05:42:26 np0005475493 python3[71686]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  8 05:42:29 np0005475493 python3[71779]: ansible-ansible.legacy.dnf Invoked with name=['centos-release-ceph-squid'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Oct  8 05:42:31 np0005475493 python3[71836]: ansible-ansible.legacy.dnf Invoked with name=['cephadm'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Oct  8 05:42:35 np0005475493 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct  8 05:42:35 np0005475493 systemd[1]: Starting man-db-cache-update.service...
Oct  8 05:42:35 np0005475493 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct  8 05:42:35 np0005475493 systemd[1]: Finished man-db-cache-update.service.
Oct  8 05:42:35 np0005475493 systemd[1]: run-r314bd02ad92941de879a0d133d4ada9f.service: Deactivated successfully.
Oct  8 05:42:35 np0005475493 python3[71955]: ansible-ansible.builtin.stat Invoked with path=/usr/sbin/cephadm follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  8 05:42:36 np0005475493 python3[71983]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm ls --no-detail _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  8 05:42:36 np0005475493 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct  8 05:42:36 np0005475493 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct  8 05:42:37 np0005475493 python3[72047]: ansible-ansible.builtin.file Invoked with path=/etc/ceph state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:42:37 np0005475493 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct  8 05:42:37 np0005475493 python3[72073]: ansible-ansible.builtin.file Invoked with path=/home/ceph-admin/specs owner=ceph-admin group=ceph-admin mode=0755 state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:42:38 np0005475493 python3[72151]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  8 05:42:38 np0005475493 python3[72224]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759916557.8214695-33524-277441041685286/source dest=/home/ceph-admin/specs/ceph_spec.yaml owner=ceph-admin group=ceph-admin mode=0644 _original_basename=ceph_spec.yml follow=False checksum=a2c84611a4e46cfce32a90c112eae0345cab6abb backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:42:39 np0005475493 python3[72326]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  8 05:42:39 np0005475493 python3[72399]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759916559.0681355-33542-19457543483014/source dest=/home/ceph-admin/assimilate_ceph.conf owner=ceph-admin group=ceph-admin mode=0644 _original_basename=initial_ceph.conf follow=False checksum=41828f7c2442fdf376911255e33c12863fc3b1b3 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:42:40 np0005475493 python3[72449]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  8 05:42:40 np0005475493 python3[72477]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa.pub follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  8 05:42:40 np0005475493 python3[72505]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  8 05:42:41 np0005475493 python3[72533]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm bootstrap --skip-firewalld --ssh-private-key /home/ceph-admin/.ssh/id_rsa --ssh-public-key /home/ceph-admin/.ssh/id_rsa.pub --ssh-user ceph-admin --allow-fqdn-hostname --output-keyring /etc/ceph/ceph.client.admin.keyring --output-config /etc/ceph/ceph.conf --fsid 787292cc-8154-50c4-9e00-e9be3e817149 --config /home/ceph-admin/assimilate_ceph.conf \--skip-monitoring-stack --skip-dashboard --mon-ip 192.168.122.100#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
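The single-line cephadm bootstrap invocation above, reformatted one flag per line for readability. The FSID, mon IP, SSH user and key paths are the values from this log, emitted here via a helper so the command can be inspected without running it (bootstrap itself needs root, network access and a container runtime):

```shell
#!/bin/sh
# Print the bootstrap command from the log in a readable multi-line form.
cephadm_bootstrap_cmd() {
    cat <<'EOF'
/usr/sbin/cephadm bootstrap \
    --skip-firewalld \
    --ssh-private-key /home/ceph-admin/.ssh/id_rsa \
    --ssh-public-key /home/ceph-admin/.ssh/id_rsa.pub \
    --ssh-user ceph-admin \
    --allow-fqdn-hostname \
    --output-keyring /etc/ceph/ceph.client.admin.keyring \
    --output-config /etc/ceph/ceph.conf \
    --fsid 787292cc-8154-50c4-9e00-e9be3e817149 \
    --config /home/ceph-admin/assimilate_ceph.conf \
    --skip-monitoring-stack \
    --skip-dashboard \
    --mon-ip 192.168.122.100
EOF
}

cephadm_bootstrap_cmd
```

The --skip-monitoring-stack and --skip-dashboard flags keep this a minimal test cluster, consistent with the podman pull of quay.io/ceph/ceph:v19 that follows.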
Oct  8 05:42:41 np0005475493 systemd[1]: Created slice User Slice of UID 42477.
Oct  8 05:42:41 np0005475493 systemd[1]: Starting User Runtime Directory /run/user/42477...
Oct  8 05:42:41 np0005475493 systemd-logind[798]: New session 20 of user ceph-admin.
Oct  8 05:42:41 np0005475493 systemd[1]: Finished User Runtime Directory /run/user/42477.
Oct  8 05:42:41 np0005475493 systemd[1]: Starting User Manager for UID 42477...
Oct  8 05:42:41 np0005475493 systemd[72541]: Queued start job for default target Main User Target.
Oct  8 05:42:41 np0005475493 systemd[72541]: Created slice User Application Slice.
Oct  8 05:42:41 np0005475493 systemd[72541]: Started Mark boot as successful after the user session has run 2 minutes.
Oct  8 05:42:41 np0005475493 systemd[72541]: Started Daily Cleanup of User's Temporary Directories.
Oct  8 05:42:41 np0005475493 systemd[72541]: Reached target Paths.
Oct  8 05:42:41 np0005475493 systemd[72541]: Reached target Timers.
Oct  8 05:42:41 np0005475493 systemd[72541]: Starting D-Bus User Message Bus Socket...
Oct  8 05:42:41 np0005475493 systemd[72541]: Starting Create User's Volatile Files and Directories...
Oct  8 05:42:41 np0005475493 systemd[72541]: Listening on D-Bus User Message Bus Socket.
Oct  8 05:42:41 np0005475493 systemd[72541]: Reached target Sockets.
Oct  8 05:42:41 np0005475493 systemd[72541]: Finished Create User's Volatile Files and Directories.
Oct  8 05:42:41 np0005475493 systemd[72541]: Reached target Basic System.
Oct  8 05:42:41 np0005475493 systemd[72541]: Reached target Main User Target.
Oct  8 05:42:41 np0005475493 systemd[72541]: Startup finished in 112ms.
Oct  8 05:42:41 np0005475493 systemd[1]: Started User Manager for UID 42477.
Oct  8 05:42:41 np0005475493 systemd[1]: Started Session 20 of User ceph-admin.
Oct  8 05:42:41 np0005475493 systemd[1]: session-20.scope: Deactivated successfully.
Oct  8 05:42:41 np0005475493 systemd-logind[798]: Session 20 logged out. Waiting for processes to exit.
Oct  8 05:42:41 np0005475493 systemd-logind[798]: Removed session 20.
Oct  8 05:42:41 np0005475493 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct  8 05:42:41 np0005475493 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct  8 05:42:44 np0005475493 systemd[1]: var-lib-containers-storage-overlay-compat381809274-lower\x2dmapped.mount: Deactivated successfully.
Oct  8 05:42:51 np0005475493 systemd[1]: Stopping User Manager for UID 42477...
Oct  8 05:42:51 np0005475493 systemd[72541]: Activating special unit Exit the Session...
Oct  8 05:42:51 np0005475493 systemd[72541]: Stopped target Main User Target.
Oct  8 05:42:51 np0005475493 systemd[72541]: Stopped target Basic System.
Oct  8 05:42:51 np0005475493 systemd[72541]: Stopped target Paths.
Oct  8 05:42:51 np0005475493 systemd[72541]: Stopped target Sockets.
Oct  8 05:42:51 np0005475493 systemd[72541]: Stopped target Timers.
Oct  8 05:42:51 np0005475493 systemd[72541]: Stopped Mark boot as successful after the user session has run 2 minutes.
Oct  8 05:42:51 np0005475493 systemd[72541]: Stopped Daily Cleanup of User's Temporary Directories.
Oct  8 05:42:51 np0005475493 systemd[72541]: Closed D-Bus User Message Bus Socket.
Oct  8 05:42:51 np0005475493 systemd[72541]: Stopped Create User's Volatile Files and Directories.
Oct  8 05:42:51 np0005475493 systemd[72541]: Removed slice User Application Slice.
Oct  8 05:42:51 np0005475493 systemd[72541]: Reached target Shutdown.
Oct  8 05:42:51 np0005475493 systemd[72541]: Finished Exit the Session.
Oct  8 05:42:51 np0005475493 systemd[72541]: Reached target Exit the Session.
Oct  8 05:42:51 np0005475493 systemd[1]: user@42477.service: Deactivated successfully.
Oct  8 05:42:51 np0005475493 systemd[1]: Stopped User Manager for UID 42477.
Oct  8 05:42:51 np0005475493 systemd[1]: Stopping User Runtime Directory /run/user/42477...
Oct  8 05:42:51 np0005475493 systemd[1]: run-user-42477.mount: Deactivated successfully.
Oct  8 05:42:51 np0005475493 systemd[1]: user-runtime-dir@42477.service: Deactivated successfully.
Oct  8 05:42:51 np0005475493 systemd[1]: Stopped User Runtime Directory /run/user/42477.
Oct  8 05:42:51 np0005475493 systemd[1]: Removed slice User Slice of UID 42477.
Oct  8 05:42:58 np0005475493 podman[72633]: 2025-10-08 09:42:58.126710287 +0000 UTC m=+16.272952782 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  8 05:42:58 np0005475493 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct  8 05:42:58 np0005475493 podman[72693]: 2025-10-08 09:42:58.186869667 +0000 UTC m=+0.039256076 container create b6f9d463dca56ed09ccd9916502cdbaea759f831681458895555f22c4faab38e (image=quay.io/ceph/ceph:v19, name=gallant_galois, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  8 05:42:58 np0005475493 systemd[1]: var-lib-containers-storage-overlay-volatile\x2dcheck2342328274-merged.mount: Deactivated successfully.
Oct  8 05:42:58 np0005475493 systemd[1]: Created slice Virtual Machine and Container Slice.
Oct  8 05:42:58 np0005475493 systemd[1]: Started libpod-conmon-b6f9d463dca56ed09ccd9916502cdbaea759f831681458895555f22c4faab38e.scope.
Oct  8 05:42:58 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:42:58 np0005475493 podman[72693]: 2025-10-08 09:42:58.166287486 +0000 UTC m=+0.018673945 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  8 05:42:58 np0005475493 podman[72693]: 2025-10-08 09:42:58.270608393 +0000 UTC m=+0.122994822 container init b6f9d463dca56ed09ccd9916502cdbaea759f831681458895555f22c4faab38e (image=quay.io/ceph/ceph:v19, name=gallant_galois, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  8 05:42:58 np0005475493 podman[72693]: 2025-10-08 09:42:58.276875208 +0000 UTC m=+0.129261627 container start b6f9d463dca56ed09ccd9916502cdbaea759f831681458895555f22c4faab38e (image=quay.io/ceph/ceph:v19, name=gallant_galois, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct  8 05:42:58 np0005475493 podman[72693]: 2025-10-08 09:42:58.280265648 +0000 UTC m=+0.132652057 container attach b6f9d463dca56ed09ccd9916502cdbaea759f831681458895555f22c4faab38e (image=quay.io/ceph/ceph:v19, name=gallant_galois, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  8 05:42:58 np0005475493 gallant_galois[72709]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)
Oct  8 05:42:58 np0005475493 systemd[1]: libpod-b6f9d463dca56ed09ccd9916502cdbaea759f831681458895555f22c4faab38e.scope: Deactivated successfully.
Oct  8 05:42:58 np0005475493 podman[72693]: 2025-10-08 09:42:58.375388944 +0000 UTC m=+0.227775353 container died b6f9d463dca56ed09ccd9916502cdbaea759f831681458895555f22c4faab38e (image=quay.io/ceph/ceph:v19, name=gallant_galois, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  8 05:42:58 np0005475493 podman[72693]: 2025-10-08 09:42:58.415657698 +0000 UTC m=+0.268044107 container remove b6f9d463dca56ed09ccd9916502cdbaea759f831681458895555f22c4faab38e (image=quay.io/ceph/ceph:v19, name=gallant_galois, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  8 05:42:58 np0005475493 systemd[1]: libpod-conmon-b6f9d463dca56ed09ccd9916502cdbaea759f831681458895555f22c4faab38e.scope: Deactivated successfully.
Oct  8 05:42:58 np0005475493 podman[72726]: 2025-10-08 09:42:58.475611735 +0000 UTC m=+0.039678739 container create 7051260230ed919b857aca3de8de175e8600bbf0fb92ea027da729e8410e27e6 (image=quay.io/ceph/ceph:v19, name=vigorous_keldysh, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  8 05:42:58 np0005475493 systemd[1]: Started libpod-conmon-7051260230ed919b857aca3de8de175e8600bbf0fb92ea027da729e8410e27e6.scope.
Oct  8 05:42:58 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:42:58 np0005475493 podman[72726]: 2025-10-08 09:42:58.542858957 +0000 UTC m=+0.106925951 container init 7051260230ed919b857aca3de8de175e8600bbf0fb92ea027da729e8410e27e6 (image=quay.io/ceph/ceph:v19, name=vigorous_keldysh, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  8 05:42:58 np0005475493 podman[72726]: 2025-10-08 09:42:58.55122489 +0000 UTC m=+0.115291924 container start 7051260230ed919b857aca3de8de175e8600bbf0fb92ea027da729e8410e27e6 (image=quay.io/ceph/ceph:v19, name=vigorous_keldysh, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  8 05:42:58 np0005475493 vigorous_keldysh[72743]: 167 167
Oct  8 05:42:58 np0005475493 podman[72726]: 2025-10-08 09:42:58.45785467 +0000 UTC m=+0.021921694 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  8 05:42:58 np0005475493 systemd[1]: libpod-7051260230ed919b857aca3de8de175e8600bbf0fb92ea027da729e8410e27e6.scope: Deactivated successfully.
Oct  8 05:42:58 np0005475493 podman[72726]: 2025-10-08 09:42:58.555058924 +0000 UTC m=+0.119125968 container attach 7051260230ed919b857aca3de8de175e8600bbf0fb92ea027da729e8410e27e6 (image=quay.io/ceph/ceph:v19, name=vigorous_keldysh, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  8 05:42:58 np0005475493 podman[72726]: 2025-10-08 09:42:58.55572118 +0000 UTC m=+0.119788214 container died 7051260230ed919b857aca3de8de175e8600bbf0fb92ea027da729e8410e27e6 (image=quay.io/ceph/ceph:v19, name=vigorous_keldysh, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True)
Oct  8 05:42:58 np0005475493 podman[72726]: 2025-10-08 09:42:58.592937087 +0000 UTC m=+0.157004081 container remove 7051260230ed919b857aca3de8de175e8600bbf0fb92ea027da729e8410e27e6 (image=quay.io/ceph/ceph:v19, name=vigorous_keldysh, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  8 05:42:58 np0005475493 systemd[1]: libpod-conmon-7051260230ed919b857aca3de8de175e8600bbf0fb92ea027da729e8410e27e6.scope: Deactivated successfully.
Oct  8 05:42:58 np0005475493 podman[72760]: 2025-10-08 09:42:58.643717634 +0000 UTC m=+0.032716869 container create f353d6b884c81c8747d9f9cd66a2f91571f579a91d214314d95d76343130c75e (image=quay.io/ceph/ceph:v19, name=hopeful_morse, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid)
Oct  8 05:42:58 np0005475493 systemd[1]: Started libpod-conmon-f353d6b884c81c8747d9f9cd66a2f91571f579a91d214314d95d76343130c75e.scope.
Oct  8 05:42:58 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:42:58 np0005475493 podman[72760]: 2025-10-08 09:42:58.698278084 +0000 UTC m=+0.087277349 container init f353d6b884c81c8747d9f9cd66a2f91571f579a91d214314d95d76343130c75e (image=quay.io/ceph/ceph:v19, name=hopeful_morse, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  8 05:42:58 np0005475493 podman[72760]: 2025-10-08 09:42:58.704091665 +0000 UTC m=+0.093090890 container start f353d6b884c81c8747d9f9cd66a2f91571f579a91d214314d95d76343130c75e (image=quay.io/ceph/ceph:v19, name=hopeful_morse, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  8 05:42:58 np0005475493 podman[72760]: 2025-10-08 09:42:58.70689035 +0000 UTC m=+0.095889585 container attach f353d6b884c81c8747d9f9cd66a2f91571f579a91d214314d95d76343130c75e (image=quay.io/ceph/ceph:v19, name=hopeful_morse, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Oct  8 05:42:58 np0005475493 hopeful_morse[72776]: AQAiMuZoKm3wKhAArvS1ox2lkrw7anYpGWXX/g==
Oct  8 05:42:58 np0005475493 systemd[1]: libpod-f353d6b884c81c8747d9f9cd66a2f91571f579a91d214314d95d76343130c75e.scope: Deactivated successfully.
Oct  8 05:42:58 np0005475493 podman[72760]: 2025-10-08 09:42:58.722958251 +0000 UTC m=+0.111957486 container died f353d6b884c81c8747d9f9cd66a2f91571f579a91d214314d95d76343130c75e (image=quay.io/ceph/ceph:v19, name=hopeful_morse, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Oct  8 05:42:58 np0005475493 podman[72760]: 2025-10-08 09:42:58.629939852 +0000 UTC m=+0.018939107 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  8 05:42:58 np0005475493 podman[72760]: 2025-10-08 09:42:58.756638177 +0000 UTC m=+0.145637412 container remove f353d6b884c81c8747d9f9cd66a2f91571f579a91d214314d95d76343130c75e (image=quay.io/ceph/ceph:v19, name=hopeful_morse, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  8 05:42:58 np0005475493 systemd[1]: libpod-conmon-f353d6b884c81c8747d9f9cd66a2f91571f579a91d214314d95d76343130c75e.scope: Deactivated successfully.
Oct  8 05:42:58 np0005475493 podman[72794]: 2025-10-08 09:42:58.818492061 +0000 UTC m=+0.039467718 container create aab2ed647a99cb2734b0655bcb5aafe0f91af0ebf8083d277d02d93aa3b66ff8 (image=quay.io/ceph/ceph:v19, name=ecstatic_nightingale, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  8 05:42:58 np0005475493 systemd[1]: Started libpod-conmon-aab2ed647a99cb2734b0655bcb5aafe0f91af0ebf8083d277d02d93aa3b66ff8.scope.
Oct  8 05:42:58 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:42:58 np0005475493 podman[72794]: 2025-10-08 09:42:58.877432309 +0000 UTC m=+0.098407986 container init aab2ed647a99cb2734b0655bcb5aafe0f91af0ebf8083d277d02d93aa3b66ff8 (image=quay.io/ceph/ceph:v19, name=ecstatic_nightingale, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  8 05:42:58 np0005475493 podman[72794]: 2025-10-08 09:42:58.882018889 +0000 UTC m=+0.102994546 container start aab2ed647a99cb2734b0655bcb5aafe0f91af0ebf8083d277d02d93aa3b66ff8 (image=quay.io/ceph/ceph:v19, name=ecstatic_nightingale, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  8 05:42:58 np0005475493 podman[72794]: 2025-10-08 09:42:58.885300378 +0000 UTC m=+0.106276035 container attach aab2ed647a99cb2734b0655bcb5aafe0f91af0ebf8083d277d02d93aa3b66ff8 (image=quay.io/ceph/ceph:v19, name=ecstatic_nightingale, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct  8 05:42:58 np0005475493 podman[72794]: 2025-10-08 09:42:58.8036806 +0000 UTC m=+0.024656257 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  8 05:42:58 np0005475493 ecstatic_nightingale[72810]: AQAiMuZoCGjGNhAAl17StKeL5XF07Jmf5tnDBw==
Oct  8 05:42:58 np0005475493 systemd[1]: libpod-aab2ed647a99cb2734b0655bcb5aafe0f91af0ebf8083d277d02d93aa3b66ff8.scope: Deactivated successfully.
Oct  8 05:42:58 np0005475493 podman[72794]: 2025-10-08 09:42:58.924990847 +0000 UTC m=+0.145966504 container died aab2ed647a99cb2734b0655bcb5aafe0f91af0ebf8083d277d02d93aa3b66ff8 (image=quay.io/ceph/ceph:v19, name=ecstatic_nightingale, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  8 05:42:58 np0005475493 podman[72794]: 2025-10-08 09:42:58.961414288 +0000 UTC m=+0.182389945 container remove aab2ed647a99cb2734b0655bcb5aafe0f91af0ebf8083d277d02d93aa3b66ff8 (image=quay.io/ceph/ceph:v19, name=ecstatic_nightingale, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct  8 05:42:58 np0005475493 systemd[1]: libpod-conmon-aab2ed647a99cb2734b0655bcb5aafe0f91af0ebf8083d277d02d93aa3b66ff8.scope: Deactivated successfully.
Oct  8 05:42:59 np0005475493 podman[72830]: 2025-10-08 09:42:59.015428943 +0000 UTC m=+0.034924649 container create 4aef8964cc850fe567f544f71dba581b24c6b68cf804cf83383076a87068829a (image=quay.io/ceph/ceph:v19, name=beautiful_hoover, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Oct  8 05:42:59 np0005475493 systemd[1]: Started libpod-conmon-4aef8964cc850fe567f544f71dba581b24c6b68cf804cf83383076a87068829a.scope.
Oct  8 05:42:59 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:42:59 np0005475493 podman[72830]: 2025-10-08 09:42:59.070503517 +0000 UTC m=+0.089999263 container init 4aef8964cc850fe567f544f71dba581b24c6b68cf804cf83383076a87068829a (image=quay.io/ceph/ceph:v19, name=beautiful_hoover, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Oct  8 05:42:59 np0005475493 podman[72830]: 2025-10-08 09:42:59.074759764 +0000 UTC m=+0.094255470 container start 4aef8964cc850fe567f544f71dba581b24c6b68cf804cf83383076a87068829a (image=quay.io/ceph/ceph:v19, name=beautiful_hoover, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct  8 05:42:59 np0005475493 podman[72830]: 2025-10-08 09:42:59.077805861 +0000 UTC m=+0.097301597 container attach 4aef8964cc850fe567f544f71dba581b24c6b68cf804cf83383076a87068829a (image=quay.io/ceph/ceph:v19, name=beautiful_hoover, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct  8 05:42:59 np0005475493 beautiful_hoover[72847]: AQAjMuZoqAN/BRAAiOuukMBorzTKYoEIuS0Nfw==
Oct  8 05:42:59 np0005475493 systemd[1]: libpod-4aef8964cc850fe567f544f71dba581b24c6b68cf804cf83383076a87068829a.scope: Deactivated successfully.
Oct  8 05:42:59 np0005475493 podman[72830]: 2025-10-08 09:42:59.095362656 +0000 UTC m=+0.114858402 container died 4aef8964cc850fe567f544f71dba581b24c6b68cf804cf83383076a87068829a (image=quay.io/ceph/ceph:v19, name=beautiful_hoover, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Oct  8 05:42:59 np0005475493 podman[72830]: 2025-10-08 09:42:58.999491342 +0000 UTC m=+0.018987058 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  8 05:42:59 np0005475493 podman[72830]: 2025-10-08 09:42:59.128229864 +0000 UTC m=+0.147725560 container remove 4aef8964cc850fe567f544f71dba581b24c6b68cf804cf83383076a87068829a (image=quay.io/ceph/ceph:v19, name=beautiful_hoover, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default)
Oct  8 05:42:59 np0005475493 systemd[1]: var-lib-containers-storage-overlay-71b422a3cd9070af5308dfa82e40ea0d30f2b4b22ef920027485dc0405fea92a-merged.mount: Deactivated successfully.
Oct  8 05:42:59 np0005475493 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct  8 05:42:59 np0005475493 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct  8 05:42:59 np0005475493 systemd[1]: libpod-conmon-4aef8964cc850fe567f544f71dba581b24c6b68cf804cf83383076a87068829a.scope: Deactivated successfully.
Oct  8 05:42:59 np0005475493 podman[72866]: 2025-10-08 09:42:59.203017173 +0000 UTC m=+0.046128177 container create 6305830abf507c32f15dfc2310afe7ffb05a28ac41705925ed6d65b5f403ec5e (image=quay.io/ceph/ceph:v19, name=strange_banzai, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  8 05:42:59 np0005475493 systemd[1]: Started libpod-conmon-6305830abf507c32f15dfc2310afe7ffb05a28ac41705925ed6d65b5f403ec5e.scope.
Oct  8 05:42:59 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:42:59 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e056b4a140eab56a5cf3055d1419092da7ed0474d222cce18723625da2099328/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Oct  8 05:42:59 np0005475493 podman[72866]: 2025-10-08 09:42:59.26871585 +0000 UTC m=+0.111826864 container init 6305830abf507c32f15dfc2310afe7ffb05a28ac41705925ed6d65b5f403ec5e (image=quay.io/ceph/ceph:v19, name=strange_banzai, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  8 05:42:59 np0005475493 podman[72866]: 2025-10-08 09:42:59.274500731 +0000 UTC m=+0.117611745 container start 6305830abf507c32f15dfc2310afe7ffb05a28ac41705925ed6d65b5f403ec5e (image=quay.io/ceph/ceph:v19, name=strange_banzai, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Oct  8 05:42:59 np0005475493 podman[72866]: 2025-10-08 09:42:59.18119718 +0000 UTC m=+0.024308204 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  8 05:42:59 np0005475493 podman[72866]: 2025-10-08 09:42:59.280255631 +0000 UTC m=+0.123366645 container attach 6305830abf507c32f15dfc2310afe7ffb05a28ac41705925ed6d65b5f403ec5e (image=quay.io/ceph/ceph:v19, name=strange_banzai, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  8 05:42:59 np0005475493 strange_banzai[72882]: /usr/bin/monmaptool: monmap file /tmp/monmap
Oct  8 05:42:59 np0005475493 strange_banzai[72882]: setting min_mon_release = quincy
Oct  8 05:42:59 np0005475493 strange_banzai[72882]: /usr/bin/monmaptool: set fsid to 787292cc-8154-50c4-9e00-e9be3e817149
Oct  8 05:42:59 np0005475493 strange_banzai[72882]: /usr/bin/monmaptool: writing epoch 0 to /tmp/monmap (1 monitors)
Oct  8 05:42:59 np0005475493 systemd[1]: libpod-6305830abf507c32f15dfc2310afe7ffb05a28ac41705925ed6d65b5f403ec5e.scope: Deactivated successfully.
Oct  8 05:42:59 np0005475493 podman[72891]: 2025-10-08 09:42:59.349780073 +0000 UTC m=+0.025387535 container died 6305830abf507c32f15dfc2310afe7ffb05a28ac41705925ed6d65b5f403ec5e (image=quay.io/ceph/ceph:v19, name=strange_banzai, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct  8 05:42:59 np0005475493 podman[72891]: 2025-10-08 09:42:59.390142958 +0000 UTC m=+0.065750390 container remove 6305830abf507c32f15dfc2310afe7ffb05a28ac41705925ed6d65b5f403ec5e (image=quay.io/ceph/ceph:v19, name=strange_banzai, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  8 05:42:59 np0005475493 systemd[1]: libpod-conmon-6305830abf507c32f15dfc2310afe7ffb05a28ac41705925ed6d65b5f403ec5e.scope: Deactivated successfully.
Oct  8 05:42:59 np0005475493 podman[72906]: 2025-10-08 09:42:59.479794866 +0000 UTC m=+0.056857911 container create 8ec66c23a7dc497ab81a3de2170cefb057b0e54b61ba374d4cc9b578400b6d0a (image=quay.io/ceph/ceph:v19, name=crazy_wright, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  8 05:42:59 np0005475493 systemd[1]: Started libpod-conmon-8ec66c23a7dc497ab81a3de2170cefb057b0e54b61ba374d4cc9b578400b6d0a.scope.
Oct  8 05:42:59 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:42:59 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb4530cd552b1f440029dcc745ae2021e6a35947aef3397a35656b1151804e35/merged/tmp/keyring supports timestamps until 2038 (0x7fffffff)
Oct  8 05:42:59 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb4530cd552b1f440029dcc745ae2021e6a35947aef3397a35656b1151804e35/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Oct  8 05:42:59 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb4530cd552b1f440029dcc745ae2021e6a35947aef3397a35656b1151804e35/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 05:42:59 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb4530cd552b1f440029dcc745ae2021e6a35947aef3397a35656b1151804e35/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Oct  8 05:42:59 np0005475493 podman[72906]: 2025-10-08 09:42:59.536519754 +0000 UTC m=+0.113582789 container init 8ec66c23a7dc497ab81a3de2170cefb057b0e54b61ba374d4cc9b578400b6d0a (image=quay.io/ceph/ceph:v19, name=crazy_wright, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  8 05:42:59 np0005475493 podman[72906]: 2025-10-08 09:42:59.544103412 +0000 UTC m=+0.121166447 container start 8ec66c23a7dc497ab81a3de2170cefb057b0e54b61ba374d4cc9b578400b6d0a (image=quay.io/ceph/ceph:v19, name=crazy_wright, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  8 05:42:59 np0005475493 podman[72906]: 2025-10-08 09:42:59.546994347 +0000 UTC m=+0.124057382 container attach 8ec66c23a7dc497ab81a3de2170cefb057b0e54b61ba374d4cc9b578400b6d0a (image=quay.io/ceph/ceph:v19, name=crazy_wright, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Oct  8 05:42:59 np0005475493 podman[72906]: 2025-10-08 09:42:59.458777421 +0000 UTC m=+0.035840476 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  8 05:42:59 np0005475493 systemd[1]: libpod-8ec66c23a7dc497ab81a3de2170cefb057b0e54b61ba374d4cc9b578400b6d0a.scope: Deactivated successfully.
Oct  8 05:42:59 np0005475493 podman[72906]: 2025-10-08 09:42:59.62008814 +0000 UTC m=+0.197151205 container died 8ec66c23a7dc497ab81a3de2170cefb057b0e54b61ba374d4cc9b578400b6d0a (image=quay.io/ceph/ceph:v19, name=crazy_wright, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  8 05:42:59 np0005475493 podman[72906]: 2025-10-08 09:42:59.661634055 +0000 UTC m=+0.238697130 container remove 8ec66c23a7dc497ab81a3de2170cefb057b0e54b61ba374d4cc9b578400b6d0a (image=quay.io/ceph/ceph:v19, name=crazy_wright, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  8 05:42:59 np0005475493 systemd[1]: libpod-conmon-8ec66c23a7dc497ab81a3de2170cefb057b0e54b61ba374d4cc9b578400b6d0a.scope: Deactivated successfully.
Oct  8 05:42:59 np0005475493 systemd[1]: Reloading.
Oct  8 05:42:59 np0005475493 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  8 05:42:59 np0005475493 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  8 05:42:59 np0005475493 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct  8 05:42:59 np0005475493 systemd[1]: Reloading.
Oct  8 05:43:00 np0005475493 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  8 05:43:00 np0005475493 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  8 05:43:00 np0005475493 systemd[1]: Reached target All Ceph clusters and services.
Oct  8 05:43:00 np0005475493 systemd[1]: Reloading.
Oct  8 05:43:00 np0005475493 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  8 05:43:00 np0005475493 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  8 05:43:00 np0005475493 systemd[1]: Reached target Ceph cluster 787292cc-8154-50c4-9e00-e9be3e817149.
Oct  8 05:43:00 np0005475493 systemd[1]: Reloading.
Oct  8 05:43:00 np0005475493 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  8 05:43:00 np0005475493 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  8 05:43:00 np0005475493 systemd[1]: Reloading.
Oct  8 05:43:00 np0005475493 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  8 05:43:00 np0005475493 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  8 05:43:00 np0005475493 systemd[1]: Created slice Slice /system/ceph-787292cc-8154-50c4-9e00-e9be3e817149.
Oct  8 05:43:00 np0005475493 systemd[1]: Reached target System Time Set.
Oct  8 05:43:00 np0005475493 systemd[1]: Reached target System Time Synchronized.
Oct  8 05:43:00 np0005475493 systemd[1]: Starting Ceph mon.compute-0 for 787292cc-8154-50c4-9e00-e9be3e817149...
Oct  8 05:43:01 np0005475493 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct  8 05:43:01 np0005475493 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct  8 05:43:01 np0005475493 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct  8 05:43:01 np0005475493 podman[73199]: 2025-10-08 09:43:01.209619988 +0000 UTC m=+0.047670930 container create 16f7f2abb5b7341e8d2841d5660a720c26117b197edd740905798f48f745f13a (image=quay.io/ceph/ceph:v19, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct  8 05:43:01 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e9e664ae770a6a6c207293af8864848a9604e8018ff194db54d90f9d426d719/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 05:43:01 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e9e664ae770a6a6c207293af8864848a9604e8018ff194db54d90f9d426d719/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 05:43:01 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e9e664ae770a6a6c207293af8864848a9604e8018ff194db54d90f9d426d719/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  8 05:43:01 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e9e664ae770a6a6c207293af8864848a9604e8018ff194db54d90f9d426d719/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Oct  8 05:43:01 np0005475493 podman[73199]: 2025-10-08 09:43:01.274847921 +0000 UTC m=+0.112898823 container init 16f7f2abb5b7341e8d2841d5660a720c26117b197edd740905798f48f745f13a (image=quay.io/ceph/ceph:v19, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct  8 05:43:01 np0005475493 podman[73199]: 2025-10-08 09:43:01.185170163 +0000 UTC m=+0.023221145 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  8 05:43:01 np0005475493 podman[73199]: 2025-10-08 09:43:01.283691779 +0000 UTC m=+0.121742681 container start 16f7f2abb5b7341e8d2841d5660a720c26117b197edd740905798f48f745f13a (image=quay.io/ceph/ceph:v19, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True)
Oct  8 05:43:01 np0005475493 bash[73199]: 16f7f2abb5b7341e8d2841d5660a720c26117b197edd740905798f48f745f13a
Oct  8 05:43:01 np0005475493 systemd[1]: Started Ceph mon.compute-0 for 787292cc-8154-50c4-9e00-e9be3e817149.
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: set uid:gid to 167:167 (ceph:ceph)
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mon, pid 2
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: pidfile_write: ignore empty --pid-file
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: load: jerasure load: lrc 
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb: RocksDB version: 7.9.2
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb: Git sha 0
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb: Compile date 2025-07-17 03:12:14
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb: DB SUMMARY
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb: DB Session ID:  I5X2GQVJKNE8052F5XL5
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb: CURRENT file:  CURRENT
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb: IDENTITY file:  IDENTITY
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb: MANIFEST file:  MANIFEST-000005 size: 59 Bytes
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 0, files: 
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000004.log size: 807 ; 
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb:                         Options.error_if_exists: 0
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb:                       Options.create_if_missing: 0
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb:                         Options.paranoid_checks: 1
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb:             Options.flush_verify_memtable_count: 1
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb:                                     Options.env: 0x56400a51ec20
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb:                                      Options.fs: PosixFileSystem
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb:                                Options.info_log: 0x56400c1d2d60
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb:                Options.max_file_opening_threads: 16
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb:                              Options.statistics: (nil)
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb:                               Options.use_fsync: 0
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb:                       Options.max_log_file_size: 0
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb:                   Options.log_file_time_to_roll: 0
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb:                       Options.keep_log_file_num: 1000
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb:                    Options.recycle_log_file_num: 0
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb:                         Options.allow_fallocate: 1
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb:                        Options.allow_mmap_reads: 0
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb:                       Options.allow_mmap_writes: 0
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb:                        Options.use_direct_reads: 0
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb:          Options.create_missing_column_families: 0
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb:                              Options.db_log_dir: 
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb:                                 Options.wal_dir: 
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb:                Options.table_cache_numshardbits: 6
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb:                         Options.WAL_ttl_seconds: 0
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb:                       Options.WAL_size_limit_MB: 0
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb:             Options.manifest_preallocation_size: 4194304
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb:                     Options.is_fd_close_on_exec: 1
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb:                   Options.advise_random_on_open: 1
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb:                    Options.db_write_buffer_size: 0
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb:                    Options.write_buffer_manager: 0x56400c1d7900
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb:         Options.access_hint_on_compaction_start: 1
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb:                      Options.use_adaptive_mutex: 0
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb:                            Options.rate_limiter: (nil)
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb:                       Options.wal_recovery_mode: 2
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb:                  Options.enable_thread_tracking: 0
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb:                  Options.enable_pipelined_write: 0
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb:                  Options.unordered_write: 0
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb:             Options.write_thread_max_yield_usec: 100
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb:                               Options.row_cache: None
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb:                              Options.wal_filter: None
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb:             Options.avoid_flush_during_recovery: 0
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb:             Options.allow_ingest_behind: 0
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb:             Options.two_write_queues: 0
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb:             Options.manual_wal_flush: 0
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb:             Options.wal_compression: 0
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb:             Options.atomic_flush: 0
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb:                 Options.persist_stats_to_disk: 0
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb:                 Options.write_dbid_to_manifest: 0
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb:                 Options.log_readahead_size: 0
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb:                 Options.best_efforts_recovery: 0
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb:             Options.allow_data_in_errors: 0
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb:             Options.db_host_id: __hostname__
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb:             Options.enforce_single_del_contracts: true
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb:             Options.max_background_jobs: 2
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb:             Options.max_background_compactions: -1
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb:             Options.max_subcompactions: 1
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb:             Options.delayed_write_rate : 16777216
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb:             Options.max_total_wal_size: 0
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb:                   Options.stats_dump_period_sec: 600
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb:                 Options.stats_persist_period_sec: 600
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb:                          Options.max_open_files: -1
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb:                          Options.bytes_per_sync: 0
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb:                      Options.wal_bytes_per_sync: 0
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb:                   Options.strict_bytes_per_sync: 0
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb:       Options.compaction_readahead_size: 0
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb:                  Options.max_background_flushes: -1
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb: Compression algorithms supported:
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb: #011kZSTD supported: 0
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb: #011kXpressCompression supported: 0
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb: #011kBZip2Compression supported: 0
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb: #011kLZ4Compression supported: 1
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb: #011kZlibCompression supported: 1
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb: #011kLZ4HCCompression supported: 1
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb: #011kSnappyCompression supported: 1
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb: Fast CRC32 supported: Supported on x86
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb: DMutex implementation: pthread_mutex_t
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb:           Options.merge_operator: 
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb:        Options.compaction_filter: None
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb:        Options.compaction_filter_factory: None
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb:  Options.sst_partitioner_factory: None
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56400c1d2500)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x56400c1f7350#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb:        Options.write_buffer_size: 33554432
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb:  Options.max_write_buffer_number: 2
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb:          Options.compression: NoCompression
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb:       Options.prefix_extractor: nullptr
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb:             Options.num_levels: 7
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb:                  Options.compression_opts.level: 32767
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb:               Options.compression_opts.strategy: 0
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb:                  Options.compression_opts.enabled: false
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb:                        Options.arena_block_size: 1048576
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb:                Options.disable_auto_compactions: 0
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb:                   Options.inplace_update_support: 0
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb:                           Options.bloom_locality: 0
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb:                    Options.max_successive_merges: 0
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb:                Options.paranoid_file_checks: 0
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb:                Options.force_consistency_checks: 1
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb:                Options.report_bg_io_stats: 0
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb:                               Options.ttl: 2592000
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb:                       Options.enable_blob_files: false
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb:                           Options.min_blob_size: 0
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb:                          Options.blob_file_size: 268435456
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb:                Options.blob_file_starting_level: 0
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005 succeeded,manifest_file_number is 5, next_file_number is 7, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 5fe81d9b-468a-4413-adf1-4e4bd83383d4
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759916581342080, "job": 1, "event": "recovery_started", "wal_files": [4]}
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #4 mode 2
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759916581343992, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 8, "file_size": 1944, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 1, "largest_seqno": 5, "table_properties": {"data_size": 819, "index_size": 31, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 696, "raw_average_value_size": 139, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759916581, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fe81d9b-468a-4413-adf1-4e4bd83383d4", "db_session_id": "I5X2GQVJKNE8052F5XL5", "orig_file_number": 8, "seqno_to_time_mapping": "N/A"}}
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759916581344104, "job": 1, "event": "recovery_finished"}
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb: [db/version_set.cc:5047] Creating manifest 10
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000004.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x56400c1f8e00
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb: DB pointer 0x56400c302000
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.0 total, 0.0 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      1/0    1.90 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.0      0.00              0.00         1    0.002       0      0       0.0       0.0#012 Sum      1/0    1.90 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.0      0.00              0.00         1    0.002       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.0      0.00              0.00         1    0.002       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      1.0      0.00              0.00         1    0.002       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.0 total, 0.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.17 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.17 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x56400c1f7350#2 capacity: 512.00 MB usage: 1.17 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 3.1e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.11 KB,2.08616e-05%) Misc(2,0.95 KB,0.000181794%)#012#012** File Read Latency Histogram By Level [default] **
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid 787292cc-8154-50c4-9e00-e9be3e817149
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: mon.compute-0@-1(???) e0 preinit fsid 787292cc-8154-50c4-9e00-e9be3e817149
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: mon.compute-0@-1(probing) e0  my rank is now 0 (was -1)
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: mon.compute-0@0(probing) e0 win_standalone_election
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: paxos.0).electionLogic(0) init, first boot, initializing epoch at 1 
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: mon.compute-0@0(electing) e0 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: mon.compute-0@0(leader) e0 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={4=support erasure code pools,5=new-style osdmap encoding,6=support isa/lrc erasure code,7=support shec erasure code}
Oct  8 05:43:01 np0005475493 podman[73219]: 2025-10-08 09:43:01.367336364 +0000 UTC m=+0.045520310 container create fd5addef9437bd5c5454e70e6a2475fb0941706ea8c2f96802ce0e2035453e98 (image=quay.io/ceph/ceph:v19, name=affectionate_thompson, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True)
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: mon.compute-0@0(probing) e1 win_standalone_election
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: paxos.0).electionLogic(2) init, last seen epoch 2
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: log_channel(cluster) log [DBG] : monmap epoch 1
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: log_channel(cluster) log [DBG] : fsid 787292cc-8154-50c4-9e00-e9be3e817149
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: log_channel(cluster) log [DBG] : last_changed 2025-10-08T09:42:59.307631+0000
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: log_channel(cluster) log [DBG] : created 2025-10-08T09:42:59.307631+0000
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: log_channel(cluster) log [DBG] : min_mon_release 19 (squid)
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: log_channel(cluster) log [DBG] : election_strategy: 1
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: log_channel(cluster) log [DBG] : 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: mgrc update_daemon_metadata mon.compute-0 metadata {addrs=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],arch=x86_64,ceph_release=squid,ceph_version=ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable),ceph_version_short=19.2.3,compression_algorithms=none, snappy, zlib, zstd, lz4,container_hostname=compute-0,container_image=quay.io/ceph/ceph:v19,cpu=AMD EPYC-Rome Processor,device_ids=,device_paths=vda=/dev/disk/by-path/pci-0000:00:04.0,devices=vda,distro=centos,distro_description=CentOS Stream 9,distro_version=9,hostname=compute-0,kernel_description=#1 SMP PREEMPT_DYNAMIC Fri Sep 26 01:13:23 UTC 2025,kernel_version=5.14.0-620.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864104,os=Linux}
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: mon.compute-0@0(leader) e1 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={8=support monmap features,9=luminous ondisk layout,10=mimic ondisk layout,11=nautilus ondisk layout,12=octopus ondisk layout,13=pacific ondisk layout,14=quincy ondisk layout,15=reef ondisk layout,16=squid ondisk layout}
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: mon.compute-0@0(leader).mds e1 new map
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: mon.compute-0@0(leader).mds e1 print_map#012e1#012btime 2025-10-08T09:43:01:374245+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: -1#012 #012No filesystems configured
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: log_channel(cluster) log [DBG] : fsmap 
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: mon.compute-0@0(leader).osd e0 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: mon.compute-0@0(leader).osd e0 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: mon.compute-0@0(leader).osd e1 e1: 0 total, 0 up, 0 in
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: mon.compute-0@0(leader).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: mkfs 787292cc-8154-50c4-9e00-e9be3e817149
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: mon.compute-0@0(leader).paxosservice(auth 1..1) refresh upgraded, format 0 -> 3
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Oct  8 05:43:01 np0005475493 systemd[1]: Started libpod-conmon-fd5addef9437bd5c5454e70e6a2475fb0941706ea8c2f96802ce0e2035453e98.scope.
Oct  8 05:43:01 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:43:01 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c487c7a8e36e6c42bf640cc52a3fa0f29dd300a992c105216206a7ad48d04f0/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  8 05:43:01 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c487c7a8e36e6c42bf640cc52a3fa0f29dd300a992c105216206a7ad48d04f0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 05:43:01 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c487c7a8e36e6c42bf640cc52a3fa0f29dd300a992c105216206a7ad48d04f0/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Oct  8 05:43:01 np0005475493 podman[73219]: 2025-10-08 09:43:01.350835799 +0000 UTC m=+0.029019775 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  8 05:43:01 np0005475493 podman[73219]: 2025-10-08 09:43:01.446577162 +0000 UTC m=+0.124761138 container init fd5addef9437bd5c5454e70e6a2475fb0941706ea8c2f96802ce0e2035453e98 (image=quay.io/ceph/ceph:v19, name=affectionate_thompson, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct  8 05:43:01 np0005475493 podman[73219]: 2025-10-08 09:43:01.453506233 +0000 UTC m=+0.131690209 container start fd5addef9437bd5c5454e70e6a2475fb0941706ea8c2f96802ce0e2035453e98 (image=quay.io/ceph/ceph:v19, name=affectionate_thompson, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct  8 05:43:01 np0005475493 podman[73219]: 2025-10-08 09:43:01.457287596 +0000 UTC m=+0.135471572 container attach fd5addef9437bd5c5454e70e6a2475fb0941706ea8c2f96802ce0e2035453e98 (image=quay.io/ceph/ceph:v19, name=affectionate_thompson, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True)
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0)
Oct  8 05:43:01 np0005475493 ceph-mon[73218]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3330539534' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct  8 05:43:01 np0005475493 affectionate_thompson[73274]:  cluster:
Oct  8 05:43:01 np0005475493 affectionate_thompson[73274]:    id:     787292cc-8154-50c4-9e00-e9be3e817149
Oct  8 05:43:01 np0005475493 affectionate_thompson[73274]:    health: HEALTH_OK
Oct  8 05:43:01 np0005475493 affectionate_thompson[73274]: 
Oct  8 05:43:01 np0005475493 affectionate_thompson[73274]:  services:
Oct  8 05:43:01 np0005475493 affectionate_thompson[73274]:    mon: 1 daemons, quorum compute-0 (age 0.251755s)
Oct  8 05:43:01 np0005475493 affectionate_thompson[73274]:    mgr: no daemons active
Oct  8 05:43:01 np0005475493 affectionate_thompson[73274]:    osd: 0 osds: 0 up, 0 in
Oct  8 05:43:01 np0005475493 affectionate_thompson[73274]: 
Oct  8 05:43:01 np0005475493 affectionate_thompson[73274]:  data:
Oct  8 05:43:01 np0005475493 affectionate_thompson[73274]:    pools:   0 pools, 0 pgs
Oct  8 05:43:01 np0005475493 affectionate_thompson[73274]:    objects: 0 objects, 0 B
Oct  8 05:43:01 np0005475493 affectionate_thompson[73274]:    usage:   0 B used, 0 B / 0 B avail
Oct  8 05:43:01 np0005475493 affectionate_thompson[73274]:    pgs:     
Oct  8 05:43:01 np0005475493 affectionate_thompson[73274]: 
Oct  8 05:43:01 np0005475493 systemd[1]: libpod-fd5addef9437bd5c5454e70e6a2475fb0941706ea8c2f96802ce0e2035453e98.scope: Deactivated successfully.
Oct  8 05:43:01 np0005475493 podman[73219]: 2025-10-08 09:43:01.641515187 +0000 UTC m=+0.319699133 container died fd5addef9437bd5c5454e70e6a2475fb0941706ea8c2f96802ce0e2035453e98 (image=quay.io/ceph/ceph:v19, name=affectionate_thompson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  8 05:43:01 np0005475493 podman[73219]: 2025-10-08 09:43:01.675631946 +0000 UTC m=+0.353815892 container remove fd5addef9437bd5c5454e70e6a2475fb0941706ea8c2f96802ce0e2035453e98 (image=quay.io/ceph/ceph:v19, name=affectionate_thompson, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  8 05:43:01 np0005475493 systemd[1]: libpod-conmon-fd5addef9437bd5c5454e70e6a2475fb0941706ea8c2f96802ce0e2035453e98.scope: Deactivated successfully.
Oct  8 05:43:01 np0005475493 podman[73313]: 2025-10-08 09:43:01.731120915 +0000 UTC m=+0.036453652 container create d5a6c53cbdca50f23e26c85bdc3dfeeea93453795db7644c7fcd0ba392691279 (image=quay.io/ceph/ceph:v19, name=romantic_dubinsky, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  8 05:43:01 np0005475493 systemd[1]: Started libpod-conmon-d5a6c53cbdca50f23e26c85bdc3dfeeea93453795db7644c7fcd0ba392691279.scope.
Oct  8 05:43:01 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:43:01 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b546311652d4dd558dc6afee95185321c437efbcfb69cd6287ca933f7e49bf4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 05:43:01 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b546311652d4dd558dc6afee95185321c437efbcfb69cd6287ca933f7e49bf4/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  8 05:43:01 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b546311652d4dd558dc6afee95185321c437efbcfb69cd6287ca933f7e49bf4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 05:43:01 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b546311652d4dd558dc6afee95185321c437efbcfb69cd6287ca933f7e49bf4/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Oct  8 05:43:01 np0005475493 podman[73313]: 2025-10-08 09:43:01.715972831 +0000 UTC m=+0.021305578 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  8 05:43:01 np0005475493 podman[73313]: 2025-10-08 09:43:01.816356754 +0000 UTC m=+0.121689581 container init d5a6c53cbdca50f23e26c85bdc3dfeeea93453795db7644c7fcd0ba392691279 (image=quay.io/ceph/ceph:v19, name=romantic_dubinsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  8 05:43:01 np0005475493 podman[73313]: 2025-10-08 09:43:01.828978475 +0000 UTC m=+0.134311202 container start d5a6c53cbdca50f23e26c85bdc3dfeeea93453795db7644c7fcd0ba392691279 (image=quay.io/ceph/ceph:v19, name=romantic_dubinsky, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  8 05:43:01 np0005475493 podman[73313]: 2025-10-08 09:43:01.833547015 +0000 UTC m=+0.138879742 container attach d5a6c53cbdca50f23e26c85bdc3dfeeea93453795db7644c7fcd0ba392691279 (image=quay.io/ceph/ceph:v19, name=romantic_dubinsky, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Oct  8 05:43:02 np0005475493 ceph-mon[73218]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0)
Oct  8 05:43:02 np0005475493 ceph-mon[73218]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4199698163' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Oct  8 05:43:02 np0005475493 ceph-mon[73218]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4199698163' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Oct  8 05:43:02 np0005475493 romantic_dubinsky[73329]: 
Oct  8 05:43:02 np0005475493 romantic_dubinsky[73329]: [global]
Oct  8 05:43:02 np0005475493 romantic_dubinsky[73329]: #011fsid = 787292cc-8154-50c4-9e00-e9be3e817149
Oct  8 05:43:02 np0005475493 romantic_dubinsky[73329]: #011mon_host = [v2:192.168.122.100:3300,v1:192.168.122.100:6789]
Oct  8 05:43:02 np0005475493 systemd[1]: libpod-d5a6c53cbdca50f23e26c85bdc3dfeeea93453795db7644c7fcd0ba392691279.scope: Deactivated successfully.
Oct  8 05:43:02 np0005475493 podman[73313]: 2025-10-08 09:43:02.024849277 +0000 UTC m=+0.330182004 container died d5a6c53cbdca50f23e26c85bdc3dfeeea93453795db7644c7fcd0ba392691279 (image=quay.io/ceph/ceph:v19, name=romantic_dubinsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default)
Oct  8 05:43:02 np0005475493 podman[73313]: 2025-10-08 09:43:02.056770708 +0000 UTC m=+0.362103435 container remove d5a6c53cbdca50f23e26c85bdc3dfeeea93453795db7644c7fcd0ba392691279 (image=quay.io/ceph/ceph:v19, name=romantic_dubinsky, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  8 05:43:02 np0005475493 systemd[1]: libpod-conmon-d5a6c53cbdca50f23e26c85bdc3dfeeea93453795db7644c7fcd0ba392691279.scope: Deactivated successfully.
Oct  8 05:43:02 np0005475493 podman[73366]: 2025-10-08 09:43:02.143540451 +0000 UTC m=+0.055282637 container create 4136bc89e78b419a7d8d9a0963325ba86e565e5c4314e42d33f12faadefc3ddb (image=quay.io/ceph/ceph:v19, name=frosty_panini, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct  8 05:43:02 np0005475493 systemd[1]: Started libpod-conmon-4136bc89e78b419a7d8d9a0963325ba86e565e5c4314e42d33f12faadefc3ddb.scope.
Oct  8 05:43:02 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:43:02 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/783aec4a062ac05ee08cf550688b80fa5cf5f0bd0eb18541208a410d7be029d5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 05:43:02 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/783aec4a062ac05ee08cf550688b80fa5cf5f0bd0eb18541208a410d7be029d5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 05:43:02 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/783aec4a062ac05ee08cf550688b80fa5cf5f0bd0eb18541208a410d7be029d5/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  8 05:43:02 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/783aec4a062ac05ee08cf550688b80fa5cf5f0bd0eb18541208a410d7be029d5/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Oct  8 05:43:02 np0005475493 podman[73366]: 2025-10-08 09:43:02.211416488 +0000 UTC m=+0.123158674 container init 4136bc89e78b419a7d8d9a0963325ba86e565e5c4314e42d33f12faadefc3ddb (image=quay.io/ceph/ceph:v19, name=frosty_panini, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  8 05:43:02 np0005475493 podman[73366]: 2025-10-08 09:43:02.219115275 +0000 UTC m=+0.130857451 container start 4136bc89e78b419a7d8d9a0963325ba86e565e5c4314e42d33f12faadefc3ddb (image=quay.io/ceph/ceph:v19, name=frosty_panini, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct  8 05:43:02 np0005475493 podman[73366]: 2025-10-08 09:43:02.124632515 +0000 UTC m=+0.036374701 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  8 05:43:02 np0005475493 podman[73366]: 2025-10-08 09:43:02.222292293 +0000 UTC m=+0.134034499 container attach 4136bc89e78b419a7d8d9a0963325ba86e565e5c4314e42d33f12faadefc3ddb (image=quay.io/ceph/ceph:v19, name=frosty_panini, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  8 05:43:02 np0005475493 ceph-mon[73218]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  8 05:43:02 np0005475493 ceph-mon[73218]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3472823470' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  8 05:43:02 np0005475493 ceph-mon[73218]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Oct  8 05:43:02 np0005475493 ceph-mon[73218]: from='client.? 192.168.122.100:0/4199698163' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Oct  8 05:43:02 np0005475493 ceph-mon[73218]: from='client.? 192.168.122.100:0/4199698163' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Oct  8 05:43:02 np0005475493 systemd[1]: libpod-4136bc89e78b419a7d8d9a0963325ba86e565e5c4314e42d33f12faadefc3ddb.scope: Deactivated successfully.
Oct  8 05:43:02 np0005475493 podman[73408]: 2025-10-08 09:43:02.445369745 +0000 UTC m=+0.022943132 container died 4136bc89e78b419a7d8d9a0963325ba86e565e5c4314e42d33f12faadefc3ddb (image=quay.io/ceph/ceph:v19, name=frosty_panini, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  8 05:43:02 np0005475493 systemd[1]: var-lib-containers-storage-overlay-783aec4a062ac05ee08cf550688b80fa5cf5f0bd0eb18541208a410d7be029d5-merged.mount: Deactivated successfully.
Oct  8 05:43:02 np0005475493 podman[73408]: 2025-10-08 09:43:02.480553075 +0000 UTC m=+0.058126452 container remove 4136bc89e78b419a7d8d9a0963325ba86e565e5c4314e42d33f12faadefc3ddb (image=quay.io/ceph/ceph:v19, name=frosty_panini, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Oct  8 05:43:02 np0005475493 systemd[1]: libpod-conmon-4136bc89e78b419a7d8d9a0963325ba86e565e5c4314e42d33f12faadefc3ddb.scope: Deactivated successfully.
Oct  8 05:43:02 np0005475493 systemd[1]: Stopping Ceph mon.compute-0 for 787292cc-8154-50c4-9e00-e9be3e817149...
Oct  8 05:43:02 np0005475493 ceph-mon[73218]: received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Oct  8 05:43:02 np0005475493 ceph-mon[73218]: mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Oct  8 05:43:02 np0005475493 ceph-mon[73218]: mon.compute-0@0(leader) e1 shutdown
Oct  8 05:43:02 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mon-compute-0[73214]: 2025-10-08T09:43:02.669+0000 7fd7b6c5e640 -1 received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Oct  8 05:43:02 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mon-compute-0[73214]: 2025-10-08T09:43:02.669+0000 7fd7b6c5e640 -1 mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Oct  8 05:43:02 np0005475493 ceph-mon[73218]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Oct  8 05:43:02 np0005475493 ceph-mon[73218]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Oct  8 05:43:02 np0005475493 podman[73452]: 2025-10-08 09:43:02.812579594 +0000 UTC m=+0.179241877 container died 16f7f2abb5b7341e8d2841d5660a720c26117b197edd740905798f48f745f13a (image=quay.io/ceph/ceph:v19, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-mon-compute-0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct  8 05:43:02 np0005475493 systemd[1]: var-lib-containers-storage-overlay-6e9e664ae770a6a6c207293af8864848a9604e8018ff194db54d90f9d426d719-merged.mount: Deactivated successfully.
Oct  8 05:43:02 np0005475493 podman[73452]: 2025-10-08 09:43:02.846860326 +0000 UTC m=+0.213522569 container remove 16f7f2abb5b7341e8d2841d5660a720c26117b197edd740905798f48f745f13a (image=quay.io/ceph/ceph:v19, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-mon-compute-0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  8 05:43:02 np0005475493 bash[73452]: ceph-787292cc-8154-50c4-9e00-e9be3e817149-mon-compute-0
Oct  8 05:43:02 np0005475493 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct  8 05:43:02 np0005475493 systemd[1]: ceph-787292cc-8154-50c4-9e00-e9be3e817149@mon.compute-0.service: Deactivated successfully.
Oct  8 05:43:02 np0005475493 systemd[1]: Stopped Ceph mon.compute-0 for 787292cc-8154-50c4-9e00-e9be3e817149.
Oct  8 05:43:02 np0005475493 systemd[1]: Starting Ceph mon.compute-0 for 787292cc-8154-50c4-9e00-e9be3e817149...
Oct  8 05:43:03 np0005475493 podman[73555]: 2025-10-08 09:43:03.171574821 +0000 UTC m=+0.033361244 container create 01c666addd8584f02eb7d29040d97e45d8cb7f834fd134029c1f7cca550aeb9d (image=quay.io/ceph/ceph:v19, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  8 05:43:03 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54a1a03dbd7c508b561d3245ac013ec6a83c9d541d6eb822d9e74ba9d9f78e4c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 05:43:03 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54a1a03dbd7c508b561d3245ac013ec6a83c9d541d6eb822d9e74ba9d9f78e4c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 05:43:03 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54a1a03dbd7c508b561d3245ac013ec6a83c9d541d6eb822d9e74ba9d9f78e4c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  8 05:43:03 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54a1a03dbd7c508b561d3245ac013ec6a83c9d541d6eb822d9e74ba9d9f78e4c/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Oct  8 05:43:03 np0005475493 podman[73555]: 2025-10-08 09:43:03.232432966 +0000 UTC m=+0.094219469 container init 01c666addd8584f02eb7d29040d97e45d8cb7f834fd134029c1f7cca550aeb9d (image=quay.io/ceph/ceph:v19, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-mon-compute-0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True)
Oct  8 05:43:03 np0005475493 podman[73555]: 2025-10-08 09:43:03.237600881 +0000 UTC m=+0.099387344 container start 01c666addd8584f02eb7d29040d97e45d8cb7f834fd134029c1f7cca550aeb9d (image=quay.io/ceph/ceph:v19, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct  8 05:43:03 np0005475493 bash[73555]: 01c666addd8584f02eb7d29040d97e45d8cb7f834fd134029c1f7cca550aeb9d
Oct  8 05:43:03 np0005475493 podman[73555]: 2025-10-08 09:43:03.157924661 +0000 UTC m=+0.019711114 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  8 05:43:03 np0005475493 systemd[1]: Started Ceph mon.compute-0 for 787292cc-8154-50c4-9e00-e9be3e817149.
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: set uid:gid to 167:167 (ceph:ceph)
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mon, pid 2
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: pidfile_write: ignore empty --pid-file
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: load: jerasure load: lrc 
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb: RocksDB version: 7.9.2
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb: Git sha 0
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb: Compile date 2025-07-17 03:12:14
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb: DB SUMMARY
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb: DB Session ID:  KN4HYS7VUCE6V85JIQOU
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb: CURRENT file:  CURRENT
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb: IDENTITY file:  IDENTITY
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb: MANIFEST file:  MANIFEST-000010 size: 179 Bytes
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 1, files: 000008.sst 
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000009.log size: 59859 ; 
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb:                         Options.error_if_exists: 0
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb:                       Options.create_if_missing: 0
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb:                         Options.paranoid_checks: 1
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb:             Options.flush_verify_memtable_count: 1
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb:                                     Options.env: 0x55f7a0c9cc20
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb:                                      Options.fs: PosixFileSystem
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb:                                Options.info_log: 0x55f7a1cbfac0
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb:                Options.max_file_opening_threads: 16
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb:                              Options.statistics: (nil)
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb:                               Options.use_fsync: 0
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb:                       Options.max_log_file_size: 0
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb:                   Options.log_file_time_to_roll: 0
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb:                       Options.keep_log_file_num: 1000
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb:                    Options.recycle_log_file_num: 0
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb:                         Options.allow_fallocate: 1
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb:                        Options.allow_mmap_reads: 0
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb:                       Options.allow_mmap_writes: 0
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb:                        Options.use_direct_reads: 0
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb:          Options.create_missing_column_families: 0
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb:                              Options.db_log_dir: 
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb:                                 Options.wal_dir: 
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb:                Options.table_cache_numshardbits: 6
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb:                         Options.WAL_ttl_seconds: 0
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb:                       Options.WAL_size_limit_MB: 0
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb:             Options.manifest_preallocation_size: 4194304
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb:                     Options.is_fd_close_on_exec: 1
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb:                   Options.advise_random_on_open: 1
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb:                    Options.db_write_buffer_size: 0
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb:                    Options.write_buffer_manager: 0x55f7a1cc3900
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb:         Options.access_hint_on_compaction_start: 1
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb:                      Options.use_adaptive_mutex: 0
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb:                            Options.rate_limiter: (nil)
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb:                       Options.wal_recovery_mode: 2
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb:                  Options.enable_thread_tracking: 0
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb:                  Options.enable_pipelined_write: 0
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb:                  Options.unordered_write: 0
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb:             Options.write_thread_max_yield_usec: 100
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb:                               Options.row_cache: None
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb:                              Options.wal_filter: None
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb:             Options.avoid_flush_during_recovery: 0
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb:             Options.allow_ingest_behind: 0
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb:             Options.two_write_queues: 0
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb:             Options.manual_wal_flush: 0
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb:             Options.wal_compression: 0
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb:             Options.atomic_flush: 0
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb:                 Options.persist_stats_to_disk: 0
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb:                 Options.write_dbid_to_manifest: 0
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb:                 Options.log_readahead_size: 0
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb:                 Options.best_efforts_recovery: 0
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb:             Options.allow_data_in_errors: 0
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb:             Options.db_host_id: __hostname__
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb:             Options.enforce_single_del_contracts: true
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb:             Options.max_background_jobs: 2
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb:             Options.max_background_compactions: -1
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb:             Options.max_subcompactions: 1
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb:             Options.delayed_write_rate : 16777216
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb:             Options.max_total_wal_size: 0
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb:                   Options.stats_dump_period_sec: 600
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb:                 Options.stats_persist_period_sec: 600
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb:                          Options.max_open_files: -1
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb:                          Options.bytes_per_sync: 0
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb:                      Options.wal_bytes_per_sync: 0
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb:                   Options.strict_bytes_per_sync: 0
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb:       Options.compaction_readahead_size: 0
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb:                  Options.max_background_flushes: -1
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb: Compression algorithms supported:
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb: #011kZSTD supported: 0
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb: #011kXpressCompression supported: 0
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb: #011kBZip2Compression supported: 0
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb: #011kLZ4Compression supported: 1
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb: #011kZlibCompression supported: 1
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb: #011kLZ4HCCompression supported: 1
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb: #011kSnappyCompression supported: 1
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb: Fast CRC32 supported: Supported on x86
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb: DMutex implementation: pthread_mutex_t
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb:           Options.merge_operator: 
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb:        Options.compaction_filter: None
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb:        Options.compaction_filter_factory: None
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb:  Options.sst_partitioner_factory: None
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f7a1cbeaa0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55f7a1ce3350#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb:        Options.write_buffer_size: 33554432
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb:  Options.max_write_buffer_number: 2
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb:          Options.compression: NoCompression
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb:       Options.prefix_extractor: nullptr
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb:             Options.num_levels: 7
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb:                  Options.compression_opts.level: 32767
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb:               Options.compression_opts.strategy: 0
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb:                  Options.compression_opts.enabled: false
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb:                        Options.arena_block_size: 1048576
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb:                Options.disable_auto_compactions: 0
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb:                   Options.inplace_update_support: 0
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb:                           Options.bloom_locality: 0
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb:                    Options.max_successive_merges: 0
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb:                Options.paranoid_file_checks: 0
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb:                Options.force_consistency_checks: 1
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb:                Options.report_bg_io_stats: 0
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb:                               Options.ttl: 2592000
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb:                       Options.enable_blob_files: false
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb:                           Options.min_blob_size: 0
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb:                          Options.blob_file_size: 268435456
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb:                Options.blob_file_starting_level: 0
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010 succeeded,manifest_file_number is 10, next_file_number is 12, last_sequence is 5, log_number is 5,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 5
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 5fe81d9b-468a-4413-adf1-4e4bd83383d4
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759916583279999, "job": 1, "event": "recovery_started", "wal_files": [9]}
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #9 mode 2
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759916583283778, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 13, "file_size": 59627, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8, "largest_seqno": 138, "table_properties": {"data_size": 58095, "index_size": 174, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 325, "raw_key_size": 3209, "raw_average_key_size": 30, "raw_value_size": 55578, "raw_average_value_size": 529, "num_data_blocks": 9, "num_entries": 105, "num_filter_entries": 105, "num_deletions": 3, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759916583, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fe81d9b-468a-4413-adf1-4e4bd83383d4", "db_session_id": "KN4HYS7VUCE6V85JIQOU", "orig_file_number": 13, "seqno_to_time_mapping": "N/A"}}
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759916583283899, "job": 1, "event": "recovery_finished"}
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb: [db/version_set.cc:5047] Creating manifest 15
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55f7a1ce4e00
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb: DB pointer 0x55f7a1dee000
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.0 total, 0.0 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0   60.13 KB   0.5      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     16.5      0.00              0.00         1    0.003       0      0       0.0       0.0#012 Sum      2/0   60.13 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     16.5      0.00              0.00         1    0.003       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     16.5      0.00              0.00         1    0.003       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     16.5      0.00              0.00         1    0.003       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.0 total, 0.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 4.28 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 4.28 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55f7a1ce3350#2 capacity: 512.00 MB usage: 0.84 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 3.1e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(2,0.48 KB,9.23872e-05%) IndexBlock(2,0.36 KB,6.85453e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid 787292cc-8154-50c4-9e00-e9be3e817149
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: mon.compute-0@-1(???) e1 preinit fsid 787292cc-8154-50c4-9e00-e9be3e817149
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: mon.compute-0@-1(???).mds e1 new map
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: mon.compute-0@-1(???).mds e1 print_map#012e1#012btime 2025-10-08T09:43:01:374245+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: -1#012 #012No filesystems configured
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: mon.compute-0@-1(???).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: mon.compute-0@-1(???).paxosservice(auth 1..2) refresh upgraded, format 0 -> 3
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: mon.compute-0@-1(probing) e1  my rank is now 0 (was -1)
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: mon.compute-0@0(probing) e1 win_standalone_election
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: paxos.0).electionLogic(3) init, last seen epoch 3, mid-election, bumping
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : monmap epoch 1
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : fsid 787292cc-8154-50c4-9e00-e9be3e817149
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : last_changed 2025-10-08T09:42:59.307631+0000
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : created 2025-10-08T09:42:59.307631+0000
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : min_mon_release 19 (squid)
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : election_strategy: 1
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : fsmap 
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Oct  8 05:43:03 np0005475493 podman[73573]: 2025-10-08 09:43:03.309401013 +0000 UTC m=+0.038346228 container create 44b77240aa3f3814333391940cc24d386762abd2275d00a9cb15b3413c0269db (image=quay.io/ceph/ceph:v19, name=amazing_thompson, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  8 05:43:03 np0005475493 systemd[1]: Started libpod-conmon-44b77240aa3f3814333391940cc24d386762abd2275d00a9cb15b3413c0269db.scope.
Oct  8 05:43:03 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Oct  8 05:43:03 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6fff82d219aa41ca7f4c00ad79e92a6fa22eae5c0057b8e5ed1c6b9cd39045b8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 05:43:03 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6fff82d219aa41ca7f4c00ad79e92a6fa22eae5c0057b8e5ed1c6b9cd39045b8/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  8 05:43:03 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6fff82d219aa41ca7f4c00ad79e92a6fa22eae5c0057b8e5ed1c6b9cd39045b8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 05:43:03 np0005475493 podman[73573]: 2025-10-08 09:43:03.373898501 +0000 UTC m=+0.102843726 container init 44b77240aa3f3814333391940cc24d386762abd2275d00a9cb15b3413c0269db (image=quay.io/ceph/ceph:v19, name=amazing_thompson, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  8 05:43:03 np0005475493 podman[73573]: 2025-10-08 09:43:03.380523948 +0000 UTC m=+0.109469153 container start 44b77240aa3f3814333391940cc24d386762abd2275d00a9cb15b3413c0269db (image=quay.io/ceph/ceph:v19, name=amazing_thompson, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  8 05:43:03 np0005475493 podman[73573]: 2025-10-08 09:43:03.383312113 +0000 UTC m=+0.112257358 container attach 44b77240aa3f3814333391940cc24d386762abd2275d00a9cb15b3413c0269db (image=quay.io/ceph/ceph:v19, name=amazing_thompson, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  8 05:43:03 np0005475493 podman[73573]: 2025-10-08 09:43:03.294581023 +0000 UTC m=+0.023526258 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  8 05:43:03 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=public_network}] v 0)
Oct  8 05:43:03 np0005475493 systemd[1]: libpod-44b77240aa3f3814333391940cc24d386762abd2275d00a9cb15b3413c0269db.scope: Deactivated successfully.
Oct  8 05:43:03 np0005475493 podman[73573]: 2025-10-08 09:43:03.597115743 +0000 UTC m=+0.326060958 container died 44b77240aa3f3814333391940cc24d386762abd2275d00a9cb15b3413c0269db (image=quay.io/ceph/ceph:v19, name=amazing_thompson, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  8 05:43:03 np0005475493 systemd[1]: var-lib-containers-storage-overlay-6fff82d219aa41ca7f4c00ad79e92a6fa22eae5c0057b8e5ed1c6b9cd39045b8-merged.mount: Deactivated successfully.
Oct  8 05:43:03 np0005475493 podman[73573]: 2025-10-08 09:43:03.635022036 +0000 UTC m=+0.363967241 container remove 44b77240aa3f3814333391940cc24d386762abd2275d00a9cb15b3413c0269db (image=quay.io/ceph/ceph:v19, name=amazing_thompson, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  8 05:43:03 np0005475493 systemd[1]: libpod-conmon-44b77240aa3f3814333391940cc24d386762abd2275d00a9cb15b3413c0269db.scope: Deactivated successfully.
Oct  8 05:43:03 np0005475493 podman[73667]: 2025-10-08 09:43:03.686721561 +0000 UTC m=+0.035659114 container create a78d8d4e05cb7df7eb56b015eaa5d6cb76e4125de5ec332a6f8481e2fc9c4e5d (image=quay.io/ceph/ceph:v19, name=modest_dewdney, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct  8 05:43:03 np0005475493 systemd[1]: Started libpod-conmon-a78d8d4e05cb7df7eb56b015eaa5d6cb76e4125de5ec332a6f8481e2fc9c4e5d.scope.
Oct  8 05:43:03 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:43:03 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c6d9f63418d672a526030b4c257ce498672a7d67c28f674ca85c89e21cd6de9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 05:43:03 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c6d9f63418d672a526030b4c257ce498672a7d67c28f674ca85c89e21cd6de9/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  8 05:43:03 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c6d9f63418d672a526030b4c257ce498672a7d67c28f674ca85c89e21cd6de9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 05:43:03 np0005475493 podman[73667]: 2025-10-08 09:43:03.762975862 +0000 UTC m=+0.111913395 container init a78d8d4e05cb7df7eb56b015eaa5d6cb76e4125de5ec332a6f8481e2fc9c4e5d (image=quay.io/ceph/ceph:v19, name=modest_dewdney, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Oct  8 05:43:03 np0005475493 podman[73667]: 2025-10-08 09:43:03.670521189 +0000 UTC m=+0.019458722 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  8 05:43:03 np0005475493 podman[73667]: 2025-10-08 09:43:03.774994707 +0000 UTC m=+0.123932220 container start a78d8d4e05cb7df7eb56b015eaa5d6cb76e4125de5ec332a6f8481e2fc9c4e5d (image=quay.io/ceph/ceph:v19, name=modest_dewdney, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  8 05:43:03 np0005475493 podman[73667]: 2025-10-08 09:43:03.777977854 +0000 UTC m=+0.126915387 container attach a78d8d4e05cb7df7eb56b015eaa5d6cb76e4125de5ec332a6f8481e2fc9c4e5d (image=quay.io/ceph/ceph:v19, name=modest_dewdney, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  8 05:43:04 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=cluster_network}] v 0)
Oct  8 05:43:04 np0005475493 systemd[1]: libpod-a78d8d4e05cb7df7eb56b015eaa5d6cb76e4125de5ec332a6f8481e2fc9c4e5d.scope: Deactivated successfully.
Oct  8 05:43:04 np0005475493 podman[73667]: 2025-10-08 09:43:04.030645946 +0000 UTC m=+0.379583549 container died a78d8d4e05cb7df7eb56b015eaa5d6cb76e4125de5ec332a6f8481e2fc9c4e5d (image=quay.io/ceph/ceph:v19, name=modest_dewdney, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Oct  8 05:43:04 np0005475493 podman[73667]: 2025-10-08 09:43:04.105853067 +0000 UTC m=+0.454790580 container remove a78d8d4e05cb7df7eb56b015eaa5d6cb76e4125de5ec332a6f8481e2fc9c4e5d (image=quay.io/ceph/ceph:v19, name=modest_dewdney, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  8 05:43:04 np0005475493 systemd[1]: libpod-conmon-a78d8d4e05cb7df7eb56b015eaa5d6cb76e4125de5ec332a6f8481e2fc9c4e5d.scope: Deactivated successfully.
Oct  8 05:43:04 np0005475493 systemd[1]: Reloading.
Oct  8 05:43:04 np0005475493 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  8 05:43:04 np0005475493 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  8 05:43:04 np0005475493 systemd[1]: Reloading.
Oct  8 05:43:04 np0005475493 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  8 05:43:04 np0005475493 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  8 05:43:04 np0005475493 systemd[1]: Starting Ceph mgr.compute-0.ixicfj for 787292cc-8154-50c4-9e00-e9be3e817149...
Oct  8 05:43:04 np0005475493 podman[73849]: 2025-10-08 09:43:04.890973972 +0000 UTC m=+0.019031919 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  8 05:43:06 np0005475493 podman[73849]: 2025-10-08 09:43:06.236266592 +0000 UTC m=+1.364324519 container create 507427ceb1795d8f880fc9a43897ce65f2b5ce89744d49298a3fa86e2b68fb56 (image=quay.io/ceph/ceph:v19, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  8 05:43:06 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/831c2acc3a08033f74422f0cfbd7d714be37000ffed30c8cbe85077263a3a8d4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 05:43:06 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/831c2acc3a08033f74422f0cfbd7d714be37000ffed30c8cbe85077263a3a8d4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 05:43:06 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/831c2acc3a08033f74422f0cfbd7d714be37000ffed30c8cbe85077263a3a8d4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  8 05:43:06 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/831c2acc3a08033f74422f0cfbd7d714be37000ffed30c8cbe85077263a3a8d4/merged/var/lib/ceph/mgr/ceph-compute-0.ixicfj supports timestamps until 2038 (0x7fffffff)
Oct  8 05:43:06 np0005475493 podman[73849]: 2025-10-08 09:43:06.318346904 +0000 UTC m=+1.446404911 container init 507427ceb1795d8f880fc9a43897ce65f2b5ce89744d49298a3fa86e2b68fb56 (image=quay.io/ceph/ceph:v19, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  8 05:43:06 np0005475493 podman[73849]: 2025-10-08 09:43:06.327728177 +0000 UTC m=+1.455786134 container start 507427ceb1795d8f880fc9a43897ce65f2b5ce89744d49298a3fa86e2b68fb56 (image=quay.io/ceph/ceph:v19, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  8 05:43:06 np0005475493 bash[73849]: 507427ceb1795d8f880fc9a43897ce65f2b5ce89744d49298a3fa86e2b68fb56
Oct  8 05:43:06 np0005475493 systemd[1]: Started Ceph mgr.compute-0.ixicfj for 787292cc-8154-50c4-9e00-e9be3e817149.
Oct  8 05:43:06 np0005475493 ceph-mgr[73869]: set uid:gid to 167:167 (ceph:ceph)
Oct  8 05:43:06 np0005475493 ceph-mgr[73869]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Oct  8 05:43:06 np0005475493 ceph-mgr[73869]: pidfile_write: ignore empty --pid-file
Oct  8 05:43:06 np0005475493 ceph-mgr[73869]: mgr[py] Loading python module 'alerts'
Oct  8 05:43:06 np0005475493 podman[73870]: 2025-10-08 09:43:06.435135111 +0000 UTC m=+0.053432531 container create e3524f7fb4cc3af542fb619af6a384f10e1c57e8068c7840c9439293bc2687e9 (image=quay.io/ceph/ceph:v19, name=jovial_lamarr, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2)
Oct  8 05:43:06 np0005475493 systemd[1]: Started libpod-conmon-e3524f7fb4cc3af542fb619af6a384f10e1c57e8068c7840c9439293bc2687e9.scope.
Oct  8 05:43:06 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:43:06 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd8e9a6f6d9dc583358b00b66e93d8d17b0c1dff470c869fcff473458b242339/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 05:43:06 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd8e9a6f6d9dc583358b00b66e93d8d17b0c1dff470c869fcff473458b242339/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  8 05:43:06 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd8e9a6f6d9dc583358b00b66e93d8d17b0c1dff470c869fcff473458b242339/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 05:43:06 np0005475493 ceph-mgr[73869]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct  8 05:43:06 np0005475493 ceph-mgr[73869]: mgr[py] Loading python module 'balancer'
Oct  8 05:43:06 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:43:06.507+0000 7f971cc6d140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct  8 05:43:06 np0005475493 podman[73870]: 2025-10-08 09:43:06.418240432 +0000 UTC m=+0.036537862 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  8 05:43:06 np0005475493 podman[73870]: 2025-10-08 09:43:06.515154645 +0000 UTC m=+0.133452105 container init e3524f7fb4cc3af542fb619af6a384f10e1c57e8068c7840c9439293bc2687e9 (image=quay.io/ceph/ceph:v19, name=jovial_lamarr, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid)
Oct  8 05:43:06 np0005475493 podman[73870]: 2025-10-08 09:43:06.521156777 +0000 UTC m=+0.139454207 container start e3524f7fb4cc3af542fb619af6a384f10e1c57e8068c7840c9439293bc2687e9 (image=quay.io/ceph/ceph:v19, name=jovial_lamarr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  8 05:43:06 np0005475493 podman[73870]: 2025-10-08 09:43:06.52487243 +0000 UTC m=+0.143169880 container attach e3524f7fb4cc3af542fb619af6a384f10e1c57e8068c7840c9439293bc2687e9 (image=quay.io/ceph/ceph:v19, name=jovial_lamarr, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct  8 05:43:06 np0005475493 ceph-mgr[73869]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct  8 05:43:06 np0005475493 ceph-mgr[73869]: mgr[py] Loading python module 'cephadm'
Oct  8 05:43:06 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:43:06.584+0000 7f971cc6d140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct  8 05:43:06 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Oct  8 05:43:06 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4167060928' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct  8 05:43:06 np0005475493 jovial_lamarr[73907]: 
Oct  8 05:43:06 np0005475493 jovial_lamarr[73907]: {
Oct  8 05:43:06 np0005475493 jovial_lamarr[73907]:    "fsid": "787292cc-8154-50c4-9e00-e9be3e817149",
Oct  8 05:43:06 np0005475493 jovial_lamarr[73907]:    "health": {
Oct  8 05:43:06 np0005475493 jovial_lamarr[73907]:        "status": "HEALTH_OK",
Oct  8 05:43:06 np0005475493 jovial_lamarr[73907]:        "checks": {},
Oct  8 05:43:06 np0005475493 jovial_lamarr[73907]:        "mutes": []
Oct  8 05:43:06 np0005475493 jovial_lamarr[73907]:    },
Oct  8 05:43:06 np0005475493 jovial_lamarr[73907]:    "election_epoch": 5,
Oct  8 05:43:06 np0005475493 jovial_lamarr[73907]:    "quorum": [
Oct  8 05:43:06 np0005475493 jovial_lamarr[73907]:        0
Oct  8 05:43:06 np0005475493 jovial_lamarr[73907]:    ],
Oct  8 05:43:06 np0005475493 jovial_lamarr[73907]:    "quorum_names": [
Oct  8 05:43:06 np0005475493 jovial_lamarr[73907]:        "compute-0"
Oct  8 05:43:06 np0005475493 jovial_lamarr[73907]:    ],
Oct  8 05:43:06 np0005475493 jovial_lamarr[73907]:    "quorum_age": 3,
Oct  8 05:43:06 np0005475493 jovial_lamarr[73907]:    "monmap": {
Oct  8 05:43:06 np0005475493 jovial_lamarr[73907]:        "epoch": 1,
Oct  8 05:43:06 np0005475493 jovial_lamarr[73907]:        "min_mon_release_name": "squid",
Oct  8 05:43:06 np0005475493 jovial_lamarr[73907]:        "num_mons": 1
Oct  8 05:43:06 np0005475493 jovial_lamarr[73907]:    },
Oct  8 05:43:06 np0005475493 jovial_lamarr[73907]:    "osdmap": {
Oct  8 05:43:06 np0005475493 jovial_lamarr[73907]:        "epoch": 1,
Oct  8 05:43:06 np0005475493 jovial_lamarr[73907]:        "num_osds": 0,
Oct  8 05:43:06 np0005475493 jovial_lamarr[73907]:        "num_up_osds": 0,
Oct  8 05:43:06 np0005475493 jovial_lamarr[73907]:        "osd_up_since": 0,
Oct  8 05:43:06 np0005475493 jovial_lamarr[73907]:        "num_in_osds": 0,
Oct  8 05:43:06 np0005475493 jovial_lamarr[73907]:        "osd_in_since": 0,
Oct  8 05:43:06 np0005475493 jovial_lamarr[73907]:        "num_remapped_pgs": 0
Oct  8 05:43:06 np0005475493 jovial_lamarr[73907]:    },
Oct  8 05:43:06 np0005475493 jovial_lamarr[73907]:    "pgmap": {
Oct  8 05:43:06 np0005475493 jovial_lamarr[73907]:        "pgs_by_state": [],
Oct  8 05:43:06 np0005475493 jovial_lamarr[73907]:        "num_pgs": 0,
Oct  8 05:43:06 np0005475493 jovial_lamarr[73907]:        "num_pools": 0,
Oct  8 05:43:06 np0005475493 jovial_lamarr[73907]:        "num_objects": 0,
Oct  8 05:43:06 np0005475493 jovial_lamarr[73907]:        "data_bytes": 0,
Oct  8 05:43:06 np0005475493 jovial_lamarr[73907]:        "bytes_used": 0,
Oct  8 05:43:06 np0005475493 jovial_lamarr[73907]:        "bytes_avail": 0,
Oct  8 05:43:06 np0005475493 jovial_lamarr[73907]:        "bytes_total": 0
Oct  8 05:43:06 np0005475493 jovial_lamarr[73907]:    },
Oct  8 05:43:06 np0005475493 jovial_lamarr[73907]:    "fsmap": {
Oct  8 05:43:06 np0005475493 jovial_lamarr[73907]:        "epoch": 1,
Oct  8 05:43:06 np0005475493 jovial_lamarr[73907]:        "btime": "2025-10-08T09:43:01.374245+0000",
Oct  8 05:43:06 np0005475493 jovial_lamarr[73907]:        "by_rank": [],
Oct  8 05:43:06 np0005475493 jovial_lamarr[73907]:        "up:standby": 0
Oct  8 05:43:06 np0005475493 jovial_lamarr[73907]:    },
Oct  8 05:43:06 np0005475493 jovial_lamarr[73907]:    "mgrmap": {
Oct  8 05:43:06 np0005475493 jovial_lamarr[73907]:        "available": false,
Oct  8 05:43:06 np0005475493 jovial_lamarr[73907]:        "num_standbys": 0,
Oct  8 05:43:06 np0005475493 jovial_lamarr[73907]:        "modules": [
Oct  8 05:43:06 np0005475493 jovial_lamarr[73907]:            "iostat",
Oct  8 05:43:06 np0005475493 jovial_lamarr[73907]:            "nfs",
Oct  8 05:43:06 np0005475493 jovial_lamarr[73907]:            "restful"
Oct  8 05:43:06 np0005475493 jovial_lamarr[73907]:        ],
Oct  8 05:43:06 np0005475493 jovial_lamarr[73907]:        "services": {}
Oct  8 05:43:06 np0005475493 jovial_lamarr[73907]:    },
Oct  8 05:43:06 np0005475493 jovial_lamarr[73907]:    "servicemap": {
Oct  8 05:43:06 np0005475493 jovial_lamarr[73907]:        "epoch": 1,
Oct  8 05:43:06 np0005475493 jovial_lamarr[73907]:        "modified": "2025-10-08T09:43:01.375926+0000",
Oct  8 05:43:06 np0005475493 jovial_lamarr[73907]:        "services": {}
Oct  8 05:43:06 np0005475493 jovial_lamarr[73907]:    },
Oct  8 05:43:06 np0005475493 jovial_lamarr[73907]:    "progress_events": {}
Oct  8 05:43:06 np0005475493 jovial_lamarr[73907]: }
Oct  8 05:43:06 np0005475493 systemd[1]: libpod-e3524f7fb4cc3af542fb619af6a384f10e1c57e8068c7840c9439293bc2687e9.scope: Deactivated successfully.
Oct  8 05:43:06 np0005475493 podman[73870]: 2025-10-08 09:43:06.765492886 +0000 UTC m=+0.383790306 container died e3524f7fb4cc3af542fb619af6a384f10e1c57e8068c7840c9439293bc2687e9 (image=quay.io/ceph/ceph:v19, name=jovial_lamarr, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Oct  8 05:43:06 np0005475493 systemd[1]: var-lib-containers-storage-overlay-fd8e9a6f6d9dc583358b00b66e93d8d17b0c1dff470c869fcff473458b242339-merged.mount: Deactivated successfully.
Oct  8 05:43:06 np0005475493 podman[73870]: 2025-10-08 09:43:06.808216202 +0000 UTC m=+0.426513622 container remove e3524f7fb4cc3af542fb619af6a384f10e1c57e8068c7840c9439293bc2687e9 (image=quay.io/ceph/ceph:v19, name=jovial_lamarr, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  8 05:43:06 np0005475493 systemd[1]: libpod-conmon-e3524f7fb4cc3af542fb619af6a384f10e1c57e8068c7840c9439293bc2687e9.scope: Deactivated successfully.
Oct  8 05:43:07 np0005475493 ceph-mgr[73869]: mgr[py] Loading python module 'crash'
Oct  8 05:43:07 np0005475493 ceph-mgr[73869]: mgr[py] Module crash has missing NOTIFY_TYPES member
Oct  8 05:43:07 np0005475493 ceph-mgr[73869]: mgr[py] Loading python module 'dashboard'
Oct  8 05:43:07 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:43:07.354+0000 7f971cc6d140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Oct  8 05:43:07 np0005475493 ceph-mgr[73869]: mgr[py] Loading python module 'devicehealth'
Oct  8 05:43:07 np0005475493 ceph-mgr[73869]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct  8 05:43:07 np0005475493 ceph-mgr[73869]: mgr[py] Loading python module 'diskprediction_local'
Oct  8 05:43:07 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:43:07.969+0000 7f971cc6d140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct  8 05:43:08 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Oct  8 05:43:08 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Oct  8 05:43:08 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]:  from numpy import show_config as show_numpy_config
Oct  8 05:43:08 np0005475493 ceph-mgr[73869]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct  8 05:43:08 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:43:08.130+0000 7f971cc6d140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct  8 05:43:08 np0005475493 ceph-mgr[73869]: mgr[py] Loading python module 'influx'
Oct  8 05:43:08 np0005475493 ceph-mgr[73869]: mgr[py] Module influx has missing NOTIFY_TYPES member
Oct  8 05:43:08 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:43:08.200+0000 7f971cc6d140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Oct  8 05:43:08 np0005475493 ceph-mgr[73869]: mgr[py] Loading python module 'insights'
Oct  8 05:43:08 np0005475493 ceph-mgr[73869]: mgr[py] Loading python module 'iostat'
Oct  8 05:43:08 np0005475493 ceph-mgr[73869]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct  8 05:43:08 np0005475493 ceph-mgr[73869]: mgr[py] Loading python module 'k8sevents'
Oct  8 05:43:08 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:43:08.333+0000 7f971cc6d140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct  8 05:43:08 np0005475493 ceph-mgr[73869]: mgr[py] Loading python module 'localpool'
Oct  8 05:43:08 np0005475493 ceph-mgr[73869]: mgr[py] Loading python module 'mds_autoscaler'
Oct  8 05:43:08 np0005475493 podman[73956]: 2025-10-08 09:43:08.874086449 +0000 UTC m=+0.044813455 container create 7a0a8ac6baf2b1e3293028e4f010021006c38a024ab45361a4567ebc3fed1c6a (image=quay.io/ceph/ceph:v19, name=relaxed_mclean, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default)
Oct  8 05:43:08 np0005475493 systemd[1]: Started libpod-conmon-7a0a8ac6baf2b1e3293028e4f010021006c38a024ab45361a4567ebc3fed1c6a.scope.
Oct  8 05:43:08 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:43:08 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/edd866ae709c382715a1ce5c8068304f8e130c9fbaf8230a9b5b7a72eacebfbc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 05:43:08 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/edd866ae709c382715a1ce5c8068304f8e130c9fbaf8230a9b5b7a72eacebfbc/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  8 05:43:08 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/edd866ae709c382715a1ce5c8068304f8e130c9fbaf8230a9b5b7a72eacebfbc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 05:43:08 np0005475493 podman[73956]: 2025-10-08 09:43:08.853556438 +0000 UTC m=+0.024283444 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  8 05:43:08 np0005475493 podman[73956]: 2025-10-08 09:43:08.962537256 +0000 UTC m=+0.133264262 container init 7a0a8ac6baf2b1e3293028e4f010021006c38a024ab45361a4567ebc3fed1c6a (image=quay.io/ceph/ceph:v19, name=relaxed_mclean, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  8 05:43:08 np0005475493 podman[73956]: 2025-10-08 09:43:08.968305617 +0000 UTC m=+0.139032613 container start 7a0a8ac6baf2b1e3293028e4f010021006c38a024ab45361a4567ebc3fed1c6a (image=quay.io/ceph/ceph:v19, name=relaxed_mclean, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  8 05:43:08 np0005475493 podman[73956]: 2025-10-08 09:43:08.972398134 +0000 UTC m=+0.143125130 container attach 7a0a8ac6baf2b1e3293028e4f010021006c38a024ab45361a4567ebc3fed1c6a (image=quay.io/ceph/ceph:v19, name=relaxed_mclean, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  8 05:43:09 np0005475493 ceph-mgr[73869]: mgr[py] Loading python module 'mirroring'
Oct  8 05:43:09 np0005475493 ceph-mgr[73869]: mgr[py] Loading python module 'nfs'
Oct  8 05:43:09 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Oct  8 05:43:09 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2977280056' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct  8 05:43:09 np0005475493 relaxed_mclean[73972]: 
Oct  8 05:43:09 np0005475493 relaxed_mclean[73972]: {
Oct  8 05:43:09 np0005475493 relaxed_mclean[73972]:    "fsid": "787292cc-8154-50c4-9e00-e9be3e817149",
Oct  8 05:43:09 np0005475493 relaxed_mclean[73972]:    "health": {
Oct  8 05:43:09 np0005475493 relaxed_mclean[73972]:        "status": "HEALTH_OK",
Oct  8 05:43:09 np0005475493 relaxed_mclean[73972]:        "checks": {},
Oct  8 05:43:09 np0005475493 relaxed_mclean[73972]:        "mutes": []
Oct  8 05:43:09 np0005475493 relaxed_mclean[73972]:    },
Oct  8 05:43:09 np0005475493 relaxed_mclean[73972]:    "election_epoch": 5,
Oct  8 05:43:09 np0005475493 relaxed_mclean[73972]:    "quorum": [
Oct  8 05:43:09 np0005475493 relaxed_mclean[73972]:        0
Oct  8 05:43:09 np0005475493 relaxed_mclean[73972]:    ],
Oct  8 05:43:09 np0005475493 relaxed_mclean[73972]:    "quorum_names": [
Oct  8 05:43:09 np0005475493 relaxed_mclean[73972]:        "compute-0"
Oct  8 05:43:09 np0005475493 relaxed_mclean[73972]:    ],
Oct  8 05:43:09 np0005475493 relaxed_mclean[73972]:    "quorum_age": 5,
Oct  8 05:43:09 np0005475493 relaxed_mclean[73972]:    "monmap": {
Oct  8 05:43:09 np0005475493 relaxed_mclean[73972]:        "epoch": 1,
Oct  8 05:43:09 np0005475493 relaxed_mclean[73972]:        "min_mon_release_name": "squid",
Oct  8 05:43:09 np0005475493 relaxed_mclean[73972]:        "num_mons": 1
Oct  8 05:43:09 np0005475493 relaxed_mclean[73972]:    },
Oct  8 05:43:09 np0005475493 relaxed_mclean[73972]:    "osdmap": {
Oct  8 05:43:09 np0005475493 relaxed_mclean[73972]:        "epoch": 1,
Oct  8 05:43:09 np0005475493 relaxed_mclean[73972]:        "num_osds": 0,
Oct  8 05:43:09 np0005475493 relaxed_mclean[73972]:        "num_up_osds": 0,
Oct  8 05:43:09 np0005475493 relaxed_mclean[73972]:        "osd_up_since": 0,
Oct  8 05:43:09 np0005475493 relaxed_mclean[73972]:        "num_in_osds": 0,
Oct  8 05:43:09 np0005475493 relaxed_mclean[73972]:        "osd_in_since": 0,
Oct  8 05:43:09 np0005475493 relaxed_mclean[73972]:        "num_remapped_pgs": 0
Oct  8 05:43:09 np0005475493 relaxed_mclean[73972]:    },
Oct  8 05:43:09 np0005475493 relaxed_mclean[73972]:    "pgmap": {
Oct  8 05:43:09 np0005475493 relaxed_mclean[73972]:        "pgs_by_state": [],
Oct  8 05:43:09 np0005475493 relaxed_mclean[73972]:        "num_pgs": 0,
Oct  8 05:43:09 np0005475493 relaxed_mclean[73972]:        "num_pools": 0,
Oct  8 05:43:09 np0005475493 relaxed_mclean[73972]:        "num_objects": 0,
Oct  8 05:43:09 np0005475493 relaxed_mclean[73972]:        "data_bytes": 0,
Oct  8 05:43:09 np0005475493 relaxed_mclean[73972]:        "bytes_used": 0,
Oct  8 05:43:09 np0005475493 relaxed_mclean[73972]:        "bytes_avail": 0,
Oct  8 05:43:09 np0005475493 relaxed_mclean[73972]:        "bytes_total": 0
Oct  8 05:43:09 np0005475493 relaxed_mclean[73972]:    },
Oct  8 05:43:09 np0005475493 relaxed_mclean[73972]:    "fsmap": {
Oct  8 05:43:09 np0005475493 relaxed_mclean[73972]:        "epoch": 1,
Oct  8 05:43:09 np0005475493 relaxed_mclean[73972]:        "btime": "2025-10-08T09:43:01:374245+0000",
Oct  8 05:43:09 np0005475493 relaxed_mclean[73972]:        "by_rank": [],
Oct  8 05:43:09 np0005475493 relaxed_mclean[73972]:        "up:standby": 0
Oct  8 05:43:09 np0005475493 relaxed_mclean[73972]:    },
Oct  8 05:43:09 np0005475493 relaxed_mclean[73972]:    "mgrmap": {
Oct  8 05:43:09 np0005475493 relaxed_mclean[73972]:        "available": false,
Oct  8 05:43:09 np0005475493 relaxed_mclean[73972]:        "num_standbys": 0,
Oct  8 05:43:09 np0005475493 relaxed_mclean[73972]:        "modules": [
Oct  8 05:43:09 np0005475493 relaxed_mclean[73972]:            "iostat",
Oct  8 05:43:09 np0005475493 relaxed_mclean[73972]:            "nfs",
Oct  8 05:43:09 np0005475493 relaxed_mclean[73972]:            "restful"
Oct  8 05:43:09 np0005475493 relaxed_mclean[73972]:        ],
Oct  8 05:43:09 np0005475493 relaxed_mclean[73972]:        "services": {}
Oct  8 05:43:09 np0005475493 relaxed_mclean[73972]:    },
Oct  8 05:43:09 np0005475493 relaxed_mclean[73972]:    "servicemap": {
Oct  8 05:43:09 np0005475493 relaxed_mclean[73972]:        "epoch": 1,
Oct  8 05:43:09 np0005475493 relaxed_mclean[73972]:        "modified": "2025-10-08T09:43:01.375926+0000",
Oct  8 05:43:09 np0005475493 relaxed_mclean[73972]:        "services": {}
Oct  8 05:43:09 np0005475493 relaxed_mclean[73972]:    },
Oct  8 05:43:09 np0005475493 relaxed_mclean[73972]:    "progress_events": {}
Oct  8 05:43:09 np0005475493 relaxed_mclean[73972]: }
Oct  8 05:43:09 np0005475493 systemd[1]: libpod-7a0a8ac6baf2b1e3293028e4f010021006c38a024ab45361a4567ebc3fed1c6a.scope: Deactivated successfully.
Oct  8 05:43:09 np0005475493 podman[73956]: 2025-10-08 09:43:09.156185279 +0000 UTC m=+0.326912275 container died 7a0a8ac6baf2b1e3293028e4f010021006c38a024ab45361a4567ebc3fed1c6a (image=quay.io/ceph/ceph:v19, name=relaxed_mclean, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Oct  8 05:43:09 np0005475493 systemd[1]: var-lib-containers-storage-overlay-edd866ae709c382715a1ce5c8068304f8e130c9fbaf8230a9b5b7a72eacebfbc-merged.mount: Deactivated successfully.
Oct  8 05:43:09 np0005475493 podman[73956]: 2025-10-08 09:43:09.222872236 +0000 UTC m=+0.393599232 container remove 7a0a8ac6baf2b1e3293028e4f010021006c38a024ab45361a4567ebc3fed1c6a (image=quay.io/ceph/ceph:v19, name=relaxed_mclean, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS)
Oct  8 05:43:09 np0005475493 systemd[1]: libpod-conmon-7a0a8ac6baf2b1e3293028e4f010021006c38a024ab45361a4567ebc3fed1c6a.scope: Deactivated successfully.
Oct  8 05:43:09 np0005475493 ceph-mgr[73869]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct  8 05:43:09 np0005475493 ceph-mgr[73869]: mgr[py] Loading python module 'orchestrator'
Oct  8 05:43:09 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:43:09.329+0000 7f971cc6d140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct  8 05:43:09 np0005475493 ceph-mgr[73869]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct  8 05:43:09 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:43:09.559+0000 7f971cc6d140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct  8 05:43:09 np0005475493 ceph-mgr[73869]: mgr[py] Loading python module 'osd_perf_query'
Oct  8 05:43:09 np0005475493 ceph-mgr[73869]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct  8 05:43:09 np0005475493 ceph-mgr[73869]: mgr[py] Loading python module 'osd_support'
Oct  8 05:43:09 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:43:09.637+0000 7f971cc6d140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct  8 05:43:09 np0005475493 ceph-mgr[73869]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct  8 05:43:09 np0005475493 ceph-mgr[73869]: mgr[py] Loading python module 'pg_autoscaler'
Oct  8 05:43:09 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:43:09.704+0000 7f971cc6d140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct  8 05:43:09 np0005475493 ceph-mgr[73869]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct  8 05:43:09 np0005475493 ceph-mgr[73869]: mgr[py] Loading python module 'progress'
Oct  8 05:43:09 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:43:09.784+0000 7f971cc6d140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct  8 05:43:09 np0005475493 ceph-mgr[73869]: mgr[py] Module progress has missing NOTIFY_TYPES member
Oct  8 05:43:09 np0005475493 ceph-mgr[73869]: mgr[py] Loading python module 'prometheus'
Oct  8 05:43:09 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:43:09.854+0000 7f971cc6d140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Oct  8 05:43:10 np0005475493 ceph-mgr[73869]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct  8 05:43:10 np0005475493 ceph-mgr[73869]: mgr[py] Loading python module 'rbd_support'
Oct  8 05:43:10 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:43:10.210+0000 7f971cc6d140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct  8 05:43:10 np0005475493 ceph-mgr[73869]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct  8 05:43:10 np0005475493 ceph-mgr[73869]: mgr[py] Loading python module 'restful'
Oct  8 05:43:10 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:43:10.305+0000 7f971cc6d140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct  8 05:43:10 np0005475493 ceph-mgr[73869]: mgr[py] Loading python module 'rgw'
Oct  8 05:43:10 np0005475493 ceph-mgr[73869]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct  8 05:43:10 np0005475493 ceph-mgr[73869]: mgr[py] Loading python module 'rook'
Oct  8 05:43:10 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:43:10.721+0000 7f971cc6d140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct  8 05:43:11 np0005475493 ceph-mgr[73869]: mgr[py] Module rook has missing NOTIFY_TYPES member
Oct  8 05:43:11 np0005475493 ceph-mgr[73869]: mgr[py] Loading python module 'selftest'
Oct  8 05:43:11 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:43:11.245+0000 7f971cc6d140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Oct  8 05:43:11 np0005475493 podman[74012]: 2025-10-08 09:43:11.304731034 +0000 UTC m=+0.055518819 container create 9884b1522433a7472d7f22fc0ddbf67be2bd8fb14574b794061b1b2ac9c97796 (image=quay.io/ceph/ceph:v19, name=frosty_sinoussi, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Oct  8 05:43:11 np0005475493 ceph-mgr[73869]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct  8 05:43:11 np0005475493 ceph-mgr[73869]: mgr[py] Loading python module 'snap_schedule'
Oct  8 05:43:11 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:43:11.314+0000 7f971cc6d140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct  8 05:43:11 np0005475493 systemd[1]: Started libpod-conmon-9884b1522433a7472d7f22fc0ddbf67be2bd8fb14574b794061b1b2ac9c97796.scope.
Oct  8 05:43:11 np0005475493 podman[74012]: 2025-10-08 09:43:11.275612437 +0000 UTC m=+0.026400292 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  8 05:43:11 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:43:11 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e1a9dca2d7b30fdff36568d934a3ccab08f6e50914590d51c843954c99e8d8f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 05:43:11 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e1a9dca2d7b30fdff36568d934a3ccab08f6e50914590d51c843954c99e8d8f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 05:43:11 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e1a9dca2d7b30fdff36568d934a3ccab08f6e50914590d51c843954c99e8d8f/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  8 05:43:11 np0005475493 ceph-mgr[73869]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Oct  8 05:43:11 np0005475493 ceph-mgr[73869]: mgr[py] Loading python module 'stats'
Oct  8 05:43:11 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:43:11.392+0000 7f971cc6d140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Oct  8 05:43:11 np0005475493 podman[74012]: 2025-10-08 09:43:11.395136568 +0000 UTC m=+0.145924323 container init 9884b1522433a7472d7f22fc0ddbf67be2bd8fb14574b794061b1b2ac9c97796 (image=quay.io/ceph/ceph:v19, name=frosty_sinoussi, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  8 05:43:11 np0005475493 podman[74012]: 2025-10-08 09:43:11.402379782 +0000 UTC m=+0.153167527 container start 9884b1522433a7472d7f22fc0ddbf67be2bd8fb14574b794061b1b2ac9c97796 (image=quay.io/ceph/ceph:v19, name=frosty_sinoussi, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  8 05:43:11 np0005475493 podman[74012]: 2025-10-08 09:43:11.405741112 +0000 UTC m=+0.156528847 container attach 9884b1522433a7472d7f22fc0ddbf67be2bd8fb14574b794061b1b2ac9c97796 (image=quay.io/ceph/ceph:v19, name=frosty_sinoussi, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Oct  8 05:43:11 np0005475493 ceph-mgr[73869]: mgr[py] Loading python module 'status'
Oct  8 05:43:11 np0005475493 ceph-mgr[73869]: mgr[py] Module status has missing NOTIFY_TYPES member
Oct  8 05:43:11 np0005475493 ceph-mgr[73869]: mgr[py] Loading python module 'telegraf'
Oct  8 05:43:11 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:43:11.537+0000 7f971cc6d140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Oct  8 05:43:11 np0005475493 ceph-mgr[73869]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct  8 05:43:11 np0005475493 ceph-mgr[73869]: mgr[py] Loading python module 'telemetry'
Oct  8 05:43:11 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:43:11.601+0000 7f971cc6d140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct  8 05:43:11 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Oct  8 05:43:11 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2884133885' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct  8 05:43:11 np0005475493 frosty_sinoussi[74028]: 
Oct  8 05:43:11 np0005475493 frosty_sinoussi[74028]: {
Oct  8 05:43:11 np0005475493 frosty_sinoussi[74028]:    "fsid": "787292cc-8154-50c4-9e00-e9be3e817149",
Oct  8 05:43:11 np0005475493 frosty_sinoussi[74028]:    "health": {
Oct  8 05:43:11 np0005475493 frosty_sinoussi[74028]:        "status": "HEALTH_OK",
Oct  8 05:43:11 np0005475493 frosty_sinoussi[74028]:        "checks": {},
Oct  8 05:43:11 np0005475493 frosty_sinoussi[74028]:        "mutes": []
Oct  8 05:43:11 np0005475493 frosty_sinoussi[74028]:    },
Oct  8 05:43:11 np0005475493 frosty_sinoussi[74028]:    "election_epoch": 5,
Oct  8 05:43:11 np0005475493 frosty_sinoussi[74028]:    "quorum": [
Oct  8 05:43:11 np0005475493 frosty_sinoussi[74028]:        0
Oct  8 05:43:11 np0005475493 frosty_sinoussi[74028]:    ],
Oct  8 05:43:11 np0005475493 frosty_sinoussi[74028]:    "quorum_names": [
Oct  8 05:43:11 np0005475493 frosty_sinoussi[74028]:        "compute-0"
Oct  8 05:43:11 np0005475493 frosty_sinoussi[74028]:    ],
Oct  8 05:43:11 np0005475493 frosty_sinoussi[74028]:    "quorum_age": 8,
Oct  8 05:43:11 np0005475493 frosty_sinoussi[74028]:    "monmap": {
Oct  8 05:43:11 np0005475493 frosty_sinoussi[74028]:        "epoch": 1,
Oct  8 05:43:11 np0005475493 frosty_sinoussi[74028]:        "min_mon_release_name": "squid",
Oct  8 05:43:11 np0005475493 frosty_sinoussi[74028]:        "num_mons": 1
Oct  8 05:43:11 np0005475493 frosty_sinoussi[74028]:    },
Oct  8 05:43:11 np0005475493 frosty_sinoussi[74028]:    "osdmap": {
Oct  8 05:43:11 np0005475493 frosty_sinoussi[74028]:        "epoch": 1,
Oct  8 05:43:11 np0005475493 frosty_sinoussi[74028]:        "num_osds": 0,
Oct  8 05:43:11 np0005475493 frosty_sinoussi[74028]:        "num_up_osds": 0,
Oct  8 05:43:11 np0005475493 frosty_sinoussi[74028]:        "osd_up_since": 0,
Oct  8 05:43:11 np0005475493 frosty_sinoussi[74028]:        "num_in_osds": 0,
Oct  8 05:43:11 np0005475493 frosty_sinoussi[74028]:        "osd_in_since": 0,
Oct  8 05:43:11 np0005475493 frosty_sinoussi[74028]:        "num_remapped_pgs": 0
Oct  8 05:43:11 np0005475493 frosty_sinoussi[74028]:    },
Oct  8 05:43:11 np0005475493 frosty_sinoussi[74028]:    "pgmap": {
Oct  8 05:43:11 np0005475493 frosty_sinoussi[74028]:        "pgs_by_state": [],
Oct  8 05:43:11 np0005475493 frosty_sinoussi[74028]:        "num_pgs": 0,
Oct  8 05:43:11 np0005475493 frosty_sinoussi[74028]:        "num_pools": 0,
Oct  8 05:43:11 np0005475493 frosty_sinoussi[74028]:        "num_objects": 0,
Oct  8 05:43:11 np0005475493 frosty_sinoussi[74028]:        "data_bytes": 0,
Oct  8 05:43:11 np0005475493 frosty_sinoussi[74028]:        "bytes_used": 0,
Oct  8 05:43:11 np0005475493 frosty_sinoussi[74028]:        "bytes_avail": 0,
Oct  8 05:43:11 np0005475493 frosty_sinoussi[74028]:        "bytes_total": 0
Oct  8 05:43:11 np0005475493 frosty_sinoussi[74028]:    },
Oct  8 05:43:11 np0005475493 frosty_sinoussi[74028]:    "fsmap": {
Oct  8 05:43:11 np0005475493 frosty_sinoussi[74028]:        "epoch": 1,
Oct  8 05:43:11 np0005475493 frosty_sinoussi[74028]:        "btime": "2025-10-08T09:43:01:374245+0000",
Oct  8 05:43:11 np0005475493 frosty_sinoussi[74028]:        "by_rank": [],
Oct  8 05:43:11 np0005475493 frosty_sinoussi[74028]:        "up:standby": 0
Oct  8 05:43:11 np0005475493 frosty_sinoussi[74028]:    },
Oct  8 05:43:11 np0005475493 frosty_sinoussi[74028]:    "mgrmap": {
Oct  8 05:43:11 np0005475493 frosty_sinoussi[74028]:        "available": false,
Oct  8 05:43:11 np0005475493 frosty_sinoussi[74028]:        "num_standbys": 0,
Oct  8 05:43:11 np0005475493 frosty_sinoussi[74028]:        "modules": [
Oct  8 05:43:11 np0005475493 frosty_sinoussi[74028]:            "iostat",
Oct  8 05:43:11 np0005475493 frosty_sinoussi[74028]:            "nfs",
Oct  8 05:43:11 np0005475493 frosty_sinoussi[74028]:            "restful"
Oct  8 05:43:11 np0005475493 frosty_sinoussi[74028]:        ],
Oct  8 05:43:11 np0005475493 frosty_sinoussi[74028]:        "services": {}
Oct  8 05:43:11 np0005475493 frosty_sinoussi[74028]:    },
Oct  8 05:43:11 np0005475493 frosty_sinoussi[74028]:    "servicemap": {
Oct  8 05:43:11 np0005475493 frosty_sinoussi[74028]:        "epoch": 1,
Oct  8 05:43:11 np0005475493 frosty_sinoussi[74028]:        "modified": "2025-10-08T09:43:01.375926+0000",
Oct  8 05:43:11 np0005475493 frosty_sinoussi[74028]:        "services": {}
Oct  8 05:43:11 np0005475493 frosty_sinoussi[74028]:    },
Oct  8 05:43:11 np0005475493 frosty_sinoussi[74028]:    "progress_events": {}
Oct  8 05:43:11 np0005475493 frosty_sinoussi[74028]: }
Oct  8 05:43:11 np0005475493 systemd[1]: libpod-9884b1522433a7472d7f22fc0ddbf67be2bd8fb14574b794061b1b2ac9c97796.scope: Deactivated successfully.
Oct  8 05:43:11 np0005475493 podman[74012]: 2025-10-08 09:43:11.621168777 +0000 UTC m=+0.371956562 container died 9884b1522433a7472d7f22fc0ddbf67be2bd8fb14574b794061b1b2ac9c97796 (image=quay.io/ceph/ceph:v19, name=frosty_sinoussi, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Oct  8 05:43:11 np0005475493 systemd[1]: var-lib-containers-storage-overlay-9e1a9dca2d7b30fdff36568d934a3ccab08f6e50914590d51c843954c99e8d8f-merged.mount: Deactivated successfully.
Oct  8 05:43:11 np0005475493 podman[74012]: 2025-10-08 09:43:11.654294088 +0000 UTC m=+0.405081843 container remove 9884b1522433a7472d7f22fc0ddbf67be2bd8fb14574b794061b1b2ac9c97796 (image=quay.io/ceph/ceph:v19, name=frosty_sinoussi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct  8 05:43:11 np0005475493 systemd[1]: libpod-conmon-9884b1522433a7472d7f22fc0ddbf67be2bd8fb14574b794061b1b2ac9c97796.scope: Deactivated successfully.
Oct  8 05:43:11 np0005475493 ceph-mgr[73869]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct  8 05:43:11 np0005475493 ceph-mgr[73869]: mgr[py] Loading python module 'test_orchestrator'
Oct  8 05:43:11 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:43:11.754+0000 7f971cc6d140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct  8 05:43:11 np0005475493 ceph-mgr[73869]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct  8 05:43:11 np0005475493 ceph-mgr[73869]: mgr[py] Loading python module 'volumes'
Oct  8 05:43:11 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:43:11.965+0000 7f971cc6d140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct  8 05:43:12 np0005475493 ceph-mgr[73869]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct  8 05:43:12 np0005475493 ceph-mgr[73869]: mgr[py] Loading python module 'zabbix'
Oct  8 05:43:12 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:43:12.224+0000 7f971cc6d140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct  8 05:43:12 np0005475493 ceph-mgr[73869]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct  8 05:43:12 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:43:12.291+0000 7f971cc6d140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct  8 05:43:12 np0005475493 ceph-mgr[73869]: ms_deliver_dispatch: unhandled message 0x5613731ee9c0 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Oct  8 05:43:12 np0005475493 ceph-mon[73572]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.ixicfj
Oct  8 05:43:12 np0005475493 ceph-mgr[73869]: mgr handle_mgr_map Activating!
Oct  8 05:43:12 np0005475493 ceph-mgr[73869]: mgr handle_mgr_map I am now activating
Oct  8 05:43:12 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : mgrmap e2: compute-0.ixicfj(active, starting, since 0.0120135s)
Oct  8 05:43:12 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0)
Oct  8 05:43:12 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/1852375988' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "mds metadata"}]: dispatch
Oct  8 05:43:12 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).mds e1 all = 1
Oct  8 05:43:12 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0)
Oct  8 05:43:12 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/1852375988' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd metadata"}]: dispatch
Oct  8 05:43:12 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0)
Oct  8 05:43:12 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/1852375988' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "mon metadata"}]: dispatch
Oct  8 05:43:12 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Oct  8 05:43:12 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/1852375988' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Oct  8 05:43:12 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.ixicfj", "id": "compute-0.ixicfj"} v 0)
Oct  8 05:43:12 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/1852375988' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "mgr metadata", "who": "compute-0.ixicfj", "id": "compute-0.ixicfj"}]: dispatch
Oct  8 05:43:12 np0005475493 ceph-mgr[73869]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  8 05:43:12 np0005475493 ceph-mgr[73869]: mgr load Constructed class from module: balancer
Oct  8 05:43:12 np0005475493 ceph-mgr[73869]: [balancer INFO root] Starting
Oct  8 05:43:12 np0005475493 ceph-mon[73572]: log_channel(cluster) log [INF] : Manager daemon compute-0.ixicfj is now available
Oct  8 05:43:12 np0005475493 ceph-mgr[73869]: [balancer INFO root] Optimize plan auto_2025-10-08_09:43:12
Oct  8 05:43:12 np0005475493 ceph-mgr[73869]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  8 05:43:12 np0005475493 ceph-mgr[73869]: [balancer INFO root] do_upmap
Oct  8 05:43:12 np0005475493 ceph-mgr[73869]: [balancer INFO root] No pools available
Oct  8 05:43:12 np0005475493 ceph-mgr[73869]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  8 05:43:12 np0005475493 ceph-mgr[73869]: mgr load Constructed class from module: crash
Oct  8 05:43:12 np0005475493 ceph-mgr[73869]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  8 05:43:12 np0005475493 ceph-mgr[73869]: mgr load Constructed class from module: devicehealth
Oct  8 05:43:12 np0005475493 ceph-mgr[73869]: [devicehealth INFO root] Starting
Oct  8 05:43:12 np0005475493 ceph-mgr[73869]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  8 05:43:12 np0005475493 ceph-mgr[73869]: mgr load Constructed class from module: iostat
Oct  8 05:43:12 np0005475493 ceph-mgr[73869]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  8 05:43:12 np0005475493 ceph-mgr[73869]: mgr load Constructed class from module: nfs
Oct  8 05:43:12 np0005475493 ceph-mgr[73869]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  8 05:43:12 np0005475493 ceph-mgr[73869]: mgr load Constructed class from module: orchestrator
Oct  8 05:43:12 np0005475493 ceph-mgr[73869]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  8 05:43:12 np0005475493 ceph-mgr[73869]: mgr load Constructed class from module: pg_autoscaler
Oct  8 05:43:12 np0005475493 ceph-mgr[73869]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  8 05:43:12 np0005475493 ceph-mgr[73869]: mgr load Constructed class from module: progress
Oct  8 05:43:12 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] _maybe_adjust
Oct  8 05:43:12 np0005475493 ceph-mgr[73869]: [progress INFO root] Loading...
Oct  8 05:43:12 np0005475493 ceph-mgr[73869]: [progress INFO root] No stored events to load
Oct  8 05:43:12 np0005475493 ceph-mgr[73869]: [progress INFO root] Loaded [] historic events
Oct  8 05:43:12 np0005475493 ceph-mgr[73869]: [progress INFO root] Loaded OSDMap, ready.
Oct  8 05:43:12 np0005475493 ceph-mgr[73869]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  8 05:43:12 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] recovery thread starting
Oct  8 05:43:12 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] starting setup
Oct  8 05:43:12 np0005475493 ceph-mgr[73869]: mgr load Constructed class from module: rbd_support
Oct  8 05:43:12 np0005475493 ceph-mgr[73869]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  8 05:43:12 np0005475493 ceph-mgr[73869]: mgr load Constructed class from module: restful
Oct  8 05:43:12 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.ixicfj/mirror_snapshot_schedule"} v 0)
Oct  8 05:43:12 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/1852375988' entity='mgr.compute-0.ixicfj' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.ixicfj/mirror_snapshot_schedule"}]: dispatch
Oct  8 05:43:12 np0005475493 ceph-mgr[73869]: [restful INFO root] server_addr: :: server_port: 8003
Oct  8 05:43:12 np0005475493 ceph-mgr[73869]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  8 05:43:12 np0005475493 ceph-mgr[73869]: mgr load Constructed class from module: status
Oct  8 05:43:12 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  8 05:43:12 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Oct  8 05:43:12 np0005475493 ceph-mgr[73869]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  8 05:43:12 np0005475493 ceph-mgr[73869]: mgr load Constructed class from module: telemetry
Oct  8 05:43:12 np0005475493 ceph-mgr[73869]: [restful WARNING root] server not running: no certificate configured
Oct  8 05:43:12 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] PerfHandler: starting
Oct  8 05:43:12 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] TaskHandler: starting
Oct  8 05:43:12 np0005475493 ceph-mgr[73869]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  8 05:43:12 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/report_id}] v 0)
Oct  8 05:43:12 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.ixicfj/trash_purge_schedule"} v 0)
Oct  8 05:43:12 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/1852375988' entity='mgr.compute-0.ixicfj' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.ixicfj/trash_purge_schedule"}]: dispatch
Oct  8 05:43:12 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/1852375988' entity='mgr.compute-0.ixicfj' 
Oct  8 05:43:12 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/salt}] v 0)
Oct  8 05:43:12 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  8 05:43:12 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Oct  8 05:43:12 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] setup complete
Oct  8 05:43:12 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/1852375988' entity='mgr.compute-0.ixicfj' 
Oct  8 05:43:12 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/collection}] v 0)
Oct  8 05:43:12 np0005475493 ceph-mgr[73869]: mgr load Constructed class from module: volumes
Oct  8 05:43:12 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/1852375988' entity='mgr.compute-0.ixicfj' 
Oct  8 05:43:12 np0005475493 ceph-mon[73572]: Activating manager daemon compute-0.ixicfj
Oct  8 05:43:12 np0005475493 ceph-mon[73572]: Manager daemon compute-0.ixicfj is now available
Oct  8 05:43:12 np0005475493 ceph-mon[73572]: from='mgr.14102 192.168.122.100:0/1852375988' entity='mgr.compute-0.ixicfj' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.ixicfj/mirror_snapshot_schedule"}]: dispatch
Oct  8 05:43:12 np0005475493 ceph-mon[73572]: from='mgr.14102 192.168.122.100:0/1852375988' entity='mgr.compute-0.ixicfj' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.ixicfj/trash_purge_schedule"}]: dispatch
Oct  8 05:43:12 np0005475493 ceph-mon[73572]: from='mgr.14102 192.168.122.100:0/1852375988' entity='mgr.compute-0.ixicfj' 
Oct  8 05:43:12 np0005475493 ceph-mon[73572]: from='mgr.14102 192.168.122.100:0/1852375988' entity='mgr.compute-0.ixicfj' 
Oct  8 05:43:12 np0005475493 ceph-mon[73572]: from='mgr.14102 192.168.122.100:0/1852375988' entity='mgr.compute-0.ixicfj' 
Oct  8 05:43:13 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : mgrmap e3: compute-0.ixicfj(active, since 1.02767s)
Oct  8 05:43:13 np0005475493 podman[74146]: 2025-10-08 09:43:13.729205673 +0000 UTC m=+0.047450499 container create 2700014e68075614fd90a6fc64c95230ada3c55ddaf84e071d8e788d7965dff3 (image=quay.io/ceph/ceph:v19, name=upbeat_matsumoto, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  8 05:43:13 np0005475493 systemd[1]: Started libpod-conmon-2700014e68075614fd90a6fc64c95230ada3c55ddaf84e071d8e788d7965dff3.scope.
Oct  8 05:43:13 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:43:13 np0005475493 podman[74146]: 2025-10-08 09:43:13.709782762 +0000 UTC m=+0.028027838 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  8 05:43:13 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e26238a63b155f33001ca58e8b208a28abd890ab008f881f279a48c616fb6cf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 05:43:13 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e26238a63b155f33001ca58e8b208a28abd890ab008f881f279a48c616fb6cf/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  8 05:43:13 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e26238a63b155f33001ca58e8b208a28abd890ab008f881f279a48c616fb6cf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 05:43:13 np0005475493 podman[74146]: 2025-10-08 09:43:13.82432399 +0000 UTC m=+0.142568846 container init 2700014e68075614fd90a6fc64c95230ada3c55ddaf84e071d8e788d7965dff3 (image=quay.io/ceph/ceph:v19, name=upbeat_matsumoto, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True)
Oct  8 05:43:13 np0005475493 podman[74146]: 2025-10-08 09:43:13.834926313 +0000 UTC m=+0.153171169 container start 2700014e68075614fd90a6fc64c95230ada3c55ddaf84e071d8e788d7965dff3 (image=quay.io/ceph/ceph:v19, name=upbeat_matsumoto, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  8 05:43:13 np0005475493 podman[74146]: 2025-10-08 09:43:13.8402773 +0000 UTC m=+0.158522276 container attach 2700014e68075614fd90a6fc64c95230ada3c55ddaf84e071d8e788d7965dff3 (image=quay.io/ceph/ceph:v19, name=upbeat_matsumoto, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True)
Oct  8 05:43:14 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Oct  8 05:43:14 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2940749137' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct  8 05:43:14 np0005475493 upbeat_matsumoto[74162]: 
Oct  8 05:43:14 np0005475493 upbeat_matsumoto[74162]: {
Oct  8 05:43:14 np0005475493 upbeat_matsumoto[74162]:    "fsid": "787292cc-8154-50c4-9e00-e9be3e817149",
Oct  8 05:43:14 np0005475493 upbeat_matsumoto[74162]:    "health": {
Oct  8 05:43:14 np0005475493 upbeat_matsumoto[74162]:        "status": "HEALTH_OK",
Oct  8 05:43:14 np0005475493 upbeat_matsumoto[74162]:        "checks": {},
Oct  8 05:43:14 np0005475493 upbeat_matsumoto[74162]:        "mutes": []
Oct  8 05:43:14 np0005475493 upbeat_matsumoto[74162]:    },
Oct  8 05:43:14 np0005475493 upbeat_matsumoto[74162]:    "election_epoch": 5,
Oct  8 05:43:14 np0005475493 upbeat_matsumoto[74162]:    "quorum": [
Oct  8 05:43:14 np0005475493 upbeat_matsumoto[74162]:        0
Oct  8 05:43:14 np0005475493 upbeat_matsumoto[74162]:    ],
Oct  8 05:43:14 np0005475493 upbeat_matsumoto[74162]:    "quorum_names": [
Oct  8 05:43:14 np0005475493 upbeat_matsumoto[74162]:        "compute-0"
Oct  8 05:43:14 np0005475493 upbeat_matsumoto[74162]:    ],
Oct  8 05:43:14 np0005475493 upbeat_matsumoto[74162]:    "quorum_age": 10,
Oct  8 05:43:14 np0005475493 upbeat_matsumoto[74162]:    "monmap": {
Oct  8 05:43:14 np0005475493 upbeat_matsumoto[74162]:        "epoch": 1,
Oct  8 05:43:14 np0005475493 upbeat_matsumoto[74162]:        "min_mon_release_name": "squid",
Oct  8 05:43:14 np0005475493 upbeat_matsumoto[74162]:        "num_mons": 1
Oct  8 05:43:14 np0005475493 upbeat_matsumoto[74162]:    },
Oct  8 05:43:14 np0005475493 upbeat_matsumoto[74162]:    "osdmap": {
Oct  8 05:43:14 np0005475493 upbeat_matsumoto[74162]:        "epoch": 1,
Oct  8 05:43:14 np0005475493 upbeat_matsumoto[74162]:        "num_osds": 0,
Oct  8 05:43:14 np0005475493 upbeat_matsumoto[74162]:        "num_up_osds": 0,
Oct  8 05:43:14 np0005475493 upbeat_matsumoto[74162]:        "osd_up_since": 0,
Oct  8 05:43:14 np0005475493 upbeat_matsumoto[74162]:        "num_in_osds": 0,
Oct  8 05:43:14 np0005475493 upbeat_matsumoto[74162]:        "osd_in_since": 0,
Oct  8 05:43:14 np0005475493 upbeat_matsumoto[74162]:        "num_remapped_pgs": 0
Oct  8 05:43:14 np0005475493 upbeat_matsumoto[74162]:    },
Oct  8 05:43:14 np0005475493 upbeat_matsumoto[74162]:    "pgmap": {
Oct  8 05:43:14 np0005475493 upbeat_matsumoto[74162]:        "pgs_by_state": [],
Oct  8 05:43:14 np0005475493 upbeat_matsumoto[74162]:        "num_pgs": 0,
Oct  8 05:43:14 np0005475493 upbeat_matsumoto[74162]:        "num_pools": 0,
Oct  8 05:43:14 np0005475493 upbeat_matsumoto[74162]:        "num_objects": 0,
Oct  8 05:43:14 np0005475493 upbeat_matsumoto[74162]:        "data_bytes": 0,
Oct  8 05:43:14 np0005475493 upbeat_matsumoto[74162]:        "bytes_used": 0,
Oct  8 05:43:14 np0005475493 upbeat_matsumoto[74162]:        "bytes_avail": 0,
Oct  8 05:43:14 np0005475493 upbeat_matsumoto[74162]:        "bytes_total": 0
Oct  8 05:43:14 np0005475493 upbeat_matsumoto[74162]:    },
Oct  8 05:43:14 np0005475493 upbeat_matsumoto[74162]:    "fsmap": {
Oct  8 05:43:14 np0005475493 upbeat_matsumoto[74162]:        "epoch": 1,
Oct  8 05:43:14 np0005475493 upbeat_matsumoto[74162]:        "btime": "2025-10-08T09:43:01:374245+0000",
Oct  8 05:43:14 np0005475493 upbeat_matsumoto[74162]:        "by_rank": [],
Oct  8 05:43:14 np0005475493 upbeat_matsumoto[74162]:        "up:standby": 0
Oct  8 05:43:14 np0005475493 upbeat_matsumoto[74162]:    },
Oct  8 05:43:14 np0005475493 upbeat_matsumoto[74162]:    "mgrmap": {
Oct  8 05:43:14 np0005475493 upbeat_matsumoto[74162]:        "available": true,
Oct  8 05:43:14 np0005475493 upbeat_matsumoto[74162]:        "num_standbys": 0,
Oct  8 05:43:14 np0005475493 upbeat_matsumoto[74162]:        "modules": [
Oct  8 05:43:14 np0005475493 upbeat_matsumoto[74162]:            "iostat",
Oct  8 05:43:14 np0005475493 upbeat_matsumoto[74162]:            "nfs",
Oct  8 05:43:14 np0005475493 upbeat_matsumoto[74162]:            "restful"
Oct  8 05:43:14 np0005475493 upbeat_matsumoto[74162]:        ],
Oct  8 05:43:14 np0005475493 upbeat_matsumoto[74162]:        "services": {}
Oct  8 05:43:14 np0005475493 upbeat_matsumoto[74162]:    },
Oct  8 05:43:14 np0005475493 upbeat_matsumoto[74162]:    "servicemap": {
Oct  8 05:43:14 np0005475493 upbeat_matsumoto[74162]:        "epoch": 1,
Oct  8 05:43:14 np0005475493 upbeat_matsumoto[74162]:        "modified": "2025-10-08T09:43:01.375926+0000",
Oct  8 05:43:14 np0005475493 upbeat_matsumoto[74162]:        "services": {}
Oct  8 05:43:14 np0005475493 upbeat_matsumoto[74162]:    },
Oct  8 05:43:14 np0005475493 upbeat_matsumoto[74162]:    "progress_events": {}
Oct  8 05:43:14 np0005475493 upbeat_matsumoto[74162]: }
Oct  8 05:43:14 np0005475493 systemd[1]: libpod-2700014e68075614fd90a6fc64c95230ada3c55ddaf84e071d8e788d7965dff3.scope: Deactivated successfully.
Oct  8 05:43:14 np0005475493 podman[74146]: 2025-10-08 09:43:14.260188642 +0000 UTC m=+0.578433458 container died 2700014e68075614fd90a6fc64c95230ada3c55ddaf84e071d8e788d7965dff3 (image=quay.io/ceph/ceph:v19, name=upbeat_matsumoto, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct  8 05:43:14 np0005475493 systemd[1]: var-lib-containers-storage-overlay-5e26238a63b155f33001ca58e8b208a28abd890ab008f881f279a48c616fb6cf-merged.mount: Deactivated successfully.
Oct  8 05:43:14 np0005475493 podman[74146]: 2025-10-08 09:43:14.306942603 +0000 UTC m=+0.625187449 container remove 2700014e68075614fd90a6fc64c95230ada3c55ddaf84e071d8e788d7965dff3 (image=quay.io/ceph/ceph:v19, name=upbeat_matsumoto, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct  8 05:43:14 np0005475493 ceph-mgr[73869]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct  8 05:43:14 np0005475493 systemd[1]: libpod-conmon-2700014e68075614fd90a6fc64c95230ada3c55ddaf84e071d8e788d7965dff3.scope: Deactivated successfully.
Oct  8 05:43:14 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : mgrmap e4: compute-0.ixicfj(active, since 2s)
Oct  8 05:43:14 np0005475493 podman[74200]: 2025-10-08 09:43:14.378969677 +0000 UTC m=+0.054344479 container create 88e82cc624ffda0f2e43bec2b43e2df7bfdc6346edc3186ac606fae237b2a93d (image=quay.io/ceph/ceph:v19, name=pedantic_grothendieck, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Oct  8 05:43:14 np0005475493 systemd[1]: Started libpod-conmon-88e82cc624ffda0f2e43bec2b43e2df7bfdc6346edc3186ac606fae237b2a93d.scope.
Oct  8 05:43:14 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:43:14 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3d3020fc614a1361225c5e7906cd77f674cc54866ae7544ace2b8e496ab1701/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  8 05:43:14 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3d3020fc614a1361225c5e7906cd77f674cc54866ae7544ace2b8e496ab1701/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 05:43:14 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3d3020fc614a1361225c5e7906cd77f674cc54866ae7544ace2b8e496ab1701/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 05:43:14 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3d3020fc614a1361225c5e7906cd77f674cc54866ae7544ace2b8e496ab1701/merged/var/lib/ceph/user.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 05:43:14 np0005475493 podman[74200]: 2025-10-08 09:43:14.352537015 +0000 UTC m=+0.027911857 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  8 05:43:14 np0005475493 podman[74200]: 2025-10-08 09:43:14.476422124 +0000 UTC m=+0.151796966 container init 88e82cc624ffda0f2e43bec2b43e2df7bfdc6346edc3186ac606fae237b2a93d (image=quay.io/ceph/ceph:v19, name=pedantic_grothendieck, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Oct  8 05:43:14 np0005475493 podman[74200]: 2025-10-08 09:43:14.486042958 +0000 UTC m=+0.161417720 container start 88e82cc624ffda0f2e43bec2b43e2df7bfdc6346edc3186ac606fae237b2a93d (image=quay.io/ceph/ceph:v19, name=pedantic_grothendieck, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  8 05:43:14 np0005475493 podman[74200]: 2025-10-08 09:43:14.48958729 +0000 UTC m=+0.164962132 container attach 88e82cc624ffda0f2e43bec2b43e2df7bfdc6346edc3186ac606fae237b2a93d (image=quay.io/ceph/ceph:v19, name=pedantic_grothendieck, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True)
Oct  8 05:43:14 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0)
Oct  8 05:43:14 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/523082670' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Oct  8 05:43:14 np0005475493 pedantic_grothendieck[74216]: 
Oct  8 05:43:14 np0005475493 pedantic_grothendieck[74216]: [global]
Oct  8 05:43:14 np0005475493 pedantic_grothendieck[74216]: #011fsid = 787292cc-8154-50c4-9e00-e9be3e817149
Oct  8 05:43:14 np0005475493 pedantic_grothendieck[74216]: #011mon_host = [v2:192.168.122.100:3300,v1:192.168.122.100:6789]
Oct  8 05:43:14 np0005475493 systemd[1]: libpod-88e82cc624ffda0f2e43bec2b43e2df7bfdc6346edc3186ac606fae237b2a93d.scope: Deactivated successfully.
Oct  8 05:43:14 np0005475493 conmon[74216]: conmon 88e82cc624ffda0f2e43 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-88e82cc624ffda0f2e43bec2b43e2df7bfdc6346edc3186ac606fae237b2a93d.scope/container/memory.events
Oct  8 05:43:14 np0005475493 podman[74200]: 2025-10-08 09:43:14.847732509 +0000 UTC m=+0.523107331 container died 88e82cc624ffda0f2e43bec2b43e2df7bfdc6346edc3186ac606fae237b2a93d (image=quay.io/ceph/ceph:v19, name=pedantic_grothendieck, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  8 05:43:14 np0005475493 systemd[1]: var-lib-containers-storage-overlay-a3d3020fc614a1361225c5e7906cd77f674cc54866ae7544ace2b8e496ab1701-merged.mount: Deactivated successfully.
Oct  8 05:43:14 np0005475493 podman[74200]: 2025-10-08 09:43:14.896629969 +0000 UTC m=+0.572004761 container remove 88e82cc624ffda0f2e43bec2b43e2df7bfdc6346edc3186ac606fae237b2a93d (image=quay.io/ceph/ceph:v19, name=pedantic_grothendieck, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  8 05:43:14 np0005475493 systemd[1]: libpod-conmon-88e82cc624ffda0f2e43bec2b43e2df7bfdc6346edc3186ac606fae237b2a93d.scope: Deactivated successfully.
Oct  8 05:43:14 np0005475493 podman[74255]: 2025-10-08 09:43:14.952243047 +0000 UTC m=+0.036245899 container create c3dae8c46d316b89d5f068d9b2ce85fbaba67e859402475546ea3bca19213d55 (image=quay.io/ceph/ceph:v19, name=recursing_bouman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  8 05:43:14 np0005475493 systemd[1]: Started libpod-conmon-c3dae8c46d316b89d5f068d9b2ce85fbaba67e859402475546ea3bca19213d55.scope.
Oct  8 05:43:15 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:43:15 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/141243ac16d31f61cffa008ee4b1e1da808e7a15b6a2df374c04e3f043a3dc30/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 05:43:15 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/141243ac16d31f61cffa008ee4b1e1da808e7a15b6a2df374c04e3f043a3dc30/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  8 05:43:15 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/141243ac16d31f61cffa008ee4b1e1da808e7a15b6a2df374c04e3f043a3dc30/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 05:43:15 np0005475493 podman[74255]: 2025-10-08 09:43:15.029021222 +0000 UTC m=+0.113024094 container init c3dae8c46d316b89d5f068d9b2ce85fbaba67e859402475546ea3bca19213d55 (image=quay.io/ceph/ceph:v19, name=recursing_bouman, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Oct  8 05:43:15 np0005475493 podman[74255]: 2025-10-08 09:43:14.936125576 +0000 UTC m=+0.020128478 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  8 05:43:15 np0005475493 podman[74255]: 2025-10-08 09:43:15.033711623 +0000 UTC m=+0.117714475 container start c3dae8c46d316b89d5f068d9b2ce85fbaba67e859402475546ea3bca19213d55 (image=quay.io/ceph/ceph:v19, name=recursing_bouman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct  8 05:43:15 np0005475493 podman[74255]: 2025-10-08 09:43:15.037078423 +0000 UTC m=+0.121081275 container attach c3dae8c46d316b89d5f068d9b2ce85fbaba67e859402475546ea3bca19213d55 (image=quay.io/ceph/ceph:v19, name=recursing_bouman, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  8 05:43:15 np0005475493 ceph-mon[73572]: from='client.? 192.168.122.100:0/523082670' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Oct  8 05:43:15 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module enable", "module": "cephadm"} v 0)
Oct  8 05:43:15 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1396328474' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch
Oct  8 05:43:16 np0005475493 ceph-mgr[73869]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct  8 05:43:16 np0005475493 ceph-mon[73572]: from='client.? 192.168.122.100:0/1396328474' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch
Oct  8 05:43:16 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1396328474' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Oct  8 05:43:16 np0005475493 ceph-mgr[73869]: mgr handle_mgr_map respawning because set of enabled modules changed!
Oct  8 05:43:16 np0005475493 ceph-mgr[73869]: mgr respawn  e: '/usr/bin/ceph-mgr'
Oct  8 05:43:16 np0005475493 ceph-mgr[73869]: mgr respawn  0: '/usr/bin/ceph-mgr'
Oct  8 05:43:16 np0005475493 ceph-mgr[73869]: mgr respawn  1: '-n'
Oct  8 05:43:16 np0005475493 ceph-mgr[73869]: mgr respawn  2: 'mgr.compute-0.ixicfj'
Oct  8 05:43:16 np0005475493 ceph-mgr[73869]: mgr respawn  3: '-f'
Oct  8 05:43:16 np0005475493 ceph-mgr[73869]: mgr respawn  4: '--setuser'
Oct  8 05:43:16 np0005475493 ceph-mgr[73869]: mgr respawn  5: 'ceph'
Oct  8 05:43:16 np0005475493 ceph-mgr[73869]: mgr respawn  6: '--setgroup'
Oct  8 05:43:16 np0005475493 ceph-mgr[73869]: mgr respawn  7: 'ceph'
Oct  8 05:43:16 np0005475493 ceph-mgr[73869]: mgr respawn  8: '--default-log-to-file=false'
Oct  8 05:43:16 np0005475493 ceph-mgr[73869]: mgr respawn  9: '--default-log-to-journald=true'
Oct  8 05:43:16 np0005475493 ceph-mgr[73869]: mgr respawn  10: '--default-log-to-stderr=false'
Oct  8 05:43:16 np0005475493 ceph-mgr[73869]: mgr respawn respawning with exe /usr/bin/ceph-mgr
Oct  8 05:43:16 np0005475493 ceph-mgr[73869]: mgr respawn  exe_path /proc/self/exe
Oct  8 05:43:16 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : mgrmap e5: compute-0.ixicfj(active, since 4s)
Oct  8 05:43:16 np0005475493 systemd[1]: libpod-c3dae8c46d316b89d5f068d9b2ce85fbaba67e859402475546ea3bca19213d55.scope: Deactivated successfully.
Oct  8 05:43:16 np0005475493 podman[74255]: 2025-10-08 09:43:16.373730016 +0000 UTC m=+1.457732888 container died c3dae8c46d316b89d5f068d9b2ce85fbaba67e859402475546ea3bca19213d55 (image=quay.io/ceph/ceph:v19, name=recursing_bouman, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Oct  8 05:43:16 np0005475493 systemd[1]: var-lib-containers-storage-overlay-141243ac16d31f61cffa008ee4b1e1da808e7a15b6a2df374c04e3f043a3dc30-merged.mount: Deactivated successfully.
Oct  8 05:43:16 np0005475493 podman[74255]: 2025-10-08 09:43:16.413057542 +0000 UTC m=+1.497060394 container remove c3dae8c46d316b89d5f068d9b2ce85fbaba67e859402475546ea3bca19213d55 (image=quay.io/ceph/ceph:v19, name=recursing_bouman, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  8 05:43:16 np0005475493 systemd[1]: libpod-conmon-c3dae8c46d316b89d5f068d9b2ce85fbaba67e859402475546ea3bca19213d55.scope: Deactivated successfully.
Oct  8 05:43:16 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ignoring --setuser ceph since I am not root
Oct  8 05:43:16 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ignoring --setgroup ceph since I am not root
Oct  8 05:43:16 np0005475493 ceph-mgr[73869]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Oct  8 05:43:16 np0005475493 ceph-mgr[73869]: pidfile_write: ignore empty --pid-file
Oct  8 05:43:16 np0005475493 ceph-mgr[73869]: mgr[py] Loading python module 'alerts'
Oct  8 05:43:16 np0005475493 podman[74309]: 2025-10-08 09:43:16.491356971 +0000 UTC m=+0.055902303 container create aa18a1faf3e80ce6853e7375bd4a75f6f053a0368b9914f847bd55dcb6aa98fd (image=quay.io/ceph/ceph:v19, name=amazing_darwin, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  8 05:43:16 np0005475493 systemd[1]: Started libpod-conmon-aa18a1faf3e80ce6853e7375bd4a75f6f053a0368b9914f847bd55dcb6aa98fd.scope.
Oct  8 05:43:16 np0005475493 podman[74309]: 2025-10-08 09:43:16.463416345 +0000 UTC m=+0.027961737 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  8 05:43:16 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:43:16 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba485fb532334a40119488da65102cb36200898a705131ab64373b2284784415/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 05:43:16 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba485fb532334a40119488da65102cb36200898a705131ab64373b2284784415/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  8 05:43:16 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba485fb532334a40119488da65102cb36200898a705131ab64373b2284784415/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 05:43:16 np0005475493 ceph-mgr[73869]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct  8 05:43:16 np0005475493 ceph-mgr[73869]: mgr[py] Loading python module 'balancer'
Oct  8 05:43:16 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:43:16.576+0000 7fa8781df140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct  8 05:43:16 np0005475493 podman[74309]: 2025-10-08 09:43:16.581300612 +0000 UTC m=+0.145845974 container init aa18a1faf3e80ce6853e7375bd4a75f6f053a0368b9914f847bd55dcb6aa98fd (image=quay.io/ceph/ceph:v19, name=amazing_darwin, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Oct  8 05:43:16 np0005475493 podman[74309]: 2025-10-08 09:43:16.590609794 +0000 UTC m=+0.155155086 container start aa18a1faf3e80ce6853e7375bd4a75f6f053a0368b9914f847bd55dcb6aa98fd (image=quay.io/ceph/ceph:v19, name=amazing_darwin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Oct  8 05:43:16 np0005475493 podman[74309]: 2025-10-08 09:43:16.594542398 +0000 UTC m=+0.159087770 container attach aa18a1faf3e80ce6853e7375bd4a75f6f053a0368b9914f847bd55dcb6aa98fd (image=quay.io/ceph/ceph:v19, name=amazing_darwin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Oct  8 05:43:16 np0005475493 ceph-mgr[73869]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct  8 05:43:16 np0005475493 ceph-mgr[73869]: mgr[py] Loading python module 'cephadm'
Oct  8 05:43:16 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:43:16.662+0000 7fa8781df140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct  8 05:43:17 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0)
Oct  8 05:43:17 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1658657222' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Oct  8 05:43:17 np0005475493 amazing_darwin[74345]: {
Oct  8 05:43:17 np0005475493 amazing_darwin[74345]:    "epoch": 5,
Oct  8 05:43:17 np0005475493 amazing_darwin[74345]:    "available": true,
Oct  8 05:43:17 np0005475493 amazing_darwin[74345]:    "active_name": "compute-0.ixicfj",
Oct  8 05:43:17 np0005475493 amazing_darwin[74345]:    "num_standby": 0
Oct  8 05:43:17 np0005475493 amazing_darwin[74345]: }
Oct  8 05:43:17 np0005475493 systemd[1]: libpod-aa18a1faf3e80ce6853e7375bd4a75f6f053a0368b9914f847bd55dcb6aa98fd.scope: Deactivated successfully.
Oct  8 05:43:17 np0005475493 podman[74309]: 2025-10-08 09:43:17.047496831 +0000 UTC m=+0.612042123 container died aa18a1faf3e80ce6853e7375bd4a75f6f053a0368b9914f847bd55dcb6aa98fd (image=quay.io/ceph/ceph:v19, name=amazing_darwin, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True)
Oct  8 05:43:17 np0005475493 systemd[1]: var-lib-containers-storage-overlay-ba485fb532334a40119488da65102cb36200898a705131ab64373b2284784415-merged.mount: Deactivated successfully.
Oct  8 05:43:17 np0005475493 podman[74309]: 2025-10-08 09:43:17.082781192 +0000 UTC m=+0.647326484 container remove aa18a1faf3e80ce6853e7375bd4a75f6f053a0368b9914f847bd55dcb6aa98fd (image=quay.io/ceph/ceph:v19, name=amazing_darwin, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  8 05:43:17 np0005475493 systemd[1]: libpod-conmon-aa18a1faf3e80ce6853e7375bd4a75f6f053a0368b9914f847bd55dcb6aa98fd.scope: Deactivated successfully.
Oct  8 05:43:17 np0005475493 podman[74394]: 2025-10-08 09:43:17.145970137 +0000 UTC m=+0.043105539 container create a9737eb4c6a8f2df8e88719d28fc2d7997e0910b8043b193980c1f679cb0b7be (image=quay.io/ceph/ceph:v19, name=jovial_shannon, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  8 05:43:17 np0005475493 systemd[1]: Started libpod-conmon-a9737eb4c6a8f2df8e88719d28fc2d7997e0910b8043b193980c1f679cb0b7be.scope.
Oct  8 05:43:17 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:43:17 np0005475493 podman[74394]: 2025-10-08 09:43:17.124801412 +0000 UTC m=+0.021936854 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  8 05:43:17 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c79ae716067f1d69df891c3da9889fe6871f99ff24fc4112052e47f11a064c52/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 05:43:17 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c79ae716067f1d69df891c3da9889fe6871f99ff24fc4112052e47f11a064c52/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 05:43:17 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c79ae716067f1d69df891c3da9889fe6871f99ff24fc4112052e47f11a064c52/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  8 05:43:17 np0005475493 podman[74394]: 2025-10-08 09:43:17.242960381 +0000 UTC m=+0.140095853 container init a9737eb4c6a8f2df8e88719d28fc2d7997e0910b8043b193980c1f679cb0b7be (image=quay.io/ceph/ceph:v19, name=jovial_shannon, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default)
Oct  8 05:43:17 np0005475493 podman[74394]: 2025-10-08 09:43:17.252641196 +0000 UTC m=+0.149776618 container start a9737eb4c6a8f2df8e88719d28fc2d7997e0910b8043b193980c1f679cb0b7be (image=quay.io/ceph/ceph:v19, name=jovial_shannon, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct  8 05:43:17 np0005475493 podman[74394]: 2025-10-08 09:43:17.257160636 +0000 UTC m=+0.154296028 container attach a9737eb4c6a8f2df8e88719d28fc2d7997e0910b8043b193980c1f679cb0b7be (image=quay.io/ceph/ceph:v19, name=jovial_shannon, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  8 05:43:17 np0005475493 ceph-mon[73572]: from='client.? 192.168.122.100:0/1396328474' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Oct  8 05:43:17 np0005475493 ceph-mgr[73869]: mgr[py] Loading python module 'crash'
Oct  8 05:43:17 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:43:17.469+0000 7fa8781df140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Oct  8 05:43:17 np0005475493 ceph-mgr[73869]: mgr[py] Module crash has missing NOTIFY_TYPES member
Oct  8 05:43:17 np0005475493 ceph-mgr[73869]: mgr[py] Loading python module 'dashboard'
Oct  8 05:43:18 np0005475493 ceph-mgr[73869]: mgr[py] Loading python module 'devicehealth'
Oct  8 05:43:18 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:43:18.098+0000 7fa8781df140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct  8 05:43:18 np0005475493 ceph-mgr[73869]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct  8 05:43:18 np0005475493 ceph-mgr[73869]: mgr[py] Loading python module 'diskprediction_local'
Oct  8 05:43:18 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Oct  8 05:43:18 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Oct  8 05:43:18 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]:  from numpy import show_config as show_numpy_config
Oct  8 05:43:18 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:43:18.261+0000 7fa8781df140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct  8 05:43:18 np0005475493 ceph-mgr[73869]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct  8 05:43:18 np0005475493 ceph-mgr[73869]: mgr[py] Loading python module 'influx'
Oct  8 05:43:18 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:43:18.338+0000 7fa8781df140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Oct  8 05:43:18 np0005475493 ceph-mgr[73869]: mgr[py] Module influx has missing NOTIFY_TYPES member
Oct  8 05:43:18 np0005475493 ceph-mgr[73869]: mgr[py] Loading python module 'insights'
Oct  8 05:43:18 np0005475493 ceph-mgr[73869]: mgr[py] Loading python module 'iostat'
Oct  8 05:43:18 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:43:18.468+0000 7fa8781df140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct  8 05:43:18 np0005475493 ceph-mgr[73869]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct  8 05:43:18 np0005475493 ceph-mgr[73869]: mgr[py] Loading python module 'k8sevents'
Oct  8 05:43:18 np0005475493 ceph-mgr[73869]: mgr[py] Loading python module 'localpool'
Oct  8 05:43:18 np0005475493 ceph-mgr[73869]: mgr[py] Loading python module 'mds_autoscaler'
Oct  8 05:43:19 np0005475493 ceph-mgr[73869]: mgr[py] Loading python module 'mirroring'
Oct  8 05:43:19 np0005475493 ceph-mgr[73869]: mgr[py] Loading python module 'nfs'
Oct  8 05:43:19 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:43:19.436+0000 7fa8781df140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct  8 05:43:19 np0005475493 ceph-mgr[73869]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct  8 05:43:19 np0005475493 ceph-mgr[73869]: mgr[py] Loading python module 'orchestrator'
Oct  8 05:43:19 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:43:19.680+0000 7fa8781df140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct  8 05:43:19 np0005475493 ceph-mgr[73869]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct  8 05:43:19 np0005475493 ceph-mgr[73869]: mgr[py] Loading python module 'osd_perf_query'
Oct  8 05:43:19 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:43:19.768+0000 7fa8781df140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct  8 05:43:19 np0005475493 ceph-mgr[73869]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct  8 05:43:19 np0005475493 ceph-mgr[73869]: mgr[py] Loading python module 'osd_support'
Oct  8 05:43:19 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:43:19.880+0000 7fa8781df140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct  8 05:43:19 np0005475493 ceph-mgr[73869]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct  8 05:43:19 np0005475493 ceph-mgr[73869]: mgr[py] Loading python module 'pg_autoscaler'
Oct  8 05:43:19 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:43:19.960+0000 7fa8781df140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct  8 05:43:19 np0005475493 ceph-mgr[73869]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct  8 05:43:19 np0005475493 ceph-mgr[73869]: mgr[py] Loading python module 'progress'
Oct  8 05:43:20 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:43:20.035+0000 7fa8781df140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Oct  8 05:43:20 np0005475493 ceph-mgr[73869]: mgr[py] Module progress has missing NOTIFY_TYPES member
Oct  8 05:43:20 np0005475493 ceph-mgr[73869]: mgr[py] Loading python module 'prometheus'
Oct  8 05:43:20 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:43:20.375+0000 7fa8781df140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct  8 05:43:20 np0005475493 ceph-mgr[73869]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct  8 05:43:20 np0005475493 ceph-mgr[73869]: mgr[py] Loading python module 'rbd_support'
Oct  8 05:43:20 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:43:20.475+0000 7fa8781df140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct  8 05:43:20 np0005475493 ceph-mgr[73869]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct  8 05:43:20 np0005475493 ceph-mgr[73869]: mgr[py] Loading python module 'restful'
Oct  8 05:43:20 np0005475493 ceph-mgr[73869]: mgr[py] Loading python module 'rgw'
Oct  8 05:43:20 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:43:20.894+0000 7fa8781df140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct  8 05:43:20 np0005475493 ceph-mgr[73869]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct  8 05:43:20 np0005475493 ceph-mgr[73869]: mgr[py] Loading python module 'rook'
Oct  8 05:43:21 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:43:21.447+0000 7fa8781df140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Oct  8 05:43:21 np0005475493 ceph-mgr[73869]: mgr[py] Module rook has missing NOTIFY_TYPES member
Oct  8 05:43:21 np0005475493 ceph-mgr[73869]: mgr[py] Loading python module 'selftest'
Oct  8 05:43:21 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:43:21.513+0000 7fa8781df140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct  8 05:43:21 np0005475493 ceph-mgr[73869]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct  8 05:43:21 np0005475493 ceph-mgr[73869]: mgr[py] Loading python module 'snap_schedule'
Oct  8 05:43:21 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:43:21.588+0000 7fa8781df140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Oct  8 05:43:21 np0005475493 ceph-mgr[73869]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Oct  8 05:43:21 np0005475493 ceph-mgr[73869]: mgr[py] Loading python module 'stats'
Oct  8 05:43:21 np0005475493 ceph-mgr[73869]: mgr[py] Loading python module 'status'
Oct  8 05:43:21 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:43:21.731+0000 7fa8781df140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Oct  8 05:43:21 np0005475493 ceph-mgr[73869]: mgr[py] Module status has missing NOTIFY_TYPES member
Oct  8 05:43:21 np0005475493 ceph-mgr[73869]: mgr[py] Loading python module 'telegraf'
Oct  8 05:43:21 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:43:21.798+0000 7fa8781df140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct  8 05:43:21 np0005475493 ceph-mgr[73869]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct  8 05:43:21 np0005475493 ceph-mgr[73869]: mgr[py] Loading python module 'telemetry'
Oct  8 05:43:21 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:43:21.941+0000 7fa8781df140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct  8 05:43:21 np0005475493 ceph-mgr[73869]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct  8 05:43:21 np0005475493 ceph-mgr[73869]: mgr[py] Loading python module 'test_orchestrator'
Oct  8 05:43:22 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:43:22.158+0000 7fa8781df140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct  8 05:43:22 np0005475493 ceph-mgr[73869]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct  8 05:43:22 np0005475493 ceph-mgr[73869]: mgr[py] Loading python module 'volumes'
Oct  8 05:43:22 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:43:22.432+0000 7fa8781df140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct  8 05:43:22 np0005475493 ceph-mgr[73869]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct  8 05:43:22 np0005475493 ceph-mgr[73869]: mgr[py] Loading python module 'zabbix'
Oct  8 05:43:22 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:43:22.511+0000 7fa8781df140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct  8 05:43:22 np0005475493 ceph-mgr[73869]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct  8 05:43:22 np0005475493 ceph-mon[73572]: log_channel(cluster) log [INF] : Active manager daemon compute-0.ixicfj restarted
Oct  8 05:43:22 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e1 do_prune osdmap full prune enabled
Oct  8 05:43:22 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e1 encode_pending skipping prime_pg_temp; mapping job did not start
Oct  8 05:43:22 np0005475493 ceph-mon[73572]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.ixicfj
Oct  8 05:43:22 np0005475493 ceph-mgr[73869]: ms_deliver_dispatch: unhandled message 0x5624c2d68d00 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Oct  8 05:43:22 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e1 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Oct  8 05:43:22 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e1 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Oct  8 05:43:22 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e2 e2: 0 total, 0 up, 0 in
Oct  8 05:43:22 np0005475493 ceph-mgr[73869]: mgr handle_mgr_map Activating!
Oct  8 05:43:22 np0005475493 ceph-mgr[73869]: mgr handle_mgr_map I am now activating
Oct  8 05:43:22 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e2: 0 total, 0 up, 0 in
Oct  8 05:43:22 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : mgrmap e6: compute-0.ixicfj(active, starting, since 0.0133128s)
Oct  8 05:43:22 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Oct  8 05:43:22 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Oct  8 05:43:22 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.ixicfj", "id": "compute-0.ixicfj"} v 0)
Oct  8 05:43:22 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "mgr metadata", "who": "compute-0.ixicfj", "id": "compute-0.ixicfj"}]: dispatch
Oct  8 05:43:22 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0)
Oct  8 05:43:22 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "mds metadata"}]: dispatch
Oct  8 05:43:22 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).mds e1 all = 1
Oct  8 05:43:22 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0)
Oct  8 05:43:22 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd metadata"}]: dispatch
Oct  8 05:43:22 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0)
Oct  8 05:43:22 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "mon metadata"}]: dispatch
Oct  8 05:43:22 np0005475493 ceph-mgr[73869]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  8 05:43:22 np0005475493 ceph-mgr[73869]: mgr load Constructed class from module: balancer
Oct  8 05:43:22 np0005475493 ceph-mgr[73869]: [balancer INFO root] Starting
Oct  8 05:43:22 np0005475493 ceph-mon[73572]: log_channel(cluster) log [INF] : Manager daemon compute-0.ixicfj is now available
Oct  8 05:43:22 np0005475493 ceph-mgr[73869]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  8 05:43:22 np0005475493 ceph-mgr[73869]: [balancer INFO root] Optimize plan auto_2025-10-08_09:43:22
Oct  8 05:43:22 np0005475493 ceph-mgr[73869]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  8 05:43:22 np0005475493 ceph-mgr[73869]: [balancer INFO root] do_upmap
Oct  8 05:43:22 np0005475493 ceph-mgr[73869]: [balancer INFO root] No pools available
Oct  8 05:43:22 np0005475493 ceph-mgr[73869]: [cephadm INFO cephadm.migrations] Found migration_current of "None". Setting to last migration.
Oct  8 05:43:22 np0005475493 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Found migration_current of "None". Setting to last migration.
Oct  8 05:43:22 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/migration_current}] v 0)
Oct  8 05:43:22 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:43:22 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/config_checks}] v 0)
Oct  8 05:43:22 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:43:22 np0005475493 ceph-mgr[73869]: mgr load Constructed class from module: cephadm
Oct  8 05:43:22 np0005475493 ceph-mgr[73869]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  8 05:43:22 np0005475493 ceph-mgr[73869]: mgr load Constructed class from module: crash
Oct  8 05:43:22 np0005475493 ceph-mon[73572]: Active manager daemon compute-0.ixicfj restarted
Oct  8 05:43:22 np0005475493 ceph-mon[73572]: Activating manager daemon compute-0.ixicfj
Oct  8 05:43:22 np0005475493 ceph-mon[73572]: Manager daemon compute-0.ixicfj is now available
Oct  8 05:43:22 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:43:22 np0005475493 ceph-mgr[73869]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  8 05:43:22 np0005475493 ceph-mgr[73869]: mgr load Constructed class from module: devicehealth
Oct  8 05:43:22 np0005475493 ceph-mgr[73869]: [devicehealth INFO root] Starting
Oct  8 05:43:22 np0005475493 ceph-mgr[73869]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  8 05:43:22 np0005475493 ceph-mgr[73869]: mgr load Constructed class from module: iostat
Oct  8 05:43:22 np0005475493 ceph-mgr[73869]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  8 05:43:22 np0005475493 ceph-mgr[73869]: mgr load Constructed class from module: nfs
Oct  8 05:43:22 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Oct  8 05:43:22 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Oct  8 05:43:22 np0005475493 ceph-mgr[73869]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  8 05:43:22 np0005475493 ceph-mgr[73869]: mgr load Constructed class from module: orchestrator
Oct  8 05:43:22 np0005475493 ceph-mgr[73869]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  8 05:43:22 np0005475493 ceph-mgr[73869]: mgr load Constructed class from module: pg_autoscaler
Oct  8 05:43:22 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Oct  8 05:43:22 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Oct  8 05:43:22 np0005475493 ceph-mgr[73869]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  8 05:43:22 np0005475493 ceph-mgr[73869]: mgr load Constructed class from module: progress
Oct  8 05:43:22 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] _maybe_adjust
Oct  8 05:43:22 np0005475493 ceph-mgr[73869]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  8 05:43:22 np0005475493 ceph-mgr[73869]: [progress INFO root] Loading...
Oct  8 05:43:22 np0005475493 ceph-mgr[73869]: [progress INFO root] No stored events to load
Oct  8 05:43:22 np0005475493 ceph-mgr[73869]: [progress INFO root] Loaded [] historic events
Oct  8 05:43:22 np0005475493 ceph-mgr[73869]: [progress INFO root] Loaded OSDMap, ready.
Oct  8 05:43:22 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] recovery thread starting
Oct  8 05:43:22 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] starting setup
Oct  8 05:43:22 np0005475493 ceph-mgr[73869]: mgr load Constructed class from module: rbd_support
Oct  8 05:43:22 np0005475493 ceph-mgr[73869]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  8 05:43:22 np0005475493 ceph-mgr[73869]: mgr load Constructed class from module: restful
Oct  8 05:43:22 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.ixicfj/mirror_snapshot_schedule"} v 0)
Oct  8 05:43:22 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.ixicfj/mirror_snapshot_schedule"}]: dispatch
Oct  8 05:43:22 np0005475493 ceph-mgr[73869]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  8 05:43:22 np0005475493 ceph-mgr[73869]: mgr load Constructed class from module: status
Oct  8 05:43:22 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  8 05:43:22 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Oct  8 05:43:22 np0005475493 ceph-mgr[73869]: [restful INFO root] server_addr: :: server_port: 8003
Oct  8 05:43:22 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] PerfHandler: starting
Oct  8 05:43:22 np0005475493 ceph-mgr[73869]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  8 05:43:22 np0005475493 ceph-mgr[73869]: [restful WARNING root] server not running: no certificate configured
Oct  8 05:43:22 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] TaskHandler: starting
Oct  8 05:43:22 np0005475493 ceph-mgr[73869]: mgr load Constructed class from module: telemetry
Oct  8 05:43:22 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.ixicfj/trash_purge_schedule"} v 0)
Oct  8 05:43:22 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.ixicfj/trash_purge_schedule"}]: dispatch
Oct  8 05:43:22 np0005475493 ceph-mgr[73869]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  8 05:43:22 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  8 05:43:22 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Oct  8 05:43:22 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] setup complete
Oct  8 05:43:22 np0005475493 ceph-mgr[73869]: mgr load Constructed class from module: volumes
Oct  8 05:43:23 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cert_store.cert.agent_endpoint_root_cert}] v 0)
Oct  8 05:43:23 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:43:23 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cert_store.key.agent_endpoint_key}] v 0)
Oct  8 05:43:23 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:43:23 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1019931263 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  8 05:43:23 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : mgrmap e7: compute-0.ixicfj(active, since 1.02644s)
Oct  8 05:43:23 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.14126 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
Oct  8 05:43:23 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.14126 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
Oct  8 05:43:23 np0005475493 jovial_shannon[74410]: {
Oct  8 05:43:23 np0005475493 jovial_shannon[74410]:    "mgrmap_epoch": 7,
Oct  8 05:43:23 np0005475493 jovial_shannon[74410]:    "initialized": true
Oct  8 05:43:23 np0005475493 jovial_shannon[74410]: }
Oct  8 05:43:23 np0005475493 systemd[1]: libpod-a9737eb4c6a8f2df8e88719d28fc2d7997e0910b8043b193980c1f679cb0b7be.scope: Deactivated successfully.
Oct  8 05:43:23 np0005475493 podman[74394]: 2025-10-08 09:43:23.567244402 +0000 UTC m=+6.464379794 container died a9737eb4c6a8f2df8e88719d28fc2d7997e0910b8043b193980c1f679cb0b7be (image=quay.io/ceph/ceph:v19, name=jovial_shannon, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid)
Oct  8 05:43:23 np0005475493 ceph-mon[73572]: Found migration_current of "None". Setting to last migration.
Oct  8 05:43:23 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:43:23 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.ixicfj/mirror_snapshot_schedule"}]: dispatch
Oct  8 05:43:23 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.ixicfj/trash_purge_schedule"}]: dispatch
Oct  8 05:43:23 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:43:23 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:43:23 np0005475493 systemd[1]: var-lib-containers-storage-overlay-c79ae716067f1d69df891c3da9889fe6871f99ff24fc4112052e47f11a064c52-merged.mount: Deactivated successfully.
Oct  8 05:43:23 np0005475493 podman[74394]: 2025-10-08 09:43:23.604844342 +0000 UTC m=+6.501979734 container remove a9737eb4c6a8f2df8e88719d28fc2d7997e0910b8043b193980c1f679cb0b7be (image=quay.io/ceph/ceph:v19, name=jovial_shannon, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  8 05:43:23 np0005475493 systemd[1]: libpod-conmon-a9737eb4c6a8f2df8e88719d28fc2d7997e0910b8043b193980c1f679cb0b7be.scope: Deactivated successfully.
Oct  8 05:43:23 np0005475493 podman[74559]: 2025-10-08 09:43:23.661079118 +0000 UTC m=+0.034740611 container create e31796ebb4b924e8cb95ade968d4ae798dfc1bf2c58f535ab73ec61827d5b874 (image=quay.io/ceph/ceph:v19, name=jovial_dhawan, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  8 05:43:23 np0005475493 systemd[1]: Started libpod-conmon-e31796ebb4b924e8cb95ade968d4ae798dfc1bf2c58f535ab73ec61827d5b874.scope.
Oct  8 05:43:23 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:43:23 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b79590e1f10f35c727f63955059a72e05936128d55f1de85dc5ebe9023c559a7/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  8 05:43:23 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b79590e1f10f35c727f63955059a72e05936128d55f1de85dc5ebe9023c559a7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 05:43:23 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b79590e1f10f35c727f63955059a72e05936128d55f1de85dc5ebe9023c559a7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 05:43:23 np0005475493 podman[74559]: 2025-10-08 09:43:23.735096332 +0000 UTC m=+0.108757885 container init e31796ebb4b924e8cb95ade968d4ae798dfc1bf2c58f535ab73ec61827d5b874 (image=quay.io/ceph/ceph:v19, name=jovial_dhawan, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Oct  8 05:43:23 np0005475493 podman[74559]: 2025-10-08 09:43:23.739946927 +0000 UTC m=+0.113608420 container start e31796ebb4b924e8cb95ade968d4ae798dfc1bf2c58f535ab73ec61827d5b874 (image=quay.io/ceph/ceph:v19, name=jovial_dhawan, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  8 05:43:23 np0005475493 podman[74559]: 2025-10-08 09:43:23.645619755 +0000 UTC m=+0.019281268 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  8 05:43:23 np0005475493 podman[74559]: 2025-10-08 09:43:23.744078728 +0000 UTC m=+0.117740271 container attach e31796ebb4b924e8cb95ade968d4ae798dfc1bf2c58f535ab73ec61827d5b874 (image=quay.io/ceph/ceph:v19, name=jovial_dhawan, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid)
Oct  8 05:43:24 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.14134 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch
Oct  8 05:43:24 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/orchestrator/orchestrator}] v 0)
Oct  8 05:43:24 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:43:24 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Oct  8 05:43:24 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Oct  8 05:43:24 np0005475493 systemd[1]: libpod-e31796ebb4b924e8cb95ade968d4ae798dfc1bf2c58f535ab73ec61827d5b874.scope: Deactivated successfully.
Oct  8 05:43:24 np0005475493 podman[74559]: 2025-10-08 09:43:24.102939689 +0000 UTC m=+0.476601192 container died e31796ebb4b924e8cb95ade968d4ae798dfc1bf2c58f535ab73ec61827d5b874 (image=quay.io/ceph/ceph:v19, name=jovial_dhawan, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  8 05:43:24 np0005475493 systemd[1]: var-lib-containers-storage-overlay-b79590e1f10f35c727f63955059a72e05936128d55f1de85dc5ebe9023c559a7-merged.mount: Deactivated successfully.
Oct  8 05:43:24 np0005475493 podman[74559]: 2025-10-08 09:43:24.136328735 +0000 UTC m=+0.509990228 container remove e31796ebb4b924e8cb95ade968d4ae798dfc1bf2c58f535ab73ec61827d5b874 (image=quay.io/ceph/ceph:v19, name=jovial_dhawan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  8 05:43:24 np0005475493 systemd[1]: libpod-conmon-e31796ebb4b924e8cb95ade968d4ae798dfc1bf2c58f535ab73ec61827d5b874.scope: Deactivated successfully.
Oct  8 05:43:24 np0005475493 podman[74614]: 2025-10-08 09:43:24.193834632 +0000 UTC m=+0.038478321 container create db86b883e3fc4cf584317c61f71c57a8cfa013e13a24a76236014539bb80485f (image=quay.io/ceph/ceph:v19, name=vigorous_heyrovsky, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct  8 05:43:24 np0005475493 systemd[1]: Started libpod-conmon-db86b883e3fc4cf584317c61f71c57a8cfa013e13a24a76236014539bb80485f.scope.
Oct  8 05:43:24 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:43:24 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45c9b60f63031d8ff53300db5cccb70617253d40b9aeaa4963e5af21e63ada0c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 05:43:24 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45c9b60f63031d8ff53300db5cccb70617253d40b9aeaa4963e5af21e63ada0c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 05:43:24 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45c9b60f63031d8ff53300db5cccb70617253d40b9aeaa4963e5af21e63ada0c/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  8 05:43:24 np0005475493 podman[74614]: 2025-10-08 09:43:24.260728298 +0000 UTC m=+0.105372027 container init db86b883e3fc4cf584317c61f71c57a8cfa013e13a24a76236014539bb80485f (image=quay.io/ceph/ceph:v19, name=vigorous_heyrovsky, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Oct  8 05:43:24 np0005475493 podman[74614]: 2025-10-08 09:43:24.270598723 +0000 UTC m=+0.115242422 container start db86b883e3fc4cf584317c61f71c57a8cfa013e13a24a76236014539bb80485f (image=quay.io/ceph/ceph:v19, name=vigorous_heyrovsky, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct  8 05:43:24 np0005475493 podman[74614]: 2025-10-08 09:43:24.175428334 +0000 UTC m=+0.020072063 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  8 05:43:24 np0005475493 podman[74614]: 2025-10-08 09:43:24.273539637 +0000 UTC m=+0.118183326 container attach db86b883e3fc4cf584317c61f71c57a8cfa013e13a24a76236014539bb80485f (image=quay.io/ceph/ceph:v19, name=vigorous_heyrovsky, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct  8 05:43:24 np0005475493 ceph-mgr[73869]: [cephadm INFO cherrypy.error] [08/Oct/2025:09:43:24] ENGINE Bus STARTING
Oct  8 05:43:24 np0005475493 ceph-mgr[73869]: log_channel(cephadm) log [INF] : [08/Oct/2025:09:43:24] ENGINE Bus STARTING
Oct  8 05:43:24 np0005475493 ceph-mgr[73869]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct  8 05:43:24 np0005475493 ceph-mgr[73869]: [cephadm INFO cherrypy.error] [08/Oct/2025:09:43:24] ENGINE Serving on http://192.168.122.100:8765
Oct  8 05:43:24 np0005475493 ceph-mgr[73869]: log_channel(cephadm) log [INF] : [08/Oct/2025:09:43:24] ENGINE Serving on http://192.168.122.100:8765
Oct  8 05:43:24 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.14136 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "ceph-admin", "target": ["mon-mgr", ""]}]: dispatch
Oct  8 05:43:24 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_user}] v 0)
Oct  8 05:43:24 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:43:24 np0005475493 ceph-mgr[73869]: [cephadm INFO root] Set ssh ssh_user
Oct  8 05:43:24 np0005475493 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Set ssh ssh_user
Oct  8 05:43:24 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_config}] v 0)
Oct  8 05:43:24 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:43:24 np0005475493 ceph-mgr[73869]: [cephadm INFO root] Set ssh ssh_config
Oct  8 05:43:24 np0005475493 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Set ssh ssh_config
Oct  8 05:43:24 np0005475493 ceph-mgr[73869]: [cephadm INFO root] ssh user set to ceph-admin. sudo will be used
Oct  8 05:43:24 np0005475493 ceph-mgr[73869]: log_channel(cephadm) log [INF] : ssh user set to ceph-admin. sudo will be used
Oct  8 05:43:24 np0005475493 vigorous_heyrovsky[74631]: ssh user set to ceph-admin. sudo will be used
Oct  8 05:43:24 np0005475493 systemd[1]: libpod-db86b883e3fc4cf584317c61f71c57a8cfa013e13a24a76236014539bb80485f.scope: Deactivated successfully.
Oct  8 05:43:24 np0005475493 podman[74614]: 2025-10-08 09:43:24.626485608 +0000 UTC m=+0.471129337 container died db86b883e3fc4cf584317c61f71c57a8cfa013e13a24a76236014539bb80485f (image=quay.io/ceph/ceph:v19, name=vigorous_heyrovsky, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct  8 05:43:24 np0005475493 systemd[1]: var-lib-containers-storage-overlay-45c9b60f63031d8ff53300db5cccb70617253d40b9aeaa4963e5af21e63ada0c-merged.mount: Deactivated successfully.
Oct  8 05:43:24 np0005475493 ceph-mgr[73869]: [cephadm INFO cherrypy.error] [08/Oct/2025:09:43:24] ENGINE Serving on https://192.168.122.100:7150
Oct  8 05:43:24 np0005475493 ceph-mgr[73869]: log_channel(cephadm) log [INF] : [08/Oct/2025:09:43:24] ENGINE Serving on https://192.168.122.100:7150
Oct  8 05:43:24 np0005475493 ceph-mgr[73869]: [cephadm INFO cherrypy.error] [08/Oct/2025:09:43:24] ENGINE Bus STARTED
Oct  8 05:43:24 np0005475493 ceph-mgr[73869]: log_channel(cephadm) log [INF] : [08/Oct/2025:09:43:24] ENGINE Bus STARTED
Oct  8 05:43:24 np0005475493 ceph-mgr[73869]: [cephadm INFO cherrypy.error] [08/Oct/2025:09:43:24] ENGINE Client ('192.168.122.100', 46604) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Oct  8 05:43:24 np0005475493 ceph-mgr[73869]: log_channel(cephadm) log [INF] : [08/Oct/2025:09:43:24] ENGINE Client ('192.168.122.100', 46604) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Oct  8 05:43:24 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Oct  8 05:43:24 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Oct  8 05:43:24 np0005475493 podman[74614]: 2025-10-08 09:43:24.673597763 +0000 UTC m=+0.518241482 container remove db86b883e3fc4cf584317c61f71c57a8cfa013e13a24a76236014539bb80485f (image=quay.io/ceph/ceph:v19, name=vigorous_heyrovsky, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Oct  8 05:43:24 np0005475493 systemd[1]: libpod-conmon-db86b883e3fc4cf584317c61f71c57a8cfa013e13a24a76236014539bb80485f.scope: Deactivated successfully.
Oct  8 05:43:24 np0005475493 podman[74692]: 2025-10-08 09:43:24.764162635 +0000 UTC m=+0.061802695 container create 39b3822a074ca99a8415127ebae35721aaa00bae342a460076d7baf04ff4a288 (image=quay.io/ceph/ceph:v19, name=musing_shockley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct  8 05:43:24 np0005475493 systemd[1]: Started libpod-conmon-39b3822a074ca99a8415127ebae35721aaa00bae342a460076d7baf04ff4a288.scope.
Oct  8 05:43:24 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:43:24 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4b6d830759cb0ccf205899dc1eb794fe516099495807c69f92beacfec569c58/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Oct  8 05:43:24 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4b6d830759cb0ccf205899dc1eb794fe516099495807c69f92beacfec569c58/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Oct  8 05:43:24 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4b6d830759cb0ccf205899dc1eb794fe516099495807c69f92beacfec569c58/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 05:43:24 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4b6d830759cb0ccf205899dc1eb794fe516099495807c69f92beacfec569c58/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  8 05:43:24 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4b6d830759cb0ccf205899dc1eb794fe516099495807c69f92beacfec569c58/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 05:43:24 np0005475493 podman[74692]: 2025-10-08 09:43:24.739513138 +0000 UTC m=+0.037153288 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  8 05:43:24 np0005475493 podman[74692]: 2025-10-08 09:43:24.840688849 +0000 UTC m=+0.138328919 container init 39b3822a074ca99a8415127ebae35721aaa00bae342a460076d7baf04ff4a288 (image=quay.io/ceph/ceph:v19, name=musing_shockley, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct  8 05:43:24 np0005475493 podman[74692]: 2025-10-08 09:43:24.848338863 +0000 UTC m=+0.145978923 container start 39b3822a074ca99a8415127ebae35721aaa00bae342a460076d7baf04ff4a288 (image=quay.io/ceph/ceph:v19, name=musing_shockley, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  8 05:43:24 np0005475493 podman[74692]: 2025-10-08 09:43:24.851651659 +0000 UTC m=+0.149291719 container attach 39b3822a074ca99a8415127ebae35721aaa00bae342a460076d7baf04ff4a288 (image=quay.io/ceph/ceph:v19, name=musing_shockley, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid)
Oct  8 05:43:25 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:43:25 np0005475493 ceph-mon[73572]: [08/Oct/2025:09:43:24] ENGINE Bus STARTING
Oct  8 05:43:25 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:43:25 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:43:25 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : mgrmap e8: compute-0.ixicfj(active, since 2s)
Oct  8 05:43:25 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.14138 -' entity='client.admin' cmd=[{"prefix": "cephadm set-priv-key", "target": ["mon-mgr", ""]}]: dispatch
Oct  8 05:43:25 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_key}] v 0)
Oct  8 05:43:25 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:43:25 np0005475493 ceph-mgr[73869]: [cephadm INFO root] Set ssh ssh_identity_key
Oct  8 05:43:25 np0005475493 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_key
Oct  8 05:43:25 np0005475493 ceph-mgr[73869]: [cephadm INFO root] Set ssh private key
Oct  8 05:43:25 np0005475493 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Set ssh private key
Oct  8 05:43:25 np0005475493 systemd[1]: libpod-39b3822a074ca99a8415127ebae35721aaa00bae342a460076d7baf04ff4a288.scope: Deactivated successfully.
Oct  8 05:43:25 np0005475493 podman[74692]: 2025-10-08 09:43:25.185099868 +0000 UTC m=+0.482739938 container died 39b3822a074ca99a8415127ebae35721aaa00bae342a460076d7baf04ff4a288 (image=quay.io/ceph/ceph:v19, name=musing_shockley, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct  8 05:43:25 np0005475493 systemd[1]: var-lib-containers-storage-overlay-e4b6d830759cb0ccf205899dc1eb794fe516099495807c69f92beacfec569c58-merged.mount: Deactivated successfully.
Oct  8 05:43:25 np0005475493 podman[74692]: 2025-10-08 09:43:25.228004548 +0000 UTC m=+0.525644638 container remove 39b3822a074ca99a8415127ebae35721aaa00bae342a460076d7baf04ff4a288 (image=quay.io/ceph/ceph:v19, name=musing_shockley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  8 05:43:25 np0005475493 systemd[1]: libpod-conmon-39b3822a074ca99a8415127ebae35721aaa00bae342a460076d7baf04ff4a288.scope: Deactivated successfully.
Oct  8 05:43:25 np0005475493 podman[74746]: 2025-10-08 09:43:25.306501514 +0000 UTC m=+0.052775436 container create 1d672449a8998ab58a655a649b23fbf7aec3c77880c096a77f45cbaeda712131 (image=quay.io/ceph/ceph:v19, name=dreamy_poitras, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  8 05:43:25 np0005475493 systemd[1]: Started libpod-conmon-1d672449a8998ab58a655a649b23fbf7aec3c77880c096a77f45cbaeda712131.scope.
Oct  8 05:43:25 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:43:25 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74b10c6081f803c6b56fe8cd5028b526dfdc69e9929049d209c0f7fd8d9f7d60/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Oct  8 05:43:25 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74b10c6081f803c6b56fe8cd5028b526dfdc69e9929049d209c0f7fd8d9f7d60/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Oct  8 05:43:25 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74b10c6081f803c6b56fe8cd5028b526dfdc69e9929049d209c0f7fd8d9f7d60/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 05:43:25 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74b10c6081f803c6b56fe8cd5028b526dfdc69e9929049d209c0f7fd8d9f7d60/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 05:43:25 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74b10c6081f803c6b56fe8cd5028b526dfdc69e9929049d209c0f7fd8d9f7d60/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  8 05:43:25 np0005475493 podman[74746]: 2025-10-08 09:43:25.287953492 +0000 UTC m=+0.034227424 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  8 05:43:25 np0005475493 podman[74746]: 2025-10-08 09:43:25.396013423 +0000 UTC m=+0.142287355 container init 1d672449a8998ab58a655a649b23fbf7aec3c77880c096a77f45cbaeda712131 (image=quay.io/ceph/ceph:v19, name=dreamy_poitras, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  8 05:43:25 np0005475493 podman[74746]: 2025-10-08 09:43:25.406620853 +0000 UTC m=+0.152894765 container start 1d672449a8998ab58a655a649b23fbf7aec3c77880c096a77f45cbaeda712131 (image=quay.io/ceph/ceph:v19, name=dreamy_poitras, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct  8 05:43:25 np0005475493 podman[74746]: 2025-10-08 09:43:25.411763507 +0000 UTC m=+0.158037419 container attach 1d672449a8998ab58a655a649b23fbf7aec3c77880c096a77f45cbaeda712131 (image=quay.io/ceph/ceph:v19, name=dreamy_poitras, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  8 05:43:25 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.14140 -' entity='client.admin' cmd=[{"prefix": "cephadm set-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Oct  8 05:43:25 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_pub}] v 0)
Oct  8 05:43:25 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:43:25 np0005475493 ceph-mgr[73869]: [cephadm INFO root] Set ssh ssh_identity_pub
Oct  8 05:43:25 np0005475493 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_pub
Oct  8 05:43:25 np0005475493 systemd[1]: libpod-1d672449a8998ab58a655a649b23fbf7aec3c77880c096a77f45cbaeda712131.scope: Deactivated successfully.
Oct  8 05:43:25 np0005475493 podman[74746]: 2025-10-08 09:43:25.769446299 +0000 UTC m=+0.515720231 container died 1d672449a8998ab58a655a649b23fbf7aec3c77880c096a77f45cbaeda712131 (image=quay.io/ceph/ceph:v19, name=dreamy_poitras, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  8 05:43:25 np0005475493 systemd[1]: var-lib-containers-storage-overlay-74b10c6081f803c6b56fe8cd5028b526dfdc69e9929049d209c0f7fd8d9f7d60-merged.mount: Deactivated successfully.
Oct  8 05:43:25 np0005475493 podman[74746]: 2025-10-08 09:43:25.813683142 +0000 UTC m=+0.559957094 container remove 1d672449a8998ab58a655a649b23fbf7aec3c77880c096a77f45cbaeda712131 (image=quay.io/ceph/ceph:v19, name=dreamy_poitras, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  8 05:43:25 np0005475493 systemd[1]: libpod-conmon-1d672449a8998ab58a655a649b23fbf7aec3c77880c096a77f45cbaeda712131.scope: Deactivated successfully.
Oct  8 05:43:25 np0005475493 podman[74798]: 2025-10-08 09:43:25.89413233 +0000 UTC m=+0.053832409 container create c8d7c87414edf21b6855c874fc33f356de8a06d32455f87c980ae9da39b69606 (image=quay.io/ceph/ceph:v19, name=gracious_ganguly, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  8 05:43:25 np0005475493 systemd[1]: Started libpod-conmon-c8d7c87414edf21b6855c874fc33f356de8a06d32455f87c980ae9da39b69606.scope.
Oct  8 05:43:25 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:43:25 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0867299247c15c0c59d22cc9c1b69173479677c1a80a9a7eca732a5af663a66/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 05:43:25 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0867299247c15c0c59d22cc9c1b69173479677c1a80a9a7eca732a5af663a66/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 05:43:25 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0867299247c15c0c59d22cc9c1b69173479677c1a80a9a7eca732a5af663a66/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  8 05:43:25 np0005475493 podman[74798]: 2025-10-08 09:43:25.877118897 +0000 UTC m=+0.036818986 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  8 05:43:25 np0005475493 podman[74798]: 2025-10-08 09:43:25.995382224 +0000 UTC m=+0.155082313 container init c8d7c87414edf21b6855c874fc33f356de8a06d32455f87c980ae9da39b69606 (image=quay.io/ceph/ceph:v19, name=gracious_ganguly, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Oct  8 05:43:26 np0005475493 podman[74798]: 2025-10-08 09:43:26.004600499 +0000 UTC m=+0.164300598 container start c8d7c87414edf21b6855c874fc33f356de8a06d32455f87c980ae9da39b69606 (image=quay.io/ceph/ceph:v19, name=gracious_ganguly, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  8 05:43:26 np0005475493 podman[74798]: 2025-10-08 09:43:26.008149762 +0000 UTC m=+0.167849851 container attach c8d7c87414edf21b6855c874fc33f356de8a06d32455f87c980ae9da39b69606 (image=quay.io/ceph/ceph:v19, name=gracious_ganguly, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Oct  8 05:43:26 np0005475493 ceph-mon[73572]: [08/Oct/2025:09:43:24] ENGINE Serving on http://192.168.122.100:8765
Oct  8 05:43:26 np0005475493 ceph-mon[73572]: Set ssh ssh_user
Oct  8 05:43:26 np0005475493 ceph-mon[73572]: Set ssh ssh_config
Oct  8 05:43:26 np0005475493 ceph-mon[73572]: ssh user set to ceph-admin. sudo will be used
Oct  8 05:43:26 np0005475493 ceph-mon[73572]: [08/Oct/2025:09:43:24] ENGINE Serving on https://192.168.122.100:7150
Oct  8 05:43:26 np0005475493 ceph-mon[73572]: [08/Oct/2025:09:43:24] ENGINE Bus STARTED
Oct  8 05:43:26 np0005475493 ceph-mon[73572]: [08/Oct/2025:09:43:24] ENGINE Client ('192.168.122.100', 46604) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Oct  8 05:43:26 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:43:26 np0005475493 ceph-mon[73572]: Set ssh ssh_identity_key
Oct  8 05:43:26 np0005475493 ceph-mon[73572]: Set ssh private key
Oct  8 05:43:26 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:43:26 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.14142 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Oct  8 05:43:26 np0005475493 gracious_ganguly[74814]: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDMUetrKYz2yzUqXQdz0GMEc7nZWQFiWernMvGrA8oCSXKUWp6oF4zAIrVF9kp7fG8GVxs6O5yNHgIYsMs9v39LHMe/VQPYXxcVu6/8aDnAS2wzSlH1kfOrpdntAo+JesC34iTzRriGvjARpVqmkBrz6RB9QZX8SnrBdZst0W4m1X8OD+O6DYEBMJxWtgiIPmMnOubMs+k1f8ONJcYKxq3HscWukNjnCKBsiyvX3kwhdV590HAFLDaMvqxoan4CH48GeLqNYj86NBeSsJuWftk0wYOtBlTJMmOE4EDYzliyGb+KuHgFYT5qijo1SvM4ayDYzPY3kP0UsGfsLje0plcbILyKEBHHUs1Xf6XfnOnvpCpN6uEr24OyPbe53iYjL/C0ZAjRuU+unEK4t4SmRsyU4cZqe6i+RdjvwcTF8fasBcSM02BpcHbJfWZCp/smBkJdsq3XnVWRBu4mJUByoSrPl3DVwH3GUayVW16yOYMiqo8gro2cCnDPwCmwjrmEzqM= zuul@controller
Oct  8 05:43:26 np0005475493 systemd[1]: libpod-c8d7c87414edf21b6855c874fc33f356de8a06d32455f87c980ae9da39b69606.scope: Deactivated successfully.
Oct  8 05:43:26 np0005475493 podman[74798]: 2025-10-08 09:43:26.4113937 +0000 UTC m=+0.571093769 container died c8d7c87414edf21b6855c874fc33f356de8a06d32455f87c980ae9da39b69606 (image=quay.io/ceph/ceph:v19, name=gracious_ganguly, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  8 05:43:26 np0005475493 systemd[1]: var-lib-containers-storage-overlay-a0867299247c15c0c59d22cc9c1b69173479677c1a80a9a7eca732a5af663a66-merged.mount: Deactivated successfully.
Oct  8 05:43:26 np0005475493 podman[74798]: 2025-10-08 09:43:26.449649511 +0000 UTC m=+0.609349590 container remove c8d7c87414edf21b6855c874fc33f356de8a06d32455f87c980ae9da39b69606 (image=quay.io/ceph/ceph:v19, name=gracious_ganguly, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  8 05:43:26 np0005475493 systemd[1]: libpod-conmon-c8d7c87414edf21b6855c874fc33f356de8a06d32455f87c980ae9da39b69606.scope: Deactivated successfully.
Oct  8 05:43:26 np0005475493 podman[74851]: 2025-10-08 09:43:26.509964027 +0000 UTC m=+0.040967419 container create 35c5c5d406dfd163b1952dc0a052cce4ab75825e6b02629fde695ef31197fae3 (image=quay.io/ceph/ceph:v19, name=suspicious_wilbur, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  8 05:43:26 np0005475493 ceph-mgr[73869]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct  8 05:43:26 np0005475493 systemd[1]: Started libpod-conmon-35c5c5d406dfd163b1952dc0a052cce4ab75825e6b02629fde695ef31197fae3.scope.
Oct  8 05:43:26 np0005475493 podman[74851]: 2025-10-08 09:43:26.489281647 +0000 UTC m=+0.020285069 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  8 05:43:26 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:43:26 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fef55c080c77112ea1cb56957cce303d6477f1d9cc5387cf26dc4036e7c62b5b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 05:43:26 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fef55c080c77112ea1cb56957cce303d6477f1d9cc5387cf26dc4036e7c62b5b/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  8 05:43:26 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fef55c080c77112ea1cb56957cce303d6477f1d9cc5387cf26dc4036e7c62b5b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 05:43:26 np0005475493 podman[74851]: 2025-10-08 09:43:26.607565055 +0000 UTC m=+0.138568447 container init 35c5c5d406dfd163b1952dc0a052cce4ab75825e6b02629fde695ef31197fae3 (image=quay.io/ceph/ceph:v19, name=suspicious_wilbur, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  8 05:43:26 np0005475493 podman[74851]: 2025-10-08 09:43:26.624118423 +0000 UTC m=+0.155121825 container start 35c5c5d406dfd163b1952dc0a052cce4ab75825e6b02629fde695ef31197fae3 (image=quay.io/ceph/ceph:v19, name=suspicious_wilbur, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  8 05:43:26 np0005475493 podman[74851]: 2025-10-08 09:43:26.627539942 +0000 UTC m=+0.158543374 container attach 35c5c5d406dfd163b1952dc0a052cce4ab75825e6b02629fde695ef31197fae3 (image=quay.io/ceph/ceph:v19, name=suspicious_wilbur, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct  8 05:43:27 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.14144 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "compute-0", "addr": "192.168.122.100", "target": ["mon-mgr", ""]}]: dispatch
Oct  8 05:43:27 np0005475493 ceph-mon[73572]: Set ssh ssh_identity_pub
Oct  8 05:43:27 np0005475493 systemd[1]: Created slice User Slice of UID 42477.
Oct  8 05:43:27 np0005475493 systemd[1]: Starting User Runtime Directory /run/user/42477...
Oct  8 05:43:27 np0005475493 systemd-logind[798]: New session 22 of user ceph-admin.
Oct  8 05:43:27 np0005475493 systemd[1]: Finished User Runtime Directory /run/user/42477.
Oct  8 05:43:27 np0005475493 systemd[1]: Starting User Manager for UID 42477...
Oct  8 05:43:27 np0005475493 systemd[74898]: Queued start job for default target Main User Target.
Oct  8 05:43:27 np0005475493 systemd[74898]: Created slice User Application Slice.
Oct  8 05:43:27 np0005475493 systemd[74898]: Started Mark boot as successful after the user session has run 2 minutes.
Oct  8 05:43:27 np0005475493 systemd[74898]: Started Daily Cleanup of User's Temporary Directories.
Oct  8 05:43:27 np0005475493 systemd[74898]: Reached target Paths.
Oct  8 05:43:27 np0005475493 systemd[74898]: Reached target Timers.
Oct  8 05:43:27 np0005475493 systemd[74898]: Starting D-Bus User Message Bus Socket...
Oct  8 05:43:27 np0005475493 systemd[74898]: Starting Create User's Volatile Files and Directories...
Oct  8 05:43:27 np0005475493 systemd[74898]: Finished Create User's Volatile Files and Directories.
Oct  8 05:43:27 np0005475493 systemd[74898]: Listening on D-Bus User Message Bus Socket.
Oct  8 05:43:27 np0005475493 systemd[74898]: Reached target Sockets.
Oct  8 05:43:27 np0005475493 systemd[74898]: Reached target Basic System.
Oct  8 05:43:27 np0005475493 systemd[74898]: Reached target Main User Target.
Oct  8 05:43:27 np0005475493 systemd[74898]: Startup finished in 138ms.
Oct  8 05:43:27 np0005475493 systemd-logind[798]: New session 24 of user ceph-admin.
Oct  8 05:43:27 np0005475493 systemd[1]: Started User Manager for UID 42477.
Oct  8 05:43:27 np0005475493 systemd[1]: Started Session 22 of User ceph-admin.
Oct  8 05:43:27 np0005475493 systemd[1]: Started Session 24 of User ceph-admin.
Oct  8 05:43:27 np0005475493 systemd-logind[798]: New session 25 of user ceph-admin.
Oct  8 05:43:27 np0005475493 systemd[1]: Started Session 25 of User ceph-admin.
Oct  8 05:43:28 np0005475493 systemd-logind[798]: New session 26 of user ceph-admin.
Oct  8 05:43:28 np0005475493 systemd[1]: Started Session 26 of User ceph-admin.
Oct  8 05:43:28 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020053155 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  8 05:43:28 np0005475493 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Deploying cephadm binary to compute-0
Oct  8 05:43:28 np0005475493 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Deploying cephadm binary to compute-0
Oct  8 05:43:28 np0005475493 ceph-mgr[73869]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct  8 05:43:28 np0005475493 systemd-logind[798]: New session 27 of user ceph-admin.
Oct  8 05:43:28 np0005475493 systemd[1]: Started Session 27 of User ceph-admin.
Oct  8 05:43:28 np0005475493 systemd-logind[798]: New session 28 of user ceph-admin.
Oct  8 05:43:28 np0005475493 systemd[1]: Started Session 28 of User ceph-admin.
Oct  8 05:43:29 np0005475493 ceph-mon[73572]: Deploying cephadm binary to compute-0
Oct  8 05:43:29 np0005475493 systemd-logind[798]: New session 29 of user ceph-admin.
Oct  8 05:43:29 np0005475493 systemd[1]: Started Session 29 of User ceph-admin.
Oct  8 05:43:29 np0005475493 systemd-logind[798]: New session 30 of user ceph-admin.
Oct  8 05:43:29 np0005475493 systemd[1]: Started Session 30 of User ceph-admin.
Oct  8 05:43:29 np0005475493 systemd-logind[798]: New session 31 of user ceph-admin.
Oct  8 05:43:29 np0005475493 systemd[1]: Started Session 31 of User ceph-admin.
Oct  8 05:43:30 np0005475493 systemd-logind[798]: New session 32 of user ceph-admin.
Oct  8 05:43:30 np0005475493 systemd[1]: Started Session 32 of User ceph-admin.
Oct  8 05:43:30 np0005475493 ceph-mgr[73869]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct  8 05:43:31 np0005475493 systemd-logind[798]: New session 33 of user ceph-admin.
Oct  8 05:43:31 np0005475493 systemd[1]: Started Session 33 of User ceph-admin.
Oct  8 05:43:31 np0005475493 systemd-logind[798]: New session 34 of user ceph-admin.
Oct  8 05:43:31 np0005475493 systemd[1]: Started Session 34 of User ceph-admin.
Oct  8 05:43:32 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Oct  8 05:43:32 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:43:32 np0005475493 ceph-mgr[73869]: [cephadm INFO root] Added host compute-0
Oct  8 05:43:32 np0005475493 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Added host compute-0
Oct  8 05:43:32 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Oct  8 05:43:32 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Oct  8 05:43:32 np0005475493 suspicious_wilbur[74868]: Added host 'compute-0' with addr '192.168.122.100'
Oct  8 05:43:32 np0005475493 systemd[1]: libpod-35c5c5d406dfd163b1952dc0a052cce4ab75825e6b02629fde695ef31197fae3.scope: Deactivated successfully.
Oct  8 05:43:32 np0005475493 podman[74851]: 2025-10-08 09:43:32.203429459 +0000 UTC m=+5.734432851 container died 35c5c5d406dfd163b1952dc0a052cce4ab75825e6b02629fde695ef31197fae3 (image=quay.io/ceph/ceph:v19, name=suspicious_wilbur, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Oct  8 05:43:32 np0005475493 systemd[1]: var-lib-containers-storage-overlay-fef55c080c77112ea1cb56957cce303d6477f1d9cc5387cf26dc4036e7c62b5b-merged.mount: Deactivated successfully.
Oct  8 05:43:32 np0005475493 podman[74851]: 2025-10-08 09:43:32.253197538 +0000 UTC m=+5.784200940 container remove 35c5c5d406dfd163b1952dc0a052cce4ab75825e6b02629fde695ef31197fae3 (image=quay.io/ceph/ceph:v19, name=suspicious_wilbur, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  8 05:43:32 np0005475493 systemd[1]: libpod-conmon-35c5c5d406dfd163b1952dc0a052cce4ab75825e6b02629fde695ef31197fae3.scope: Deactivated successfully.
Oct  8 05:43:32 np0005475493 podman[75288]: 2025-10-08 09:43:32.327334476 +0000 UTC m=+0.043966365 container create a7025fd0221c295425e09a4c3f8f9ac3d3150c8d7421a1781fe8386baf2abbdb (image=quay.io/ceph/ceph:v19, name=gifted_keldysh, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  8 05:43:32 np0005475493 systemd[1]: Started libpod-conmon-a7025fd0221c295425e09a4c3f8f9ac3d3150c8d7421a1781fe8386baf2abbdb.scope.
Oct  8 05:43:32 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:43:32 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99606dc3eec62013e8eb6f6b86e5f3d1a82885ac6c795aaf02c84982184d7e8d/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  8 05:43:32 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99606dc3eec62013e8eb6f6b86e5f3d1a82885ac6c795aaf02c84982184d7e8d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 05:43:32 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99606dc3eec62013e8eb6f6b86e5f3d1a82885ac6c795aaf02c84982184d7e8d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 05:43:32 np0005475493 podman[75288]: 2025-10-08 09:43:32.399599694 +0000 UTC m=+0.116231613 container init a7025fd0221c295425e09a4c3f8f9ac3d3150c8d7421a1781fe8386baf2abbdb (image=quay.io/ceph/ceph:v19, name=gifted_keldysh, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct  8 05:43:32 np0005475493 podman[75288]: 2025-10-08 09:43:32.308087091 +0000 UTC m=+0.024719040 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  8 05:43:32 np0005475493 podman[75288]: 2025-10-08 09:43:32.40793836 +0000 UTC m=+0.124570249 container start a7025fd0221c295425e09a4c3f8f9ac3d3150c8d7421a1781fe8386baf2abbdb (image=quay.io/ceph/ceph:v19, name=gifted_keldysh, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  8 05:43:32 np0005475493 podman[75288]: 2025-10-08 09:43:32.411297177 +0000 UTC m=+0.127929086 container attach a7025fd0221c295425e09a4c3f8f9ac3d3150c8d7421a1781fe8386baf2abbdb (image=quay.io/ceph/ceph:v19, name=gifted_keldysh, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  8 05:43:32 np0005475493 ceph-mgr[73869]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct  8 05:43:32 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.14146 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "target": ["mon-mgr", ""]}]: dispatch
Oct  8 05:43:32 np0005475493 ceph-mgr[73869]: [cephadm INFO root] Saving service mon spec with placement count:5
Oct  8 05:43:32 np0005475493 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Saving service mon spec with placement count:5
Oct  8 05:43:32 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Oct  8 05:43:32 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:43:32 np0005475493 gifted_keldysh[75330]: Scheduled mon update...
Oct  8 05:43:32 np0005475493 systemd[1]: libpod-a7025fd0221c295425e09a4c3f8f9ac3d3150c8d7421a1781fe8386baf2abbdb.scope: Deactivated successfully.
Oct  8 05:43:32 np0005475493 podman[75288]: 2025-10-08 09:43:32.850978328 +0000 UTC m=+0.567610227 container died a7025fd0221c295425e09a4c3f8f9ac3d3150c8d7421a1781fe8386baf2abbdb (image=quay.io/ceph/ceph:v19, name=gifted_keldysh, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct  8 05:43:32 np0005475493 systemd[1]: var-lib-containers-storage-overlay-99606dc3eec62013e8eb6f6b86e5f3d1a82885ac6c795aaf02c84982184d7e8d-merged.mount: Deactivated successfully.
Oct  8 05:43:32 np0005475493 podman[75288]: 2025-10-08 09:43:32.892451743 +0000 UTC m=+0.609083632 container remove a7025fd0221c295425e09a4c3f8f9ac3d3150c8d7421a1781fe8386baf2abbdb (image=quay.io/ceph/ceph:v19, name=gifted_keldysh, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  8 05:43:32 np0005475493 systemd[1]: libpod-conmon-a7025fd0221c295425e09a4c3f8f9ac3d3150c8d7421a1781fe8386baf2abbdb.scope: Deactivated successfully.
Oct  8 05:43:32 np0005475493 podman[75392]: 2025-10-08 09:43:32.976492477 +0000 UTC m=+0.057849579 container create d9c41c7da8290ff74914df1169a4d11136ab1f1d5e39883fb5bd0a0cb09f0798 (image=quay.io/ceph/ceph:v19, name=confident_morse, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  8 05:43:33 np0005475493 systemd[1]: Started libpod-conmon-d9c41c7da8290ff74914df1169a4d11136ab1f1d5e39883fb5bd0a0cb09f0798.scope.
Oct  8 05:43:33 np0005475493 podman[75392]: 2025-10-08 09:43:32.948818553 +0000 UTC m=+0.030175745 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  8 05:43:33 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:43:33 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0dd8e8c4baf10b690e1c1396698681a0665d0bd8f1173c6d4bbd9cb097083fea/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 05:43:33 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0dd8e8c4baf10b690e1c1396698681a0665d0bd8f1173c6d4bbd9cb097083fea/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  8 05:43:33 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0dd8e8c4baf10b690e1c1396698681a0665d0bd8f1173c6d4bbd9cb097083fea/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 05:43:33 np0005475493 podman[75392]: 2025-10-08 09:43:33.087326266 +0000 UTC m=+0.168683358 container init d9c41c7da8290ff74914df1169a4d11136ab1f1d5e39883fb5bd0a0cb09f0798 (image=quay.io/ceph/ceph:v19, name=confident_morse, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  8 05:43:33 np0005475493 podman[75392]: 2025-10-08 09:43:33.101879631 +0000 UTC m=+0.183236753 container start d9c41c7da8290ff74914df1169a4d11136ab1f1d5e39883fb5bd0a0cb09f0798 (image=quay.io/ceph/ceph:v19, name=confident_morse, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Oct  8 05:43:33 np0005475493 podman[75392]: 2025-10-08 09:43:33.10561242 +0000 UTC m=+0.186969522 container attach d9c41c7da8290ff74914df1169a4d11136ab1f1d5e39883fb5bd0a0cb09f0798 (image=quay.io/ceph/ceph:v19, name=confident_morse, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  8 05:43:33 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:43:33 np0005475493 ceph-mon[73572]: Added host compute-0
Oct  8 05:43:33 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:43:33 np0005475493 podman[75365]: 2025-10-08 09:43:33.234642241 +0000 UTC m=+0.657486448 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  8 05:43:33 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054712 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  8 05:43:33 np0005475493 podman[75446]: 2025-10-08 09:43:33.359043974 +0000 UTC m=+0.047441446 container create d39875fd4bd1bb67b5276a9a3125460f2d829854047b66618d1d93fc78f5faf7 (image=quay.io/ceph/ceph:v19, name=sharp_euclid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  8 05:43:33 np0005475493 systemd[1]: Started libpod-conmon-d39875fd4bd1bb67b5276a9a3125460f2d829854047b66618d1d93fc78f5faf7.scope.
Oct  8 05:43:33 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:43:33 np0005475493 podman[75446]: 2025-10-08 09:43:33.418458541 +0000 UTC m=+0.106856023 container init d39875fd4bd1bb67b5276a9a3125460f2d829854047b66618d1d93fc78f5faf7 (image=quay.io/ceph/ceph:v19, name=sharp_euclid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  8 05:43:33 np0005475493 podman[75446]: 2025-10-08 09:43:33.42532295 +0000 UTC m=+0.113720412 container start d39875fd4bd1bb67b5276a9a3125460f2d829854047b66618d1d93fc78f5faf7 (image=quay.io/ceph/ceph:v19, name=sharp_euclid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Oct  8 05:43:33 np0005475493 podman[75446]: 2025-10-08 09:43:33.427965444 +0000 UTC m=+0.116362926 container attach d39875fd4bd1bb67b5276a9a3125460f2d829854047b66618d1d93fc78f5faf7 (image=quay.io/ceph/ceph:v19, name=sharp_euclid, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  8 05:43:33 np0005475493 podman[75446]: 2025-10-08 09:43:33.333750096 +0000 UTC m=+0.022147638 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  8 05:43:33 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.14148 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "target": ["mon-mgr", ""]}]: dispatch
Oct  8 05:43:33 np0005475493 ceph-mgr[73869]: [cephadm INFO root] Saving service mgr spec with placement count:2
Oct  8 05:43:33 np0005475493 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement count:2
Oct  8 05:43:33 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Oct  8 05:43:33 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:43:33 np0005475493 confident_morse[75408]: Scheduled mgr update...
Oct  8 05:43:33 np0005475493 podman[75392]: 2025-10-08 09:43:33.512839425 +0000 UTC m=+0.594196527 container died d9c41c7da8290ff74914df1169a4d11136ab1f1d5e39883fb5bd0a0cb09f0798 (image=quay.io/ceph/ceph:v19, name=confident_morse, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Oct  8 05:43:33 np0005475493 systemd[1]: libpod-d9c41c7da8290ff74914df1169a4d11136ab1f1d5e39883fb5bd0a0cb09f0798.scope: Deactivated successfully.
Oct  8 05:43:33 np0005475493 sharp_euclid[75463]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)
Oct  8 05:43:33 np0005475493 systemd[1]: libpod-d39875fd4bd1bb67b5276a9a3125460f2d829854047b66618d1d93fc78f5faf7.scope: Deactivated successfully.
Oct  8 05:43:33 np0005475493 podman[75446]: 2025-10-08 09:43:33.527819924 +0000 UTC m=+0.216217416 container died d39875fd4bd1bb67b5276a9a3125460f2d829854047b66618d1d93fc78f5faf7 (image=quay.io/ceph/ceph:v19, name=sharp_euclid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct  8 05:43:33 np0005475493 systemd[1]: var-lib-containers-storage-overlay-0dd8e8c4baf10b690e1c1396698681a0665d0bd8f1173c6d4bbd9cb097083fea-merged.mount: Deactivated successfully.
Oct  8 05:43:33 np0005475493 podman[75392]: 2025-10-08 09:43:33.555460627 +0000 UTC m=+0.636817729 container remove d9c41c7da8290ff74914df1169a4d11136ab1f1d5e39883fb5bd0a0cb09f0798 (image=quay.io/ceph/ceph:v19, name=confident_morse, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  8 05:43:33 np0005475493 systemd[1]: var-lib-containers-storage-overlay-5d8e68a1821da843b707249232c6675479a71da5d3ffb2a8a6b5ef4bb2229bda-merged.mount: Deactivated successfully.
Oct  8 05:43:33 np0005475493 systemd[1]: libpod-conmon-d9c41c7da8290ff74914df1169a4d11136ab1f1d5e39883fb5bd0a0cb09f0798.scope: Deactivated successfully.
Oct  8 05:43:33 np0005475493 podman[75446]: 2025-10-08 09:43:33.584995129 +0000 UTC m=+0.273392611 container remove d39875fd4bd1bb67b5276a9a3125460f2d829854047b66618d1d93fc78f5faf7 (image=quay.io/ceph/ceph:v19, name=sharp_euclid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Oct  8 05:43:33 np0005475493 systemd[1]: libpod-conmon-d39875fd4bd1bb67b5276a9a3125460f2d829854047b66618d1d93fc78f5faf7.scope: Deactivated successfully.
Oct  8 05:43:33 np0005475493 podman[75489]: 2025-10-08 09:43:33.608864892 +0000 UTC m=+0.035401242 container create 041d67c20ac76725947436d95b811b26184c242c7b80eb4514349e12dfc1d066 (image=quay.io/ceph/ceph:v19, name=funny_ganguly, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  8 05:43:33 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=container_image}] v 0)
Oct  8 05:43:33 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:43:33 np0005475493 systemd[1]: Started libpod-conmon-041d67c20ac76725947436d95b811b26184c242c7b80eb4514349e12dfc1d066.scope.
Oct  8 05:43:33 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:43:33 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb98cd2ebf64193ff239b1b310d8bf68197256ab91b27ee7743e3eb68a49a13f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 05:43:33 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb98cd2ebf64193ff239b1b310d8bf68197256ab91b27ee7743e3eb68a49a13f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 05:43:33 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb98cd2ebf64193ff239b1b310d8bf68197256ab91b27ee7743e3eb68a49a13f/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  8 05:43:33 np0005475493 podman[75489]: 2025-10-08 09:43:33.593604374 +0000 UTC m=+0.020140754 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  8 05:43:33 np0005475493 podman[75489]: 2025-10-08 09:43:33.698227335 +0000 UTC m=+0.124763705 container init 041d67c20ac76725947436d95b811b26184c242c7b80eb4514349e12dfc1d066 (image=quay.io/ceph/ceph:v19, name=funny_ganguly, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Oct  8 05:43:33 np0005475493 podman[75489]: 2025-10-08 09:43:33.712813072 +0000 UTC m=+0.139349422 container start 041d67c20ac76725947436d95b811b26184c242c7b80eb4514349e12dfc1d066 (image=quay.io/ceph/ceph:v19, name=funny_ganguly, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Oct  8 05:43:33 np0005475493 podman[75489]: 2025-10-08 09:43:33.716192159 +0000 UTC m=+0.142728529 container attach 041d67c20ac76725947436d95b811b26184c242c7b80eb4514349e12dfc1d066 (image=quay.io/ceph/ceph:v19, name=funny_ganguly, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  8 05:43:34 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.14150 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "crash", "target": ["mon-mgr", ""]}]: dispatch
Oct  8 05:43:34 np0005475493 ceph-mgr[73869]: [cephadm INFO root] Saving service crash spec with placement *
Oct  8 05:43:34 np0005475493 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Saving service crash spec with placement *
Oct  8 05:43:34 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Oct  8 05:43:34 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:43:34 np0005475493 funny_ganguly[75506]: Scheduled crash update...
Oct  8 05:43:34 np0005475493 systemd[1]: libpod-041d67c20ac76725947436d95b811b26184c242c7b80eb4514349e12dfc1d066.scope: Deactivated successfully.
Oct  8 05:43:34 np0005475493 podman[75489]: 2025-10-08 09:43:34.062485759 +0000 UTC m=+0.489022109 container died 041d67c20ac76725947436d95b811b26184c242c7b80eb4514349e12dfc1d066 (image=quay.io/ceph/ceph:v19, name=funny_ganguly, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2)
Oct  8 05:43:34 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  8 05:43:34 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:43:34 np0005475493 systemd[1]: var-lib-containers-storage-overlay-fb98cd2ebf64193ff239b1b310d8bf68197256ab91b27ee7743e3eb68a49a13f-merged.mount: Deactivated successfully.
Oct  8 05:43:34 np0005475493 podman[75489]: 2025-10-08 09:43:34.101266587 +0000 UTC m=+0.527802937 container remove 041d67c20ac76725947436d95b811b26184c242c7b80eb4514349e12dfc1d066 (image=quay.io/ceph/ceph:v19, name=funny_ganguly, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  8 05:43:34 np0005475493 systemd[1]: libpod-conmon-041d67c20ac76725947436d95b811b26184c242c7b80eb4514349e12dfc1d066.scope: Deactivated successfully.
Oct  8 05:43:34 np0005475493 podman[75637]: 2025-10-08 09:43:34.163297048 +0000 UTC m=+0.045609298 container create 3d8fb84c356bc9b02f73c3d40e762c6eea3150a403c309a8780157383acb1e84 (image=quay.io/ceph/ceph:v19, name=bold_ramanujan, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct  8 05:43:34 np0005475493 ceph-mon[73572]: Saving service mon spec with placement count:5
Oct  8 05:43:34 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:43:34 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:43:34 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:43:34 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:43:34 np0005475493 systemd[1]: Started libpod-conmon-3d8fb84c356bc9b02f73c3d40e762c6eea3150a403c309a8780157383acb1e84.scope.
Oct  8 05:43:34 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:43:34 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb57213f3de74c89187c7e8ac1cfd3331e64f4a3b4d7a520a70056aa8f13c7a6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 05:43:34 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb57213f3de74c89187c7e8ac1cfd3331e64f4a3b4d7a520a70056aa8f13c7a6/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  8 05:43:34 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb57213f3de74c89187c7e8ac1cfd3331e64f4a3b4d7a520a70056aa8f13c7a6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 05:43:34 np0005475493 podman[75637]: 2025-10-08 09:43:34.228974485 +0000 UTC m=+0.111286745 container init 3d8fb84c356bc9b02f73c3d40e762c6eea3150a403c309a8780157383acb1e84 (image=quay.io/ceph/ceph:v19, name=bold_ramanujan, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  8 05:43:34 np0005475493 podman[75637]: 2025-10-08 09:43:34.233364116 +0000 UTC m=+0.115676366 container start 3d8fb84c356bc9b02f73c3d40e762c6eea3150a403c309a8780157383acb1e84 (image=quay.io/ceph/ceph:v19, name=bold_ramanujan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct  8 05:43:34 np0005475493 podman[75637]: 2025-10-08 09:43:34.136852733 +0000 UTC m=+0.019165013 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  8 05:43:34 np0005475493 podman[75637]: 2025-10-08 09:43:34.235963599 +0000 UTC m=+0.118275869 container attach 3d8fb84c356bc9b02f73c3d40e762c6eea3150a403c309a8780157383acb1e84 (image=quay.io/ceph/ceph:v19, name=bold_ramanujan, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Oct  8 05:43:34 np0005475493 ceph-mgr[73869]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct  8 05:43:34 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/container_init}] v 0)
Oct  8 05:43:34 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2569311365' entity='client.admin' 
Oct  8 05:43:34 np0005475493 systemd[1]: libpod-3d8fb84c356bc9b02f73c3d40e762c6eea3150a403c309a8780157383acb1e84.scope: Deactivated successfully.
Oct  8 05:43:34 np0005475493 podman[75637]: 2025-10-08 09:43:34.583223208 +0000 UTC m=+0.465535518 container died 3d8fb84c356bc9b02f73c3d40e762c6eea3150a403c309a8780157383acb1e84 (image=quay.io/ceph/ceph:v19, name=bold_ramanujan, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  8 05:43:34 np0005475493 systemd[1]: var-lib-containers-storage-overlay-eb57213f3de74c89187c7e8ac1cfd3331e64f4a3b4d7a520a70056aa8f13c7a6-merged.mount: Deactivated successfully.
Oct  8 05:43:34 np0005475493 podman[75637]: 2025-10-08 09:43:34.644974971 +0000 UTC m=+0.527287261 container remove 3d8fb84c356bc9b02f73c3d40e762c6eea3150a403c309a8780157383acb1e84 (image=quay.io/ceph/ceph:v19, name=bold_ramanujan, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  8 05:43:34 np0005475493 systemd[1]: libpod-conmon-3d8fb84c356bc9b02f73c3d40e762c6eea3150a403c309a8780157383acb1e84.scope: Deactivated successfully.
Oct  8 05:43:34 np0005475493 podman[75777]: 2025-10-08 09:43:34.720767721 +0000 UTC m=+0.099799979 container exec 01c666addd8584f02eb7d29040d97e45d8cb7f834fd134029c1f7cca550aeb9d (image=quay.io/ceph/ceph:v19, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Oct  8 05:43:34 np0005475493 podman[75800]: 2025-10-08 09:43:34.734403576 +0000 UTC m=+0.060197264 container create e8e3fc0dceee083e5cafa18e1c67337eacc59ee698584fd2911b53e2e26e0b9d (image=quay.io/ceph/ceph:v19, name=quirky_proskuriakova, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  8 05:43:34 np0005475493 systemd[1]: Started libpod-conmon-e8e3fc0dceee083e5cafa18e1c67337eacc59ee698584fd2911b53e2e26e0b9d.scope.
Oct  8 05:43:34 np0005475493 podman[75800]: 2025-10-08 09:43:34.708291152 +0000 UTC m=+0.034084860 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  8 05:43:34 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:43:34 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f7fbf2a34d80000fb3c491a25e9767b2ca5f667e5e701c94e07dc3620bd76dc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 05:43:34 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f7fbf2a34d80000fb3c491a25e9767b2ca5f667e5e701c94e07dc3620bd76dc/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  8 05:43:34 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f7fbf2a34d80000fb3c491a25e9767b2ca5f667e5e701c94e07dc3620bd76dc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 05:43:34 np0005475493 podman[75800]: 2025-10-08 09:43:34.832653564 +0000 UTC m=+0.158447282 container init e8e3fc0dceee083e5cafa18e1c67337eacc59ee698584fd2911b53e2e26e0b9d (image=quay.io/ceph/ceph:v19, name=quirky_proskuriakova, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  8 05:43:34 np0005475493 podman[75800]: 2025-10-08 09:43:34.84287266 +0000 UTC m=+0.168666348 container start e8e3fc0dceee083e5cafa18e1c67337eacc59ee698584fd2911b53e2e26e0b9d (image=quay.io/ceph/ceph:v19, name=quirky_proskuriakova, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Oct  8 05:43:34 np0005475493 podman[75800]: 2025-10-08 09:43:34.846557588 +0000 UTC m=+0.172351276 container attach e8e3fc0dceee083e5cafa18e1c67337eacc59ee698584fd2911b53e2e26e0b9d (image=quay.io/ceph/ceph:v19, name=quirky_proskuriakova, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid)
Oct  8 05:43:34 np0005475493 podman[75777]: 2025-10-08 09:43:34.84943482 +0000 UTC m=+0.228466988 container exec_died 01c666addd8584f02eb7d29040d97e45d8cb7f834fd134029c1f7cca550aeb9d (image=quay.io/ceph/ceph:v19, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct  8 05:43:34 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  8 05:43:34 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:43:35 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.14154 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "label:_admin", "target": ["mon-mgr", ""]}]: dispatch
Oct  8 05:43:35 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/client_keyrings}] v 0)
Oct  8 05:43:35 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:43:35 np0005475493 ceph-mon[73572]: Saving service mgr spec with placement count:2
Oct  8 05:43:35 np0005475493 ceph-mon[73572]: Saving service crash spec with placement *
Oct  8 05:43:35 np0005475493 ceph-mon[73572]: from='client.? 192.168.122.100:0/2569311365' entity='client.admin' 
Oct  8 05:43:35 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:43:35 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:43:35 np0005475493 systemd[1]: libpod-e8e3fc0dceee083e5cafa18e1c67337eacc59ee698584fd2911b53e2e26e0b9d.scope: Deactivated successfully.
Oct  8 05:43:35 np0005475493 podman[75800]: 2025-10-08 09:43:35.199180509 +0000 UTC m=+0.524974197 container died e8e3fc0dceee083e5cafa18e1c67337eacc59ee698584fd2911b53e2e26e0b9d (image=quay.io/ceph/ceph:v19, name=quirky_proskuriakova, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  8 05:43:35 np0005475493 systemd[1]: var-lib-containers-storage-overlay-7f7fbf2a34d80000fb3c491a25e9767b2ca5f667e5e701c94e07dc3620bd76dc-merged.mount: Deactivated successfully.
Oct  8 05:43:35 np0005475493 podman[75800]: 2025-10-08 09:43:35.242488362 +0000 UTC m=+0.568282060 container remove e8e3fc0dceee083e5cafa18e1c67337eacc59ee698584fd2911b53e2e26e0b9d (image=quay.io/ceph/ceph:v19, name=quirky_proskuriakova, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  8 05:43:35 np0005475493 systemd[1]: libpod-conmon-e8e3fc0dceee083e5cafa18e1c67337eacc59ee698584fd2911b53e2e26e0b9d.scope: Deactivated successfully.
Oct  8 05:43:35 np0005475493 podman[75946]: 2025-10-08 09:43:35.317347262 +0000 UTC m=+0.052448686 container create 9b4eac011805acaafc5ad70e34e9ad2a5162e05957012a201bfcfd4e66d44923 (image=quay.io/ceph/ceph:v19, name=vigilant_haibt, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True)
Oct  8 05:43:35 np0005475493 systemd[1]: Started libpod-conmon-9b4eac011805acaafc5ad70e34e9ad2a5162e05957012a201bfcfd4e66d44923.scope.
Oct  8 05:43:35 np0005475493 podman[75946]: 2025-10-08 09:43:35.288722789 +0000 UTC m=+0.023824263 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  8 05:43:35 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:43:35 np0005475493 systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 75978 (sysctl)
Oct  8 05:43:35 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/762ff2f03f17c75d6e209d5d454bce38f82c7719a94a57829218b752f84a6867/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 05:43:35 np0005475493 systemd[1]: Mounting Arbitrary Executable File Formats File System...
Oct  8 05:43:35 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/762ff2f03f17c75d6e209d5d454bce38f82c7719a94a57829218b752f84a6867/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  8 05:43:35 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/762ff2f03f17c75d6e209d5d454bce38f82c7719a94a57829218b752f84a6867/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 05:43:35 np0005475493 systemd[1]: Mounted Arbitrary Executable File Formats File System.
Oct  8 05:43:35 np0005475493 podman[75946]: 2025-10-08 09:43:35.426760436 +0000 UTC m=+0.161861910 container init 9b4eac011805acaafc5ad70e34e9ad2a5162e05957012a201bfcfd4e66d44923 (image=quay.io/ceph/ceph:v19, name=vigilant_haibt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  8 05:43:35 np0005475493 podman[75946]: 2025-10-08 09:43:35.434259646 +0000 UTC m=+0.169361040 container start 9b4eac011805acaafc5ad70e34e9ad2a5162e05957012a201bfcfd4e66d44923 (image=quay.io/ceph/ceph:v19, name=vigilant_haibt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  8 05:43:35 np0005475493 podman[75946]: 2025-10-08 09:43:35.438128609 +0000 UTC m=+0.173230033 container attach 9b4eac011805acaafc5ad70e34e9ad2a5162e05957012a201bfcfd4e66d44923 (image=quay.io/ceph/ceph:v19, name=vigilant_haibt, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  8 05:43:35 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.14156 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "compute-0", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Oct  8 05:43:35 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Oct  8 05:43:35 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:43:35 np0005475493 ceph-mgr[73869]: [cephadm INFO root] Added label _admin to host compute-0
Oct  8 05:43:35 np0005475493 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Added label _admin to host compute-0
Oct  8 05:43:35 np0005475493 vigilant_haibt[75975]: Added label _admin to host compute-0
Oct  8 05:43:35 np0005475493 systemd[1]: libpod-9b4eac011805acaafc5ad70e34e9ad2a5162e05957012a201bfcfd4e66d44923.scope: Deactivated successfully.
Oct  8 05:43:35 np0005475493 podman[75946]: 2025-10-08 09:43:35.843415919 +0000 UTC m=+0.578517303 container died 9b4eac011805acaafc5ad70e34e9ad2a5162e05957012a201bfcfd4e66d44923 (image=quay.io/ceph/ceph:v19, name=vigilant_haibt, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Oct  8 05:43:35 np0005475493 systemd[1]: var-lib-containers-storage-overlay-762ff2f03f17c75d6e209d5d454bce38f82c7719a94a57829218b752f84a6867-merged.mount: Deactivated successfully.
Oct  8 05:43:35 np0005475493 podman[75946]: 2025-10-08 09:43:35.8890265 +0000 UTC m=+0.624127904 container remove 9b4eac011805acaafc5ad70e34e9ad2a5162e05957012a201bfcfd4e66d44923 (image=quay.io/ceph/ceph:v19, name=vigilant_haibt, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Oct  8 05:43:35 np0005475493 systemd[1]: libpod-conmon-9b4eac011805acaafc5ad70e34e9ad2a5162e05957012a201bfcfd4e66d44923.scope: Deactivated successfully.
Oct  8 05:43:35 np0005475493 podman[76083]: 2025-10-08 09:43:35.943441321 +0000 UTC m=+0.033376536 container create 6c432e43c3eae8c2adbc6a69abb0f404ce518898050af0739df01d1a5a451a45 (image=quay.io/ceph/ceph:v19, name=upbeat_wiles, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct  8 05:43:35 np0005475493 systemd[1]: Started libpod-conmon-6c432e43c3eae8c2adbc6a69abb0f404ce518898050af0739df01d1a5a451a45.scope.
Oct  8 05:43:35 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:43:35 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d9f3da93f3e1bd0ec9b8f530e9a61afa5013691aa89bcf118e58105efe218c3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 05:43:35 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d9f3da93f3e1bd0ec9b8f530e9a61afa5013691aa89bcf118e58105efe218c3/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  8 05:43:35 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d9f3da93f3e1bd0ec9b8f530e9a61afa5013691aa89bcf118e58105efe218c3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 05:43:36 np0005475493 podman[76083]: 2025-10-08 09:43:36.015813374 +0000 UTC m=+0.105748679 container init 6c432e43c3eae8c2adbc6a69abb0f404ce518898050af0739df01d1a5a451a45 (image=quay.io/ceph/ceph:v19, name=upbeat_wiles, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  8 05:43:36 np0005475493 podman[76083]: 2025-10-08 09:43:36.021343954 +0000 UTC m=+0.111279169 container start 6c432e43c3eae8c2adbc6a69abb0f404ce518898050af0739df01d1a5a451a45 (image=quay.io/ceph/ceph:v19, name=upbeat_wiles, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct  8 05:43:36 np0005475493 podman[76083]: 2025-10-08 09:43:36.024547752 +0000 UTC m=+0.114482997 container attach 6c432e43c3eae8c2adbc6a69abb0f404ce518898050af0739df01d1a5a451a45 (image=quay.io/ceph/ceph:v19, name=upbeat_wiles, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  8 05:43:36 np0005475493 podman[76083]: 2025-10-08 09:43:35.929275926 +0000 UTC m=+0.019211171 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  8 05:43:36 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  8 05:43:36 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:43:36 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:43:36 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:43:36 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/dashboard/cluster/status}] v 0)
Oct  8 05:43:36 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4076938877' entity='client.admin' 
Oct  8 05:43:36 np0005475493 upbeat_wiles[76101]: set mgr/dashboard/cluster/status
Oct  8 05:43:36 np0005475493 ceph-mgr[73869]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct  8 05:43:36 np0005475493 systemd[1]: libpod-6c432e43c3eae8c2adbc6a69abb0f404ce518898050af0739df01d1a5a451a45.scope: Deactivated successfully.
Oct  8 05:43:36 np0005475493 podman[76083]: 2025-10-08 09:43:36.53563914 +0000 UTC m=+0.625574355 container died 6c432e43c3eae8c2adbc6a69abb0f404ce518898050af0739df01d1a5a451a45 (image=quay.io/ceph/ceph:v19, name=upbeat_wiles, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid)
Oct  8 05:43:36 np0005475493 systemd[1]: var-lib-containers-storage-overlay-7d9f3da93f3e1bd0ec9b8f530e9a61afa5013691aa89bcf118e58105efe218c3-merged.mount: Deactivated successfully.
Oct  8 05:43:36 np0005475493 podman[76083]: 2025-10-08 09:43:36.57471697 +0000 UTC m=+0.664652185 container remove 6c432e43c3eae8c2adbc6a69abb0f404ce518898050af0739df01d1a5a451a45 (image=quay.io/ceph/ceph:v19, name=upbeat_wiles, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  8 05:43:36 np0005475493 systemd[1]: libpod-conmon-6c432e43c3eae8c2adbc6a69abb0f404ce518898050af0739df01d1a5a451a45.scope: Deactivated successfully.
Oct  8 05:43:36 np0005475493 podman[76243]: 2025-10-08 09:43:36.665612862 +0000 UTC m=+0.040681471 container create a3a927f750cd8eb17e3bdfde9aeb15dba8db8215490a073087561a163d19d8d7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_brattain, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  8 05:43:36 np0005475493 systemd[1]: Started libpod-conmon-a3a927f750cd8eb17e3bdfde9aeb15dba8db8215490a073087561a163d19d8d7.scope.
Oct  8 05:43:36 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:43:36 np0005475493 podman[76243]: 2025-10-08 09:43:36.646071362 +0000 UTC m=+0.021140011 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 05:43:36 np0005475493 podman[76243]: 2025-10-08 09:43:36.743640129 +0000 UTC m=+0.118708818 container init a3a927f750cd8eb17e3bdfde9aeb15dba8db8215490a073087561a163d19d8d7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_brattain, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Oct  8 05:43:36 np0005475493 podman[76243]: 2025-10-08 09:43:36.748658113 +0000 UTC m=+0.123726722 container start a3a927f750cd8eb17e3bdfde9aeb15dba8db8215490a073087561a163d19d8d7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_brattain, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Oct  8 05:43:36 np0005475493 podman[76243]: 2025-10-08 09:43:36.752612915 +0000 UTC m=+0.127681554 container attach a3a927f750cd8eb17e3bdfde9aeb15dba8db8215490a073087561a163d19d8d7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_brattain, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  8 05:43:36 np0005475493 frosty_brattain[76259]: 167 167
Oct  8 05:43:36 np0005475493 systemd[1]: libpod-a3a927f750cd8eb17e3bdfde9aeb15dba8db8215490a073087561a163d19d8d7.scope: Deactivated successfully.
Oct  8 05:43:36 np0005475493 podman[76243]: 2025-10-08 09:43:36.754479122 +0000 UTC m=+0.129547721 container died a3a927f750cd8eb17e3bdfde9aeb15dba8db8215490a073087561a163d19d8d7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_brattain, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Oct  8 05:43:36 np0005475493 systemd[1]: var-lib-containers-storage-overlay-152182e9440dfd12b7ef2a274c6b24dda8417d46d938a5e295e6281a40eafd4d-merged.mount: Deactivated successfully.
Oct  8 05:43:36 np0005475493 podman[76243]: 2025-10-08 09:43:36.794407668 +0000 UTC m=+0.169476267 container remove a3a927f750cd8eb17e3bdfde9aeb15dba8db8215490a073087561a163d19d8d7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_brattain, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  8 05:43:36 np0005475493 systemd[1]: libpod-conmon-a3a927f750cd8eb17e3bdfde9aeb15dba8db8215490a073087561a163d19d8d7.scope: Deactivated successfully.
Oct  8 05:43:37 np0005475493 podman[76304]: 2025-10-08 09:43:37.036485593 +0000 UTC m=+0.059714134 container create dc44986207888b3692f259a036a12a39bd6f696f7c53102f0edac29a0f16630d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_heyrovsky, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Oct  8 05:43:37 np0005475493 systemd[1]: Started libpod-conmon-dc44986207888b3692f259a036a12a39bd6f696f7c53102f0edac29a0f16630d.scope.
Oct  8 05:43:37 np0005475493 podman[76304]: 2025-10-08 09:43:37.015014844 +0000 UTC m=+0.038243415 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 05:43:37 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:43:37 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/304b9367aaefcc351c0b438294abcf52b35b659af01a0ad35f073f047fbdb432/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  8 05:43:37 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/304b9367aaefcc351c0b438294abcf52b35b659af01a0ad35f073f047fbdb432/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 05:43:37 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/304b9367aaefcc351c0b438294abcf52b35b659af01a0ad35f073f047fbdb432/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 05:43:37 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/304b9367aaefcc351c0b438294abcf52b35b659af01a0ad35f073f047fbdb432/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  8 05:43:37 np0005475493 podman[76304]: 2025-10-08 09:43:37.143306304 +0000 UTC m=+0.166534915 container init dc44986207888b3692f259a036a12a39bd6f696f7c53102f0edac29a0f16630d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_heyrovsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  8 05:43:37 np0005475493 podman[76304]: 2025-10-08 09:43:37.156009135 +0000 UTC m=+0.179237706 container start dc44986207888b3692f259a036a12a39bd6f696f7c53102f0edac29a0f16630d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_heyrovsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Oct  8 05:43:37 np0005475493 podman[76304]: 2025-10-08 09:43:37.160891124 +0000 UTC m=+0.184119755 container attach dc44986207888b3692f259a036a12a39bd6f696f7c53102f0edac29a0f16630d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_heyrovsky, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct  8 05:43:37 np0005475493 python3[76307]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 787292cc-8154-50c4-9e00-e9be3e817149 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/cephadm/use_repo_digest false#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  8 05:43:37 np0005475493 ceph-mon[73572]: Added label _admin to host compute-0
Oct  8 05:43:37 np0005475493 ceph-mon[73572]: from='client.? 192.168.122.100:0/4076938877' entity='client.admin' 
Oct  8 05:43:37 np0005475493 podman[76327]: 2025-10-08 09:43:37.292479686 +0000 UTC m=+0.072359263 container create 082e130216421cfa7b3c73d9d3eab37c07ade02738a491dcdba6715202b6d25a (image=quay.io/ceph/ceph:v19, name=ecstatic_merkle, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  8 05:43:37 np0005475493 systemd[1]: Started libpod-conmon-082e130216421cfa7b3c73d9d3eab37c07ade02738a491dcdba6715202b6d25a.scope.
Oct  8 05:43:37 np0005475493 podman[76327]: 2025-10-08 09:43:37.262240687 +0000 UTC m=+0.042120334 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  8 05:43:37 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:43:37 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b04fc885ce2b3a7a63e9d44de7d9ab3300af15034edb2d45945015c32842f485/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 05:43:37 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b04fc885ce2b3a7a63e9d44de7d9ab3300af15034edb2d45945015c32842f485/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 05:43:37 np0005475493 podman[76327]: 2025-10-08 09:43:37.419816227 +0000 UTC m=+0.199695844 container init 082e130216421cfa7b3c73d9d3eab37c07ade02738a491dcdba6715202b6d25a (image=quay.io/ceph/ceph:v19, name=ecstatic_merkle, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Oct  8 05:43:37 np0005475493 podman[76327]: 2025-10-08 09:43:37.426656967 +0000 UTC m=+0.206536544 container start 082e130216421cfa7b3c73d9d3eab37c07ade02738a491dcdba6715202b6d25a (image=quay.io/ceph/ceph:v19, name=ecstatic_merkle, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  8 05:43:37 np0005475493 podman[76327]: 2025-10-08 09:43:37.430545287 +0000 UTC m=+0.210424844 container attach 082e130216421cfa7b3c73d9d3eab37c07ade02738a491dcdba6715202b6d25a (image=quay.io/ceph/ceph:v19, name=ecstatic_merkle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Oct  8 05:43:37 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/use_repo_digest}] v 0)
Oct  8 05:43:37 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/948747476' entity='client.admin' 
Oct  8 05:43:37 np0005475493 systemd[1]: libpod-082e130216421cfa7b3c73d9d3eab37c07ade02738a491dcdba6715202b6d25a.scope: Deactivated successfully.
Oct  8 05:43:37 np0005475493 podman[76327]: 2025-10-08 09:43:37.855260832 +0000 UTC m=+0.635140409 container died 082e130216421cfa7b3c73d9d3eab37c07ade02738a491dcdba6715202b6d25a (image=quay.io/ceph/ceph:v19, name=ecstatic_merkle, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  8 05:43:37 np0005475493 systemd[1]: var-lib-containers-storage-overlay-b04fc885ce2b3a7a63e9d44de7d9ab3300af15034edb2d45945015c32842f485-merged.mount: Deactivated successfully.
Oct  8 05:43:37 np0005475493 podman[76327]: 2025-10-08 09:43:37.898139429 +0000 UTC m=+0.678018966 container remove 082e130216421cfa7b3c73d9d3eab37c07ade02738a491dcdba6715202b6d25a (image=quay.io/ceph/ceph:v19, name=ecstatic_merkle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  8 05:43:37 np0005475493 systemd[1]: libpod-conmon-082e130216421cfa7b3c73d9d3eab37c07ade02738a491dcdba6715202b6d25a.scope: Deactivated successfully.
Oct  8 05:43:37 np0005475493 distracted_heyrovsky[76322]: [
Oct  8 05:43:37 np0005475493 distracted_heyrovsky[76322]:    {
Oct  8 05:43:37 np0005475493 distracted_heyrovsky[76322]:        "available": false,
Oct  8 05:43:37 np0005475493 distracted_heyrovsky[76322]:        "being_replaced": false,
Oct  8 05:43:37 np0005475493 distracted_heyrovsky[76322]:        "ceph_device_lvm": false,
Oct  8 05:43:37 np0005475493 distracted_heyrovsky[76322]:        "device_id": "QEMU_DVD-ROM_QM00001",
Oct  8 05:43:37 np0005475493 distracted_heyrovsky[76322]:        "lsm_data": {},
Oct  8 05:43:37 np0005475493 distracted_heyrovsky[76322]:        "lvs": [],
Oct  8 05:43:37 np0005475493 distracted_heyrovsky[76322]:        "path": "/dev/sr0",
Oct  8 05:43:37 np0005475493 distracted_heyrovsky[76322]:        "rejected_reasons": [
Oct  8 05:43:37 np0005475493 distracted_heyrovsky[76322]:            "Has a FileSystem",
Oct  8 05:43:37 np0005475493 distracted_heyrovsky[76322]:            "Insufficient space (<5GB)"
Oct  8 05:43:37 np0005475493 distracted_heyrovsky[76322]:        ],
Oct  8 05:43:37 np0005475493 distracted_heyrovsky[76322]:        "sys_api": {
Oct  8 05:43:37 np0005475493 distracted_heyrovsky[76322]:            "actuators": null,
Oct  8 05:43:37 np0005475493 distracted_heyrovsky[76322]:            "device_nodes": [
Oct  8 05:43:37 np0005475493 distracted_heyrovsky[76322]:                "sr0"
Oct  8 05:43:37 np0005475493 distracted_heyrovsky[76322]:            ],
Oct  8 05:43:37 np0005475493 distracted_heyrovsky[76322]:            "devname": "sr0",
Oct  8 05:43:37 np0005475493 distracted_heyrovsky[76322]:            "human_readable_size": "482.00 KB",
Oct  8 05:43:37 np0005475493 distracted_heyrovsky[76322]:            "id_bus": "ata",
Oct  8 05:43:37 np0005475493 distracted_heyrovsky[76322]:            "model": "QEMU DVD-ROM",
Oct  8 05:43:37 np0005475493 distracted_heyrovsky[76322]:            "nr_requests": "2",
Oct  8 05:43:37 np0005475493 distracted_heyrovsky[76322]:            "parent": "/dev/sr0",
Oct  8 05:43:37 np0005475493 distracted_heyrovsky[76322]:            "partitions": {},
Oct  8 05:43:37 np0005475493 distracted_heyrovsky[76322]:            "path": "/dev/sr0",
Oct  8 05:43:37 np0005475493 distracted_heyrovsky[76322]:            "removable": "1",
Oct  8 05:43:37 np0005475493 distracted_heyrovsky[76322]:            "rev": "2.5+",
Oct  8 05:43:37 np0005475493 distracted_heyrovsky[76322]:            "ro": "0",
Oct  8 05:43:37 np0005475493 distracted_heyrovsky[76322]:            "rotational": "0",
Oct  8 05:43:37 np0005475493 distracted_heyrovsky[76322]:            "sas_address": "",
Oct  8 05:43:37 np0005475493 distracted_heyrovsky[76322]:            "sas_device_handle": "",
Oct  8 05:43:37 np0005475493 distracted_heyrovsky[76322]:            "scheduler_mode": "mq-deadline",
Oct  8 05:43:37 np0005475493 distracted_heyrovsky[76322]:            "sectors": 0,
Oct  8 05:43:37 np0005475493 distracted_heyrovsky[76322]:            "sectorsize": "2048",
Oct  8 05:43:37 np0005475493 distracted_heyrovsky[76322]:            "size": 493568.0,
Oct  8 05:43:37 np0005475493 distracted_heyrovsky[76322]:            "support_discard": "2048",
Oct  8 05:43:37 np0005475493 distracted_heyrovsky[76322]:            "type": "disk",
Oct  8 05:43:37 np0005475493 distracted_heyrovsky[76322]:            "vendor": "QEMU"
Oct  8 05:43:37 np0005475493 distracted_heyrovsky[76322]:        }
Oct  8 05:43:37 np0005475493 distracted_heyrovsky[76322]:    }
Oct  8 05:43:37 np0005475493 distracted_heyrovsky[76322]: ]
Oct  8 05:43:37 np0005475493 systemd[1]: libpod-dc44986207888b3692f259a036a12a39bd6f696f7c53102f0edac29a0f16630d.scope: Deactivated successfully.
Oct  8 05:43:38 np0005475493 podman[77318]: 2025-10-08 09:43:38.032789444 +0000 UTC m=+0.029127615 container died dc44986207888b3692f259a036a12a39bd6f696f7c53102f0edac29a0f16630d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_heyrovsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  8 05:43:38 np0005475493 systemd[1]: var-lib-containers-storage-overlay-304b9367aaefcc351c0b438294abcf52b35b659af01a0ad35f073f047fbdb432-merged.mount: Deactivated successfully.
Oct  8 05:43:38 np0005475493 podman[77318]: 2025-10-08 09:43:38.077441196 +0000 UTC m=+0.073779267 container remove dc44986207888b3692f259a036a12a39bd6f696f7c53102f0edac29a0f16630d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_heyrovsky, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Oct  8 05:43:38 np0005475493 systemd[1]: libpod-conmon-dc44986207888b3692f259a036a12a39bd6f696f7c53102f0edac29a0f16630d.scope: Deactivated successfully.
Oct  8 05:43:38 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  8 05:43:38 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:43:38 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  8 05:43:38 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:43:38 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  8 05:43:38 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:43:38 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  8 05:43:38 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:43:38 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Oct  8 05:43:38 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct  8 05:43:38 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  8 05:43:38 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  8 05:43:38 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct  8 05:43:38 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  8 05:43:38 np0005475493 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Oct  8 05:43:38 np0005475493 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Oct  8 05:43:38 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  8 05:43:38 np0005475493 ceph-mgr[73869]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct  8 05:43:38 np0005475493 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.conf
Oct  8 05:43:38 np0005475493 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.conf
Oct  8 05:43:38 np0005475493 ceph-mon[73572]: from='client.? 192.168.122.100:0/948747476' entity='client.admin' 
Oct  8 05:43:38 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:43:38 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:43:38 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:43:38 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:43:38 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct  8 05:43:38 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  8 05:43:38 np0005475493 ceph-mon[73572]: Updating compute-0:/etc/ceph/ceph.conf
Oct  8 05:43:39 np0005475493 ansible-async_wrapper.py[77730]: Invoked with j735635272403 30 /home/zuul/.ansible/tmp/ansible-tmp-1759916618.3351061-33583-210432935456000/AnsiballZ_command.py _
Oct  8 05:43:39 np0005475493 ansible-async_wrapper.py[77782]: Starting module and watcher
Oct  8 05:43:39 np0005475493 ansible-async_wrapper.py[77782]: Start watching 77783 (30)
Oct  8 05:43:39 np0005475493 ansible-async_wrapper.py[77783]: Start module (77783)
Oct  8 05:43:39 np0005475493 ansible-async_wrapper.py[77730]: Return async_wrapper task started.
Oct  8 05:43:39 np0005475493 python3[77785]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 787292cc-8154-50c4-9e00-e9be3e817149 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  8 05:43:39 np0005475493 podman[77844]: 2025-10-08 09:43:39.278501546 +0000 UTC m=+0.042928330 container create 179eed01d974bb4a03908dd3db019d026f24ee3be1e33b354552a3cb8175e5b9 (image=quay.io/ceph/ceph:v19, name=stupefied_gauss, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  8 05:43:39 np0005475493 systemd[1]: Started libpod-conmon-179eed01d974bb4a03908dd3db019d026f24ee3be1e33b354552a3cb8175e5b9.scope.
Oct  8 05:43:39 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:43:39 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27249e971f01809786676cb2ca44f2ddac5ba0f44fbc05e3469ae12b95201df2/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 05:43:39 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27249e971f01809786676cb2ca44f2ddac5ba0f44fbc05e3469ae12b95201df2/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 05:43:39 np0005475493 podman[77844]: 2025-10-08 09:43:39.262679689 +0000 UTC m=+0.027106493 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  8 05:43:39 np0005475493 podman[77844]: 2025-10-08 09:43:39.380194479 +0000 UTC m=+0.144621273 container init 179eed01d974bb4a03908dd3db019d026f24ee3be1e33b354552a3cb8175e5b9 (image=quay.io/ceph/ceph:v19, name=stupefied_gauss, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Oct  8 05:43:39 np0005475493 podman[77844]: 2025-10-08 09:43:39.390484736 +0000 UTC m=+0.154911540 container start 179eed01d974bb4a03908dd3db019d026f24ee3be1e33b354552a3cb8175e5b9 (image=quay.io/ceph/ceph:v19, name=stupefied_gauss, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  8 05:43:39 np0005475493 podman[77844]: 2025-10-08 09:43:39.394633212 +0000 UTC m=+0.159060046 container attach 179eed01d974bb4a03908dd3db019d026f24ee3be1e33b354552a3cb8175e5b9 (image=quay.io/ceph/ceph:v19, name=stupefied_gauss, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Oct  8 05:43:39 np0005475493 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Oct  8 05:43:39 np0005475493 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Oct  8 05:43:39 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.14162 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct  8 05:43:39 np0005475493 stupefied_gauss[77899]: 
Oct  8 05:43:39 np0005475493 stupefied_gauss[77899]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Oct  8 05:43:39 np0005475493 systemd[1]: libpod-179eed01d974bb4a03908dd3db019d026f24ee3be1e33b354552a3cb8175e5b9.scope: Deactivated successfully.
Oct  8 05:43:39 np0005475493 podman[77844]: 2025-10-08 09:43:39.788810339 +0000 UTC m=+0.553237123 container died 179eed01d974bb4a03908dd3db019d026f24ee3be1e33b354552a3cb8175e5b9 (image=quay.io/ceph/ceph:v19, name=stupefied_gauss, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  8 05:43:39 np0005475493 systemd[1]: var-lib-containers-storage-overlay-27249e971f01809786676cb2ca44f2ddac5ba0f44fbc05e3469ae12b95201df2-merged.mount: Deactivated successfully.
Oct  8 05:43:39 np0005475493 podman[77844]: 2025-10-08 09:43:39.840000932 +0000 UTC m=+0.604427726 container remove 179eed01d974bb4a03908dd3db019d026f24ee3be1e33b354552a3cb8175e5b9 (image=quay.io/ceph/ceph:v19, name=stupefied_gauss, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  8 05:43:39 np0005475493 ceph-mon[73572]: Updating compute-0:/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.conf
Oct  8 05:43:39 np0005475493 systemd[1]: libpod-conmon-179eed01d974bb4a03908dd3db019d026f24ee3be1e33b354552a3cb8175e5b9.scope: Deactivated successfully.
Oct  8 05:43:39 np0005475493 ansible-async_wrapper.py[77783]: Module complete (77783)
Oct  8 05:43:40 np0005475493 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.client.admin.keyring
Oct  8 05:43:40 np0005475493 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.client.admin.keyring
Oct  8 05:43:40 np0005475493 ceph-mgr[73869]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct  8 05:43:40 np0005475493 python3[78385]: ansible-ansible.legacy.async_status Invoked with jid=j735635272403.77730 mode=status _async_dir=/root/.ansible_async
Oct  8 05:43:40 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  8 05:43:40 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:43:40 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  8 05:43:40 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:43:40 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct  8 05:43:40 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:43:40 np0005475493 ceph-mgr[73869]: [progress INFO root] update: starting ev 25a7b103-1f46-4154-b4d3-4ab41f29742b (Updating crash deployment (+1 -> 1))
Oct  8 05:43:40 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Oct  8 05:43:40 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Oct  8 05:43:40 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Oct  8 05:43:40 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  8 05:43:40 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  8 05:43:40 np0005475493 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Deploying daemon crash.compute-0 on compute-0
Oct  8 05:43:40 np0005475493 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Deploying daemon crash.compute-0 on compute-0
Oct  8 05:43:40 np0005475493 ceph-mon[73572]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Oct  8 05:43:40 np0005475493 ceph-mon[73572]: Updating compute-0:/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.client.admin.keyring
Oct  8 05:43:40 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:43:40 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:43:40 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:43:40 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Oct  8 05:43:40 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Oct  8 05:43:40 np0005475493 python3[78552]: ansible-ansible.legacy.async_status Invoked with jid=j735635272403.77730 mode=cleanup _async_dir=/root/.ansible_async
Oct  8 05:43:41 np0005475493 podman[78623]: 2025-10-08 09:43:41.145055486 +0000 UTC m=+0.037467262 container create 0f13b2bd85961e334f73a4ab7ee10aa36eb6a315e56a3998ab4cb3915b013a87 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_mcclintock, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct  8 05:43:41 np0005475493 systemd[1]: Started libpod-conmon-0f13b2bd85961e334f73a4ab7ee10aa36eb6a315e56a3998ab4cb3915b013a87.scope.
Oct  8 05:43:41 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:43:41 np0005475493 podman[78623]: 2025-10-08 09:43:41.130770227 +0000 UTC m=+0.023182023 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 05:43:41 np0005475493 podman[78623]: 2025-10-08 09:43:41.232282775 +0000 UTC m=+0.124694581 container init 0f13b2bd85961e334f73a4ab7ee10aa36eb6a315e56a3998ab4cb3915b013a87 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_mcclintock, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  8 05:43:41 np0005475493 podman[78623]: 2025-10-08 09:43:41.238600319 +0000 UTC m=+0.131012125 container start 0f13b2bd85961e334f73a4ab7ee10aa36eb6a315e56a3998ab4cb3915b013a87 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_mcclintock, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  8 05:43:41 np0005475493 lucid_mcclintock[78639]: 167 167
Oct  8 05:43:41 np0005475493 systemd[1]: libpod-0f13b2bd85961e334f73a4ab7ee10aa36eb6a315e56a3998ab4cb3915b013a87.scope: Deactivated successfully.
Oct  8 05:43:41 np0005475493 podman[78623]: 2025-10-08 09:43:41.242747726 +0000 UTC m=+0.135159522 container attach 0f13b2bd85961e334f73a4ab7ee10aa36eb6a315e56a3998ab4cb3915b013a87 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_mcclintock, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  8 05:43:41 np0005475493 podman[78623]: 2025-10-08 09:43:41.243002154 +0000 UTC m=+0.135413950 container died 0f13b2bd85961e334f73a4ab7ee10aa36eb6a315e56a3998ab4cb3915b013a87 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_mcclintock, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  8 05:43:41 np0005475493 systemd[1]: var-lib-containers-storage-overlay-787f82196a0829ebe12cd6ecb93d3c4be046e8791b6e1816f762cdecf98571db-merged.mount: Deactivated successfully.
Oct  8 05:43:41 np0005475493 podman[78623]: 2025-10-08 09:43:41.277167514 +0000 UTC m=+0.169579290 container remove 0f13b2bd85961e334f73a4ab7ee10aa36eb6a315e56a3998ab4cb3915b013a87 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_mcclintock, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  8 05:43:41 np0005475493 systemd[1]: libpod-conmon-0f13b2bd85961e334f73a4ab7ee10aa36eb6a315e56a3998ab4cb3915b013a87.scope: Deactivated successfully.
Oct  8 05:43:41 np0005475493 systemd[1]: Reloading.
Oct  8 05:43:41 np0005475493 python3[78670]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  8 05:43:41 np0005475493 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  8 05:43:41 np0005475493 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  8 05:43:41 np0005475493 systemd[1]: Reloading.
Oct  8 05:43:41 np0005475493 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  8 05:43:41 np0005475493 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  8 05:43:41 np0005475493 ceph-mon[73572]: Deploying daemon crash.compute-0 on compute-0
Oct  8 05:43:41 np0005475493 systemd[1]: Starting Ceph crash.compute-0 for 787292cc-8154-50c4-9e00-e9be3e817149...
Oct  8 05:43:42 np0005475493 python3[78788]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 787292cc-8154-50c4-9e00-e9be3e817149 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  8 05:43:42 np0005475493 podman[78830]: 2025-10-08 09:43:42.1668558 +0000 UTC m=+0.062216762 container create b397c2eb05b79f4dbe5166fc9ce9ec74d492714eade3bed1c5a2f56d1061bd45 (image=quay.io/ceph/ceph:v19, name=gracious_allen, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  8 05:43:42 np0005475493 podman[78836]: 2025-10-08 09:43:42.173072181 +0000 UTC m=+0.057911660 container create f2b90c859a7310489a10feb4ada2b4bf5595269880e09d000b5461d6bc9e0698 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-crash-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Oct  8 05:43:42 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/298ad304a810b11c2de94c3170e19f6087dccaf6328800bae4fc4a34e9d5f5b5/merged/etc/ceph/ceph.client.crash.compute-0.keyring supports timestamps until 2038 (0x7fffffff)
Oct  8 05:43:42 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/298ad304a810b11c2de94c3170e19f6087dccaf6328800bae4fc4a34e9d5f5b5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 05:43:42 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/298ad304a810b11c2de94c3170e19f6087dccaf6328800bae4fc4a34e9d5f5b5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 05:43:42 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/298ad304a810b11c2de94c3170e19f6087dccaf6328800bae4fc4a34e9d5f5b5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  8 05:43:42 np0005475493 systemd[1]: Started libpod-conmon-b397c2eb05b79f4dbe5166fc9ce9ec74d492714eade3bed1c5a2f56d1061bd45.scope.
Oct  8 05:43:42 np0005475493 podman[78836]: 2025-10-08 09:43:42.234675553 +0000 UTC m=+0.119515032 container init f2b90c859a7310489a10feb4ada2b4bf5595269880e09d000b5461d6bc9e0698 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-crash-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct  8 05:43:42 np0005475493 podman[78830]: 2025-10-08 09:43:42.144540035 +0000 UTC m=+0.039901087 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  8 05:43:42 np0005475493 podman[78836]: 2025-10-08 09:43:42.149155206 +0000 UTC m=+0.033994705 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 05:43:42 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:43:42 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f57fec2920682b544793e447e9a3500e0326a9c9f4c51ec3ae862c6378011c6/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 05:43:42 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f57fec2920682b544793e447e9a3500e0326a9c9f4c51ec3ae862c6378011c6/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 05:43:42 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f57fec2920682b544793e447e9a3500e0326a9c9f4c51ec3ae862c6378011c6/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct  8 05:43:42 np0005475493 podman[78836]: 2025-10-08 09:43:42.25020127 +0000 UTC m=+0.135040759 container start f2b90c859a7310489a10feb4ada2b4bf5595269880e09d000b5461d6bc9e0698 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-crash-compute-0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  8 05:43:42 np0005475493 bash[78836]: f2b90c859a7310489a10feb4ada2b4bf5595269880e09d000b5461d6bc9e0698
Oct  8 05:43:42 np0005475493 podman[78830]: 2025-10-08 09:43:42.263339043 +0000 UTC m=+0.158700005 container init b397c2eb05b79f4dbe5166fc9ce9ec74d492714eade3bed1c5a2f56d1061bd45 (image=quay.io/ceph/ceph:v19, name=gracious_allen, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct  8 05:43:42 np0005475493 systemd[1]: Started Ceph crash.compute-0 for 787292cc-8154-50c4-9e00-e9be3e817149.
Oct  8 05:43:42 np0005475493 podman[78830]: 2025-10-08 09:43:42.276312682 +0000 UTC m=+0.171673634 container start b397c2eb05b79f4dbe5166fc9ce9ec74d492714eade3bed1c5a2f56d1061bd45 (image=quay.io/ceph/ceph:v19, name=gracious_allen, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  8 05:43:42 np0005475493 podman[78830]: 2025-10-08 09:43:42.280248922 +0000 UTC m=+0.175609904 container attach b397c2eb05b79f4dbe5166fc9ce9ec74d492714eade3bed1c5a2f56d1061bd45 (image=quay.io/ceph/ceph:v19, name=gracious_allen, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  8 05:43:42 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-crash-compute-0[78863]: INFO:ceph-crash:pinging cluster to exercise our key
Oct  8 05:43:42 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  8 05:43:42 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:43:42 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  8 05:43:42 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:43:42 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Oct  8 05:43:42 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:43:42 np0005475493 ceph-mgr[73869]: [progress INFO root] complete: finished ev 25a7b103-1f46-4154-b4d3-4ab41f29742b (Updating crash deployment (+1 -> 1))
Oct  8 05:43:42 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Oct  8 05:43:42 np0005475493 ceph-mgr[73869]: [progress INFO root] Completed event 25a7b103-1f46-4154-b4d3-4ab41f29742b (Updating crash deployment (+1 -> 1)) in 2 seconds
Oct  8 05:43:42 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:43:42 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Oct  8 05:43:42 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:43:42 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Oct  8 05:43:42 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:43:42 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-crash-compute-0[78863]: 2025-10-08T09:43:42.433+0000 7fdfd4548640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Oct  8 05:43:42 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-crash-compute-0[78863]: 2025-10-08T09:43:42.433+0000 7fdfd4548640 -1 AuthRegistry(0x7fdfcc0698f0) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Oct  8 05:43:42 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-crash-compute-0[78863]: 2025-10-08T09:43:42.434+0000 7fdfd4548640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Oct  8 05:43:42 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-crash-compute-0[78863]: 2025-10-08T09:43:42.434+0000 7fdfd4548640 -1 AuthRegistry(0x7fdfd4546ff0) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Oct  8 05:43:42 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-crash-compute-0[78863]: 2025-10-08T09:43:42.437+0000 7fdfd22bd640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1]
Oct  8 05:43:42 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-crash-compute-0[78863]: 2025-10-08T09:43:42.437+0000 7fdfd4548640 -1 monclient: authenticate NOTE: no keyring found; disabled cephx authentication
Oct  8 05:43:42 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-crash-compute-0[78863]: [errno 13] RADOS permission denied (error connecting to the cluster)
Oct  8 05:43:42 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-crash-compute-0[78863]: INFO:ceph-crash:monitoring path /var/lib/ceph/crash, delay 600s
Oct  8 05:43:42 np0005475493 ceph-mgr[73869]: mgr.server send_report Giving up on OSDs that haven't reported yet, sending potentially incomplete PG state to mon
Oct  8 05:43:42 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct  8 05:43:42 np0005475493 ceph-mon[73572]: log_channel(cluster) log [WRN] : Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Oct  8 05:43:42 np0005475493 ceph-mgr[73869]: [progress INFO root] Writing back 1 completed events
Oct  8 05:43:42 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct  8 05:43:42 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:43:42 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.14164 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct  8 05:43:42 np0005475493 gracious_allen[78868]: 
Oct  8 05:43:42 np0005475493 gracious_allen[78868]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Oct  8 05:43:42 np0005475493 systemd[1]: libpod-b397c2eb05b79f4dbe5166fc9ce9ec74d492714eade3bed1c5a2f56d1061bd45.scope: Deactivated successfully.
Oct  8 05:43:42 np0005475493 conmon[78868]: conmon b397c2eb05b79f4dbe51 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b397c2eb05b79f4dbe5166fc9ce9ec74d492714eade3bed1c5a2f56d1061bd45.scope/container/memory.events
Oct  8 05:43:42 np0005475493 podman[78830]: 2025-10-08 09:43:42.66696107 +0000 UTC m=+0.562322032 container died b397c2eb05b79f4dbe5166fc9ce9ec74d492714eade3bed1c5a2f56d1061bd45 (image=quay.io/ceph/ceph:v19, name=gracious_allen, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Oct  8 05:43:42 np0005475493 systemd[1]: var-lib-containers-storage-overlay-5f57fec2920682b544793e447e9a3500e0326a9c9f4c51ec3ae862c6378011c6-merged.mount: Deactivated successfully.
Oct  8 05:43:42 np0005475493 podman[78830]: 2025-10-08 09:43:42.716703558 +0000 UTC m=+0.612064520 container remove b397c2eb05b79f4dbe5166fc9ce9ec74d492714eade3bed1c5a2f56d1061bd45 (image=quay.io/ceph/ceph:v19, name=gracious_allen, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct  8 05:43:42 np0005475493 systemd[1]: libpod-conmon-b397c2eb05b79f4dbe5166fc9ce9ec74d492714eade3bed1c5a2f56d1061bd45.scope: Deactivated successfully.
Oct  8 05:43:43 np0005475493 podman[79070]: 2025-10-08 09:43:43.00816378 +0000 UTC m=+0.079884124 container exec 01c666addd8584f02eb7d29040d97e45d8cb7f834fd134029c1f7cca550aeb9d (image=quay.io/ceph/ceph:v19, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Oct  8 05:43:43 np0005475493 podman[79070]: 2025-10-08 09:43:43.092202831 +0000 UTC m=+0.163923175 container exec_died 01c666addd8584f02eb7d29040d97e45d8cb7f834fd134029c1f7cca550aeb9d (image=quay.io/ceph/ceph:v19, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Oct  8 05:43:43 np0005475493 python3[79115]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 787292cc-8154-50c4-9e00-e9be3e817149 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  8 05:43:43 np0005475493 podman[79145]: 2025-10-08 09:43:43.270267651 +0000 UTC m=+0.044463747 container create 46e18ac38c8a0cba92d0c06114e6122aabc624f3bc68651da0ca4d77ae2907ba (image=quay.io/ceph/ceph:v19, name=youthful_bhabha, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1)
Oct  8 05:43:43 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  8 05:43:43 np0005475493 systemd[1]: Started libpod-conmon-46e18ac38c8a0cba92d0c06114e6122aabc624f3bc68651da0ca4d77ae2907ba.scope.
Oct  8 05:43:43 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:43:43 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:43:43 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:43:43 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:43:43 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:43:43 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:43:43 np0005475493 ceph-mon[73572]: Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Oct  8 05:43:43 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:43:43 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:43:43 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7e7ea5ce9a534376979795f206d43f17df78353c96da16ca2ac0c9b2b992e7e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 05:43:43 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7e7ea5ce9a534376979795f206d43f17df78353c96da16ca2ac0c9b2b992e7e/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct  8 05:43:43 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7e7ea5ce9a534376979795f206d43f17df78353c96da16ca2ac0c9b2b992e7e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 05:43:43 np0005475493 podman[79145]: 2025-10-08 09:43:43.25298676 +0000 UTC m=+0.027182866 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  8 05:43:43 np0005475493 podman[79145]: 2025-10-08 09:43:43.355519469 +0000 UTC m=+0.129715585 container init 46e18ac38c8a0cba92d0c06114e6122aabc624f3bc68651da0ca4d77ae2907ba (image=quay.io/ceph/ceph:v19, name=youthful_bhabha, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct  8 05:43:43 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  8 05:43:43 np0005475493 podman[79145]: 2025-10-08 09:43:43.361349268 +0000 UTC m=+0.135545354 container start 46e18ac38c8a0cba92d0c06114e6122aabc624f3bc68651da0ca4d77ae2907ba (image=quay.io/ceph/ceph:v19, name=youthful_bhabha, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct  8 05:43:43 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:43:43 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  8 05:43:43 np0005475493 podman[79145]: 2025-10-08 09:43:43.364209736 +0000 UTC m=+0.138405832 container attach 46e18ac38c8a0cba92d0c06114e6122aabc624f3bc68651da0ca4d77ae2907ba (image=quay.io/ceph/ceph:v19, name=youthful_bhabha, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  8 05:43:43 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:43:43 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  8 05:43:43 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  8 05:43:43 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct  8 05:43:43 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  8 05:43:43 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct  8 05:43:43 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:43:43 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/alertmanager/web_user}] v 0)
Oct  8 05:43:43 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:43:43 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/alertmanager/web_password}] v 0)
Oct  8 05:43:43 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:43:43 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/prometheus/web_user}] v 0)
Oct  8 05:43:43 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:43:43 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/prometheus/web_password}] v 0)
Oct  8 05:43:43 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:43:43 np0005475493 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-0 (unknown last config time)...
Oct  8 05:43:43 np0005475493 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-0 (unknown last config time)...
Oct  8 05:43:43 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Oct  8 05:43:43 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Oct  8 05:43:43 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Oct  8 05:43:43 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Oct  8 05:43:43 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  8 05:43:43 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  8 05:43:43 np0005475493 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-0 on compute-0
Oct  8 05:43:43 np0005475493 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-0 on compute-0
Oct  8 05:43:43 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=log_to_file}] v 0)
Oct  8 05:43:43 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3914095065' entity='client.admin' 
Oct  8 05:43:43 np0005475493 systemd[1]: libpod-46e18ac38c8a0cba92d0c06114e6122aabc624f3bc68651da0ca4d77ae2907ba.scope: Deactivated successfully.
Oct  8 05:43:43 np0005475493 podman[79145]: 2025-10-08 09:43:43.735118648 +0000 UTC m=+0.509314734 container died 46e18ac38c8a0cba92d0c06114e6122aabc624f3bc68651da0ca4d77ae2907ba (image=quay.io/ceph/ceph:v19, name=youthful_bhabha, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct  8 05:43:43 np0005475493 systemd[1]: var-lib-containers-storage-overlay-c7e7ea5ce9a534376979795f206d43f17df78353c96da16ca2ac0c9b2b992e7e-merged.mount: Deactivated successfully.
Oct  8 05:43:43 np0005475493 podman[79145]: 2025-10-08 09:43:43.803960172 +0000 UTC m=+0.578156258 container remove 46e18ac38c8a0cba92d0c06114e6122aabc624f3bc68651da0ca4d77ae2907ba (image=quay.io/ceph/ceph:v19, name=youthful_bhabha, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct  8 05:43:43 np0005475493 systemd[1]: libpod-conmon-46e18ac38c8a0cba92d0c06114e6122aabc624f3bc68651da0ca4d77ae2907ba.scope: Deactivated successfully.
Oct  8 05:43:43 np0005475493 podman[79307]: 2025-10-08 09:43:43.892269414 +0000 UTC m=+0.034793549 container create c7b130b0dfc12fe33591e5637e609cdd623a85a558f717ac19ac251d1dfd4dc8 (image=quay.io/ceph/ceph:v19, name=brave_matsumoto, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  8 05:43:43 np0005475493 systemd[1]: Started libpod-conmon-c7b130b0dfc12fe33591e5637e609cdd623a85a558f717ac19ac251d1dfd4dc8.scope.
Oct  8 05:43:43 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:43:43 np0005475493 podman[79307]: 2025-10-08 09:43:43.942120936 +0000 UTC m=+0.084645091 container init c7b130b0dfc12fe33591e5637e609cdd623a85a558f717ac19ac251d1dfd4dc8 (image=quay.io/ceph/ceph:v19, name=brave_matsumoto, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  8 05:43:43 np0005475493 podman[79307]: 2025-10-08 09:43:43.948177063 +0000 UTC m=+0.090701198 container start c7b130b0dfc12fe33591e5637e609cdd623a85a558f717ac19ac251d1dfd4dc8 (image=quay.io/ceph/ceph:v19, name=brave_matsumoto, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default)
Oct  8 05:43:43 np0005475493 podman[79307]: 2025-10-08 09:43:43.951183884 +0000 UTC m=+0.093708039 container attach c7b130b0dfc12fe33591e5637e609cdd623a85a558f717ac19ac251d1dfd4dc8 (image=quay.io/ceph/ceph:v19, name=brave_matsumoto, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  8 05:43:43 np0005475493 brave_matsumoto[79324]: 167 167
Oct  8 05:43:43 np0005475493 systemd[1]: libpod-c7b130b0dfc12fe33591e5637e609cdd623a85a558f717ac19ac251d1dfd4dc8.scope: Deactivated successfully.
Oct  8 05:43:43 np0005475493 podman[79307]: 2025-10-08 09:43:43.952143734 +0000 UTC m=+0.094667869 container died c7b130b0dfc12fe33591e5637e609cdd623a85a558f717ac19ac251d1dfd4dc8 (image=quay.io/ceph/ceph:v19, name=brave_matsumoto, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  8 05:43:43 np0005475493 systemd[1]: var-lib-containers-storage-overlay-56fb366c8aa6e88c2bd3f00d14312ca637745a404d169cfc9e68a34c24d130bb-merged.mount: Deactivated successfully.
Oct  8 05:43:43 np0005475493 podman[79307]: 2025-10-08 09:43:43.877473891 +0000 UTC m=+0.019998046 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  8 05:43:43 np0005475493 podman[79307]: 2025-10-08 09:43:43.983192427 +0000 UTC m=+0.125716562 container remove c7b130b0dfc12fe33591e5637e609cdd623a85a558f717ac19ac251d1dfd4dc8 (image=quay.io/ceph/ceph:v19, name=brave_matsumoto, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  8 05:43:43 np0005475493 systemd[1]: libpod-conmon-c7b130b0dfc12fe33591e5637e609cdd623a85a558f717ac19ac251d1dfd4dc8.scope: Deactivated successfully.
Oct  8 05:43:44 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  8 05:43:44 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:43:44 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  8 05:43:44 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:43:44 np0005475493 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Reconfiguring mgr.compute-0.ixicfj (unknown last config time)...
Oct  8 05:43:44 np0005475493 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Reconfiguring mgr.compute-0.ixicfj (unknown last config time)...
Oct  8 05:43:44 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.ixicfj", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Oct  8 05:43:44 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.ixicfj", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Oct  8 05:43:44 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0)
Oct  8 05:43:44 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "mgr services"}]: dispatch
Oct  8 05:43:44 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  8 05:43:44 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  8 05:43:44 np0005475493 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.compute-0.ixicfj on compute-0
Oct  8 05:43:44 np0005475493 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.compute-0.ixicfj on compute-0
Oct  8 05:43:44 np0005475493 ansible-async_wrapper.py[77782]: Done in kid B.
Oct  8 05:43:44 np0005475493 python3[79365]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 787292cc-8154-50c4-9e00-e9be3e817149 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global mon_cluster_log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  8 05:43:44 np0005475493 podman[79394]: 2025-10-08 09:43:44.155680075 +0000 UTC m=+0.035505971 container create a44bc277c66ea519eb36c35827901effd92af7b0b2efb67fe855132bbd46130f (image=quay.io/ceph/ceph:v19, name=relaxed_joliot, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  8 05:43:44 np0005475493 systemd[1]: Started libpod-conmon-a44bc277c66ea519eb36c35827901effd92af7b0b2efb67fe855132bbd46130f.scope.
Oct  8 05:43:44 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:43:44 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2175227c24a0970519a32ca5fba05acaadcece1a2c593dd6dd4d8bf735c34e4/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 05:43:44 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2175227c24a0970519a32ca5fba05acaadcece1a2c593dd6dd4d8bf735c34e4/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct  8 05:43:44 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2175227c24a0970519a32ca5fba05acaadcece1a2c593dd6dd4d8bf735c34e4/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 05:43:44 np0005475493 podman[79394]: 2025-10-08 09:43:44.232082092 +0000 UTC m=+0.111907978 container init a44bc277c66ea519eb36c35827901effd92af7b0b2efb67fe855132bbd46130f (image=quay.io/ceph/ceph:v19, name=relaxed_joliot, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Oct  8 05:43:44 np0005475493 podman[79394]: 2025-10-08 09:43:44.139895691 +0000 UTC m=+0.019721607 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  8 05:43:44 np0005475493 podman[79394]: 2025-10-08 09:43:44.237406115 +0000 UTC m=+0.117232011 container start a44bc277c66ea519eb36c35827901effd92af7b0b2efb67fe855132bbd46130f (image=quay.io/ceph/ceph:v19, name=relaxed_joliot, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Oct  8 05:43:44 np0005475493 podman[79394]: 2025-10-08 09:43:44.240342176 +0000 UTC m=+0.120168072 container attach a44bc277c66ea519eb36c35827901effd92af7b0b2efb67fe855132bbd46130f (image=quay.io/ceph/ceph:v19, name=relaxed_joliot, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct  8 05:43:44 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:43:44 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:43:44 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  8 05:43:44 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:43:44 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:43:44 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:43:44 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:43:44 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:43:44 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Oct  8 05:43:44 np0005475493 ceph-mon[73572]: from='client.? 192.168.122.100:0/3914095065' entity='client.admin' 
Oct  8 05:43:44 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:43:44 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:43:44 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.ixicfj", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Oct  8 05:43:44 np0005475493 podman[79470]: 2025-10-08 09:43:44.461127927 +0000 UTC m=+0.048361716 container create e75dfb2c305fde73906843cc8aaeb5511c9da27f86d5e20b07f4faed81ada41e (image=quay.io/ceph/ceph:v19, name=affectionate_merkle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Oct  8 05:43:44 np0005475493 systemd[1]: Started libpod-conmon-e75dfb2c305fde73906843cc8aaeb5511c9da27f86d5e20b07f4faed81ada41e.scope.
Oct  8 05:43:44 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:43:44 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct  8 05:43:44 np0005475493 podman[79470]: 2025-10-08 09:43:44.441511254 +0000 UTC m=+0.028745073 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  8 05:43:44 np0005475493 podman[79470]: 2025-10-08 09:43:44.552501154 +0000 UTC m=+0.139735013 container init e75dfb2c305fde73906843cc8aaeb5511c9da27f86d5e20b07f4faed81ada41e (image=quay.io/ceph/ceph:v19, name=affectionate_merkle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct  8 05:43:44 np0005475493 podman[79470]: 2025-10-08 09:43:44.562822081 +0000 UTC m=+0.150055890 container start e75dfb2c305fde73906843cc8aaeb5511c9da27f86d5e20b07f4faed81ada41e (image=quay.io/ceph/ceph:v19, name=affectionate_merkle, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  8 05:43:44 np0005475493 affectionate_merkle[79487]: 167 167
Oct  8 05:43:44 np0005475493 systemd[1]: libpod-e75dfb2c305fde73906843cc8aaeb5511c9da27f86d5e20b07f4faed81ada41e.scope: Deactivated successfully.
Oct  8 05:43:44 np0005475493 podman[79470]: 2025-10-08 09:43:44.572024904 +0000 UTC m=+0.159258773 container attach e75dfb2c305fde73906843cc8aaeb5511c9da27f86d5e20b07f4faed81ada41e (image=quay.io/ceph/ceph:v19, name=affectionate_merkle, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  8 05:43:44 np0005475493 podman[79470]: 2025-10-08 09:43:44.572708854 +0000 UTC m=+0.159942653 container died e75dfb2c305fde73906843cc8aaeb5511c9da27f86d5e20b07f4faed81ada41e (image=quay.io/ceph/ceph:v19, name=affectionate_merkle, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  8 05:43:44 np0005475493 systemd[1]: var-lib-containers-storage-overlay-c604830965ad5dc36b89f6815f054c21707d02bc73ff36b35f1b1885674271ff-merged.mount: Deactivated successfully.
Oct  8 05:43:44 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mon_cluster_log_to_file}] v 0)
Oct  8 05:43:44 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2813966862' entity='client.admin' 
Oct  8 05:43:44 np0005475493 podman[79470]: 2025-10-08 09:43:44.628074785 +0000 UTC m=+0.215308564 container remove e75dfb2c305fde73906843cc8aaeb5511c9da27f86d5e20b07f4faed81ada41e (image=quay.io/ceph/ceph:v19, name=affectionate_merkle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  8 05:43:44 np0005475493 systemd[1]: libpod-conmon-e75dfb2c305fde73906843cc8aaeb5511c9da27f86d5e20b07f4faed81ada41e.scope: Deactivated successfully.
Oct  8 05:43:44 np0005475493 systemd[1]: libpod-a44bc277c66ea519eb36c35827901effd92af7b0b2efb67fe855132bbd46130f.scope: Deactivated successfully.
Oct  8 05:43:44 np0005475493 podman[79394]: 2025-10-08 09:43:44.640427954 +0000 UTC m=+0.520253850 container died a44bc277c66ea519eb36c35827901effd92af7b0b2efb67fe855132bbd46130f (image=quay.io/ceph/ceph:v19, name=relaxed_joliot, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  8 05:43:44 np0005475493 systemd[1]: var-lib-containers-storage-overlay-a2175227c24a0970519a32ca5fba05acaadcece1a2c593dd6dd4d8bf735c34e4-merged.mount: Deactivated successfully.
Oct  8 05:43:44 np0005475493 podman[79394]: 2025-10-08 09:43:44.680399882 +0000 UTC m=+0.560225768 container remove a44bc277c66ea519eb36c35827901effd92af7b0b2efb67fe855132bbd46130f (image=quay.io/ceph/ceph:v19, name=relaxed_joliot, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  8 05:43:44 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  8 05:43:44 np0005475493 systemd[1]: libpod-conmon-a44bc277c66ea519eb36c35827901effd92af7b0b2efb67fe855132bbd46130f.scope: Deactivated successfully.
Oct  8 05:43:44 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:43:44 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  8 05:43:44 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:43:44 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  8 05:43:44 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  8 05:43:44 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct  8 05:43:44 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  8 05:43:44 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct  8 05:43:44 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:43:45 np0005475493 python3[79567]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 787292cc-8154-50c4-9e00-e9be3e817149 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd set-require-min-compat-client mimic#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  8 05:43:45 np0005475493 podman[79568]: 2025-10-08 09:43:45.166495662 +0000 UTC m=+0.062407577 container create 9b8f5309c3c1dcce65bbd09bfa5c6a9f90f5273b069857082dea9e78f4613d59 (image=quay.io/ceph/ceph:v19, name=boring_elbakyan, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct  8 05:43:45 np0005475493 systemd[1]: Started libpod-conmon-9b8f5309c3c1dcce65bbd09bfa5c6a9f90f5273b069857082dea9e78f4613d59.scope.
Oct  8 05:43:45 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:43:45 np0005475493 podman[79568]: 2025-10-08 09:43:45.140574886 +0000 UTC m=+0.036486851 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  8 05:43:45 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39a080ef48b401fca3d59b9592e810b7920e03a752a69dffe9f4fb8de05d3a4e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 05:43:45 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39a080ef48b401fca3d59b9592e810b7920e03a752a69dffe9f4fb8de05d3a4e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 05:43:45 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39a080ef48b401fca3d59b9592e810b7920e03a752a69dffe9f4fb8de05d3a4e/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct  8 05:43:45 np0005475493 podman[79568]: 2025-10-08 09:43:45.253543515 +0000 UTC m=+0.149455420 container init 9b8f5309c3c1dcce65bbd09bfa5c6a9f90f5273b069857082dea9e78f4613d59 (image=quay.io/ceph/ceph:v19, name=boring_elbakyan, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Oct  8 05:43:45 np0005475493 podman[79568]: 2025-10-08 09:43:45.264509072 +0000 UTC m=+0.160420987 container start 9b8f5309c3c1dcce65bbd09bfa5c6a9f90f5273b069857082dea9e78f4613d59 (image=quay.io/ceph/ceph:v19, name=boring_elbakyan, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct  8 05:43:45 np0005475493 podman[79568]: 2025-10-08 09:43:45.268791784 +0000 UTC m=+0.164703669 container attach 9b8f5309c3c1dcce65bbd09bfa5c6a9f90f5273b069857082dea9e78f4613d59 (image=quay.io/ceph/ceph:v19, name=boring_elbakyan, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Oct  8 05:43:45 np0005475493 ceph-mon[73572]: Reconfiguring mon.compute-0 (unknown last config time)...
Oct  8 05:43:45 np0005475493 ceph-mon[73572]: Reconfiguring daemon mon.compute-0 on compute-0
Oct  8 05:43:45 np0005475493 ceph-mon[73572]: Reconfiguring mgr.compute-0.ixicfj (unknown last config time)...
Oct  8 05:43:45 np0005475493 ceph-mon[73572]: Reconfiguring daemon mgr.compute-0.ixicfj on compute-0
Oct  8 05:43:45 np0005475493 ceph-mon[73572]: from='client.? 192.168.122.100:0/2813966862' entity='client.admin' 
Oct  8 05:43:45 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:43:45 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:43:45 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  8 05:43:45 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:43:45 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd set-require-min-compat-client", "version": "mimic"} v 0)
Oct  8 05:43:45 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3016004367' entity='client.admin' cmd=[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]: dispatch
Oct  8 05:43:46 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e2 do_prune osdmap full prune enabled
Oct  8 05:43:46 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e2 encode_pending skipping prime_pg_temp; mapping job did not start
Oct  8 05:43:46 np0005475493 ceph-mon[73572]: from='client.? 192.168.122.100:0/3016004367' entity='client.admin' cmd=[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]: dispatch
Oct  8 05:43:46 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3016004367' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Oct  8 05:43:46 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e3 e3: 0 total, 0 up, 0 in
Oct  8 05:43:46 np0005475493 boring_elbakyan[79584]: set require_min_compat_client to mimic
Oct  8 05:43:46 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e3: 0 total, 0 up, 0 in
Oct  8 05:43:46 np0005475493 systemd[1]: libpod-9b8f5309c3c1dcce65bbd09bfa5c6a9f90f5273b069857082dea9e78f4613d59.scope: Deactivated successfully.
Oct  8 05:43:46 np0005475493 podman[79568]: 2025-10-08 09:43:46.413134092 +0000 UTC m=+1.309045977 container died 9b8f5309c3c1dcce65bbd09bfa5c6a9f90f5273b069857082dea9e78f4613d59 (image=quay.io/ceph/ceph:v19, name=boring_elbakyan, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid)
Oct  8 05:43:46 np0005475493 systemd[1]: var-lib-containers-storage-overlay-39a080ef48b401fca3d59b9592e810b7920e03a752a69dffe9f4fb8de05d3a4e-merged.mount: Deactivated successfully.
Oct  8 05:43:46 np0005475493 podman[79568]: 2025-10-08 09:43:46.451154599 +0000 UTC m=+1.347066484 container remove 9b8f5309c3c1dcce65bbd09bfa5c6a9f90f5273b069857082dea9e78f4613d59 (image=quay.io/ceph/ceph:v19, name=boring_elbakyan, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  8 05:43:46 np0005475493 systemd[1]: libpod-conmon-9b8f5309c3c1dcce65bbd09bfa5c6a9f90f5273b069857082dea9e78f4613d59.scope: Deactivated successfully.
Oct  8 05:43:46 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct  8 05:43:47 np0005475493 python3[79648]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 787292cc-8154-50c4-9e00-e9be3e817149 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  8 05:43:47 np0005475493 podman[79649]: 2025-10-08 09:43:47.172277449 +0000 UTC m=+0.063376987 container create 1b9fcd02c6ff0f5c565a4f9cc127caea41212f8d0e933e584f3af3354ff17c09 (image=quay.io/ceph/ceph:v19, name=angry_newton, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct  8 05:43:47 np0005475493 systemd[1]: Started libpod-conmon-1b9fcd02c6ff0f5c565a4f9cc127caea41212f8d0e933e584f3af3354ff17c09.scope.
Oct  8 05:43:47 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:43:47 np0005475493 podman[79649]: 2025-10-08 09:43:47.144080073 +0000 UTC m=+0.035179611 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  8 05:43:47 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94a8cfa8df5d8caab09bb643968b01c91c169fe0863a650e4c6d7613b77f1cf4/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 05:43:47 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94a8cfa8df5d8caab09bb643968b01c91c169fe0863a650e4c6d7613b77f1cf4/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 05:43:47 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94a8cfa8df5d8caab09bb643968b01c91c169fe0863a650e4c6d7613b77f1cf4/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct  8 05:43:47 np0005475493 podman[79649]: 2025-10-08 09:43:47.260348773 +0000 UTC m=+0.151448311 container init 1b9fcd02c6ff0f5c565a4f9cc127caea41212f8d0e933e584f3af3354ff17c09 (image=quay.io/ceph/ceph:v19, name=angry_newton, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Oct  8 05:43:47 np0005475493 podman[79649]: 2025-10-08 09:43:47.270992441 +0000 UTC m=+0.162091959 container start 1b9fcd02c6ff0f5c565a4f9cc127caea41212f8d0e933e584f3af3354ff17c09 (image=quay.io/ceph/ceph:v19, name=angry_newton, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Oct  8 05:43:47 np0005475493 podman[79649]: 2025-10-08 09:43:47.275622433 +0000 UTC m=+0.166721951 container attach 1b9fcd02c6ff0f5c565a4f9cc127caea41212f8d0e933e584f3af3354ff17c09 (image=quay.io/ceph/ceph:v19, name=angry_newton, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid)
Oct  8 05:43:47 np0005475493 ceph-mon[73572]: from='client.? 192.168.122.100:0/3016004367' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Oct  8 05:43:47 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.14172 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Oct  8 05:43:48 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Oct  8 05:43:48 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:43:48 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Oct  8 05:43:48 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:43:48 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Oct  8 05:43:48 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:43:48 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Oct  8 05:43:48 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:43:48 np0005475493 ceph-mgr[73869]: [cephadm INFO root] Added host compute-0
Oct  8 05:43:48 np0005475493 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Added host compute-0
Oct  8 05:43:48 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  8 05:43:48 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  8 05:43:48 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct  8 05:43:48 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  8 05:43:48 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct  8 05:43:48 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:43:48 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  8 05:43:48 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:43:48 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:43:48 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:43:48 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:43:48 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  8 05:43:48 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:43:48 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct  8 05:43:49 np0005475493 ceph-mon[73572]: Added host compute-0
Oct  8 05:43:49 np0005475493 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Deploying cephadm binary to compute-1
Oct  8 05:43:49 np0005475493 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Deploying cephadm binary to compute-1
Oct  8 05:43:50 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct  8 05:43:51 np0005475493 ceph-mon[73572]: Deploying cephadm binary to compute-1
Oct  8 05:43:52 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct  8 05:43:52 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 05:43:52 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 05:43:52 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 05:43:52 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 05:43:52 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 05:43:52 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 05:43:53 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  8 05:43:53 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Oct  8 05:43:53 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:43:53 np0005475493 ceph-mgr[73869]: [cephadm INFO root] Added host compute-1
Oct  8 05:43:53 np0005475493 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Added host compute-1
Oct  8 05:43:54 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct  8 05:43:54 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:43:54 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct  8 05:43:54 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:43:54 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct  8 05:43:54 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:43:54 np0005475493 ceph-mon[73572]: Added host compute-1
Oct  8 05:43:54 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:43:54 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:43:54 np0005475493 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Deploying cephadm binary to compute-2
Oct  8 05:43:54 np0005475493 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Deploying cephadm binary to compute-2
Oct  8 05:43:55 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct  8 05:43:55 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:43:55 np0005475493 ceph-mon[73572]: Deploying cephadm binary to compute-2
Oct  8 05:43:55 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:43:56 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct  8 05:43:58 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  8 05:43:58 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct  8 05:43:58 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Oct  8 05:43:58 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:43:58 np0005475493 ceph-mgr[73869]: [cephadm INFO root] Added host compute-2
Oct  8 05:43:58 np0005475493 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Added host compute-2
Oct  8 05:43:58 np0005475493 ceph-mgr[73869]: [cephadm INFO root] Saving service mon spec with placement compute-0;compute-1;compute-2
Oct  8 05:43:58 np0005475493 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Saving service mon spec with placement compute-0;compute-1;compute-2
Oct  8 05:43:58 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Oct  8 05:43:58 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:43:58 np0005475493 ceph-mgr[73869]: [cephadm INFO root] Saving service mgr spec with placement compute-0;compute-1;compute-2
Oct  8 05:43:58 np0005475493 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement compute-0;compute-1;compute-2
Oct  8 05:43:58 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Oct  8 05:43:58 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:43:58 np0005475493 ceph-mgr[73869]: [cephadm INFO root] Marking host: compute-0 for OSDSpec preview refresh.
Oct  8 05:43:58 np0005475493 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Marking host: compute-0 for OSDSpec preview refresh.
Oct  8 05:43:58 np0005475493 ceph-mgr[73869]: [cephadm INFO root] Marking host: compute-1 for OSDSpec preview refresh.
Oct  8 05:43:58 np0005475493 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Marking host: compute-1 for OSDSpec preview refresh.
Oct  8 05:43:58 np0005475493 ceph-mgr[73869]: [cephadm INFO root] Saving service osd.default_drive_group spec with placement compute-0;compute-1;compute-2
Oct  8 05:43:58 np0005475493 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Saving service osd.default_drive_group spec with placement compute-0;compute-1;compute-2
Oct  8 05:43:58 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.osd.default_drive_group}] v 0)
Oct  8 05:43:58 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:43:58 np0005475493 angry_newton[79664]: Added host 'compute-0' with addr '192.168.122.100'
Oct  8 05:43:58 np0005475493 angry_newton[79664]: Added host 'compute-1' with addr '192.168.122.101'
Oct  8 05:43:58 np0005475493 angry_newton[79664]: Added host 'compute-2' with addr '192.168.122.102'
Oct  8 05:43:58 np0005475493 angry_newton[79664]: Scheduled mon update...
Oct  8 05:43:58 np0005475493 angry_newton[79664]: Scheduled mgr update...
Oct  8 05:43:58 np0005475493 angry_newton[79664]: Scheduled osd.default_drive_group update...
Oct  8 05:43:59 np0005475493 systemd[1]: libpod-1b9fcd02c6ff0f5c565a4f9cc127caea41212f8d0e933e584f3af3354ff17c09.scope: Deactivated successfully.
Oct  8 05:43:59 np0005475493 podman[79649]: 2025-10-08 09:43:59.009416569 +0000 UTC m=+11.900516127 container died 1b9fcd02c6ff0f5c565a4f9cc127caea41212f8d0e933e584f3af3354ff17c09 (image=quay.io/ceph/ceph:v19, name=angry_newton, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Oct  8 05:43:59 np0005475493 systemd[1]: var-lib-containers-storage-overlay-94a8cfa8df5d8caab09bb643968b01c91c169fe0863a650e4c6d7613b77f1cf4-merged.mount: Deactivated successfully.
Oct  8 05:43:59 np0005475493 podman[79649]: 2025-10-08 09:43:59.056935278 +0000 UTC m=+11.948034836 container remove 1b9fcd02c6ff0f5c565a4f9cc127caea41212f8d0e933e584f3af3354ff17c09 (image=quay.io/ceph/ceph:v19, name=angry_newton, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True)
Oct  8 05:43:59 np0005475493 systemd[1]: libpod-conmon-1b9fcd02c6ff0f5c565a4f9cc127caea41212f8d0e933e584f3af3354ff17c09.scope: Deactivated successfully.
Oct  8 05:43:59 np0005475493 python3[79821]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 787292cc-8154-50c4-9e00-e9be3e817149 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  8 05:43:59 np0005475493 podman[79823]: 2025-10-08 09:43:59.606817487 +0000 UTC m=+0.050449700 container create 38f23f457679ff470dd26bb1dfbb84366a95dac34433a7ae6c25139bfe4b5945 (image=quay.io/ceph/ceph:v19, name=hungry_elbakyan, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  8 05:43:59 np0005475493 systemd[1]: Started libpod-conmon-38f23f457679ff470dd26bb1dfbb84366a95dac34433a7ae6c25139bfe4b5945.scope.
Oct  8 05:43:59 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:43:59 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/117f16257f3336811983e5370edc7b965984433d64d9eaf16806a44ccc1ed99d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 05:43:59 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/117f16257f3336811983e5370edc7b965984433d64d9eaf16806a44ccc1ed99d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 05:43:59 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/117f16257f3336811983e5370edc7b965984433d64d9eaf16806a44ccc1ed99d/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct  8 05:43:59 np0005475493 podman[79823]: 2025-10-08 09:43:59.58832954 +0000 UTC m=+0.031961773 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  8 05:43:59 np0005475493 podman[79823]: 2025-10-08 09:43:59.693564832 +0000 UTC m=+0.137197125 container init 38f23f457679ff470dd26bb1dfbb84366a95dac34433a7ae6c25139bfe4b5945 (image=quay.io/ceph/ceph:v19, name=hungry_elbakyan, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Oct  8 05:43:59 np0005475493 podman[79823]: 2025-10-08 09:43:59.699797954 +0000 UTC m=+0.143430207 container start 38f23f457679ff470dd26bb1dfbb84366a95dac34433a7ae6c25139bfe4b5945 (image=quay.io/ceph/ceph:v19, name=hungry_elbakyan, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  8 05:43:59 np0005475493 podman[79823]: 2025-10-08 09:43:59.704653422 +0000 UTC m=+0.148285715 container attach 38f23f457679ff470dd26bb1dfbb84366a95dac34433a7ae6c25139bfe4b5945 (image=quay.io/ceph/ceph:v19, name=hungry_elbakyan, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  8 05:43:59 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:43:59 np0005475493 ceph-mon[73572]: Added host compute-2
Oct  8 05:43:59 np0005475493 ceph-mon[73572]: Saving service mon spec with placement compute-0;compute-1;compute-2
Oct  8 05:43:59 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:43:59 np0005475493 ceph-mon[73572]: Saving service mgr spec with placement compute-0;compute-1;compute-2
Oct  8 05:43:59 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:43:59 np0005475493 ceph-mon[73572]: Marking host: compute-0 for OSDSpec preview refresh.
Oct  8 05:43:59 np0005475493 ceph-mon[73572]: Marking host: compute-1 for OSDSpec preview refresh.
Oct  8 05:43:59 np0005475493 ceph-mon[73572]: Saving service osd.default_drive_group spec with placement compute-0;compute-1;compute-2
Oct  8 05:43:59 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:44:00 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Oct  8 05:44:00 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2204811910' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Oct  8 05:44:00 np0005475493 hungry_elbakyan[79840]: 
Oct  8 05:44:00 np0005475493 hungry_elbakyan[79840]: {"fsid":"787292cc-8154-50c4-9e00-e9be3e817149","health":{"status":"HEALTH_WARN","checks":{"TOO_FEW_OSDS":{"severity":"HEALTH_WARN","summary":{"message":"OSD count 0 < osd_pool_default_size 1","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":56,"monmap":{"epoch":1,"min_mon_release_name":"squid","num_mons":1},"osdmap":{"epoch":3,"num_osds":0,"num_up_osds":0,"osd_up_since":0,"num_in_osds":0,"osd_in_since":0,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":0,"bytes_avail":0,"bytes_total":0},"fsmap":{"epoch":1,"btime":"2025-10-08T09:43:01:374245+0000","by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":1,"modified":"2025-10-08T09:43:01.375926+0000","services":{}},"progress_events":{}}
Oct  8 05:44:00 np0005475493 systemd[1]: libpod-38f23f457679ff470dd26bb1dfbb84366a95dac34433a7ae6c25139bfe4b5945.scope: Deactivated successfully.
Oct  8 05:44:00 np0005475493 podman[79865]: 2025-10-08 09:44:00.161905317 +0000 UTC m=+0.025958769 container died 38f23f457679ff470dd26bb1dfbb84366a95dac34433a7ae6c25139bfe4b5945 (image=quay.io/ceph/ceph:v19, name=hungry_elbakyan, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  8 05:44:00 np0005475493 systemd[1]: var-lib-containers-storage-overlay-117f16257f3336811983e5370edc7b965984433d64d9eaf16806a44ccc1ed99d-merged.mount: Deactivated successfully.
Oct  8 05:44:00 np0005475493 podman[79865]: 2025-10-08 09:44:00.192632561 +0000 UTC m=+0.056685943 container remove 38f23f457679ff470dd26bb1dfbb84366a95dac34433a7ae6c25139bfe4b5945 (image=quay.io/ceph/ceph:v19, name=hungry_elbakyan, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True)
Oct  8 05:44:00 np0005475493 systemd[1]: libpod-conmon-38f23f457679ff470dd26bb1dfbb84366a95dac34433a7ae6c25139bfe4b5945.scope: Deactivated successfully.
Oct  8 05:44:00 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct  8 05:44:02 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct  8 05:44:03 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  8 05:44:04 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct  8 05:44:06 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct  8 05:44:08 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  8 05:44:08 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct  8 05:44:10 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct  8 05:44:12 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v19: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct  8 05:44:13 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  8 05:44:14 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct  8 05:44:14 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:44:14 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct  8 05:44:14 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:44:14 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct  8 05:44:14 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:44:14 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct  8 05:44:14 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:44:14 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0)
Oct  8 05:44:14 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Oct  8 05:44:14 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  8 05:44:14 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  8 05:44:14 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct  8 05:44:14 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  8 05:44:14 np0005475493 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.conf
Oct  8 05:44:14 np0005475493 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.conf
Oct  8 05:44:14 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct  8 05:44:14 np0005475493 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.conf
Oct  8 05:44:14 np0005475493 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.conf
Oct  8 05:44:15 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:44:15 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:44:15 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:44:15 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:44:15 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Oct  8 05:44:15 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  8 05:44:15 np0005475493 ceph-mon[73572]: Updating compute-1:/etc/ceph/ceph.conf
Oct  8 05:44:15 np0005475493 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Oct  8 05:44:15 np0005475493 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Oct  8 05:44:15 np0005475493 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.client.admin.keyring
Oct  8 05:44:15 np0005475493 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.client.admin.keyring
Oct  8 05:44:16 np0005475493 ceph-mon[73572]: Updating compute-1:/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.conf
Oct  8 05:44:16 np0005475493 ceph-mon[73572]: Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Oct  8 05:44:16 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct  8 05:44:16 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:44:16 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct  8 05:44:16 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:44:16 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct  8 05:44:16 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:44:16 np0005475493 ceph-mgr[73869]: [cephadm ERROR cephadm.serve] Failed to apply mon spec MONSpec.from_json(yaml.safe_load('''service_type: mon#012service_name: mon#012placement:#012  hosts:#012  - compute-0#012  - compute-1#012  - compute-2#012''')): Cannot place <MONSpec for service_name=mon> on compute-2: Unknown hosts
Oct  8 05:44:16 np0005475493 ceph-mgr[73869]: log_channel(cephadm) log [ERR] : Failed to apply mon spec MONSpec.from_json(yaml.safe_load('''service_type: mon#012service_name: mon#012placement:#012  hosts:#012  - compute-0#012  - compute-1#012  - compute-2#012''')): Cannot place <MONSpec for service_name=mon> on compute-2: Unknown hosts
Oct  8 05:44:16 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct  8 05:44:16 np0005475493 ceph-mgr[73869]: [cephadm ERROR cephadm.serve] Failed to apply mgr spec ServiceSpec.from_json(yaml.safe_load('''service_type: mgr#012service_name: mgr#012placement:#012  hosts:#012  - compute-0#012  - compute-1#012  - compute-2#012''')): Cannot place <ServiceSpec for service_name=mgr> on compute-2: Unknown hosts
Oct  8 05:44:16 np0005475493 ceph-mgr[73869]: log_channel(cephadm) log [ERR] : Failed to apply mgr spec ServiceSpec.from_json(yaml.safe_load('''service_type: mgr#012service_name: mgr#012placement:#012  hosts:#012  - compute-0#012  - compute-1#012  - compute-2#012''')): Cannot place <ServiceSpec for service_name=mgr> on compute-2: Unknown hosts
Oct  8 05:44:16 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct  8 05:44:16 np0005475493 ceph-mgr[73869]: [progress INFO root] update: starting ev 959ab803-73bd-457e-a384-35c9535dfa13 (Updating crash deployment (+1 -> 2))
Oct  8 05:44:16 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Oct  8 05:44:16 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Oct  8 05:44:16 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:44:16.291+0000 7fa806647640 -1 log_channel(cephadm) log [ERR] : Failed to apply mon spec MONSpec.from_json(yaml.safe_load('''service_type: mon
Oct  8 05:44:16 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: service_name: mon
Oct  8 05:44:16 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: placement:
Oct  8 05:44:16 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]:  hosts:
Oct  8 05:44:16 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]:  - compute-0
Oct  8 05:44:16 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]:  - compute-1
Oct  8 05:44:16 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]:  - compute-2
Oct  8 05:44:16 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ''')): Cannot place <MONSpec for service_name=mon> on compute-2: Unknown hosts
Oct  8 05:44:16 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:44:16.292+0000 7fa806647640 -1 log_channel(cephadm) log [ERR] : Failed to apply mgr spec ServiceSpec.from_json(yaml.safe_load('''service_type: mgr
Oct  8 05:44:16 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: service_name: mgr
Oct  8 05:44:16 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: placement:
Oct  8 05:44:16 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]:  hosts:
Oct  8 05:44:16 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]:  - compute-0
Oct  8 05:44:16 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]:  - compute-1
Oct  8 05:44:16 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]:  - compute-2
Oct  8 05:44:16 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ''')): Cannot place <ServiceSpec for service_name=mgr> on compute-2: Unknown hosts
Oct  8 05:44:16 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Oct  8 05:44:16 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  8 05:44:16 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  8 05:44:16 np0005475493 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Deploying daemon crash.compute-1 on compute-1
Oct  8 05:44:16 np0005475493 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Deploying daemon crash.compute-1 on compute-1
Oct  8 05:44:17 np0005475493 ceph-mon[73572]: Updating compute-1:/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.client.admin.keyring
Oct  8 05:44:17 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:44:17 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:44:17 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:44:17 np0005475493 ceph-mon[73572]: Failed to apply mon spec MONSpec.from_json(yaml.safe_load('''service_type: mon#012service_name: mon#012placement:#012  hosts:#012  - compute-0#012  - compute-1#012  - compute-2#012''')): Cannot place <MONSpec for service_name=mon> on compute-2: Unknown hosts
Oct  8 05:44:17 np0005475493 ceph-mon[73572]: Failed to apply mgr spec ServiceSpec.from_json(yaml.safe_load('''service_type: mgr#012service_name: mgr#012placement:#012  hosts:#012  - compute-0#012  - compute-1#012  - compute-2#012''')): Cannot place <ServiceSpec for service_name=mgr> on compute-2: Unknown hosts
Oct  8 05:44:17 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Oct  8 05:44:17 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Oct  8 05:44:17 np0005475493 ceph-mon[73572]: Deploying daemon crash.compute-1 on compute-1
Oct  8 05:44:17 np0005475493 ceph-mon[73572]: log_channel(cluster) log [WRN] : Health check failed: Failed to apply 2 service(s): mon,mgr (CEPHADM_APPLY_SPEC_FAIL)
Oct  8 05:44:18 np0005475493 ceph-mon[73572]: Health check failed: Failed to apply 2 service(s): mon,mgr (CEPHADM_APPLY_SPEC_FAIL)
Oct  8 05:44:18 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct  8 05:44:18 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  8 05:44:18 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct  8 05:44:18 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:44:18 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct  8 05:44:18 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:44:18 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Oct  8 05:44:18 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:44:18 np0005475493 ceph-mgr[73869]: [progress INFO root] complete: finished ev 959ab803-73bd-457e-a384-35c9535dfa13 (Updating crash deployment (+1 -> 2))
Oct  8 05:44:18 np0005475493 ceph-mgr[73869]: [progress INFO root] Completed event 959ab803-73bd-457e-a384-35c9535dfa13 (Updating crash deployment (+1 -> 2)) in 2 seconds
Oct  8 05:44:18 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Oct  8 05:44:18 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:44:18 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct  8 05:44:18 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  8 05:44:18 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct  8 05:44:18 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  8 05:44:18 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  8 05:44:18 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  8 05:44:18 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct  8 05:44:18 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  8 05:44:18 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  8 05:44:18 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  8 05:44:19 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:44:19 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:44:19 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:44:19 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:44:19 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  8 05:44:19 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  8 05:44:19 np0005475493 podman[79970]: 2025-10-08 09:44:19.314532608 +0000 UTC m=+0.062631645 container create 2f59882b0f59ce938913c163f893ed683146fdacb14c245683cc9eb9f1fe1f62 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_hugle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct  8 05:44:19 np0005475493 systemd[1]: Started libpod-conmon-2f59882b0f59ce938913c163f893ed683146fdacb14c245683cc9eb9f1fe1f62.scope.
Oct  8 05:44:19 np0005475493 podman[79970]: 2025-10-08 09:44:19.289147098 +0000 UTC m=+0.037246115 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 05:44:19 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:44:19 np0005475493 podman[79970]: 2025-10-08 09:44:19.40350614 +0000 UTC m=+0.151605237 container init 2f59882b0f59ce938913c163f893ed683146fdacb14c245683cc9eb9f1fe1f62 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_hugle, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  8 05:44:19 np0005475493 podman[79970]: 2025-10-08 09:44:19.409786864 +0000 UTC m=+0.157885871 container start 2f59882b0f59ce938913c163f893ed683146fdacb14c245683cc9eb9f1fe1f62 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_hugle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Oct  8 05:44:19 np0005475493 podman[79970]: 2025-10-08 09:44:19.413639781 +0000 UTC m=+0.161738778 container attach 2f59882b0f59ce938913c163f893ed683146fdacb14c245683cc9eb9f1fe1f62 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_hugle, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  8 05:44:19 np0005475493 laughing_hugle[79986]: 167 167
Oct  8 05:44:19 np0005475493 systemd[1]: libpod-2f59882b0f59ce938913c163f893ed683146fdacb14c245683cc9eb9f1fe1f62.scope: Deactivated successfully.
Oct  8 05:44:19 np0005475493 podman[79970]: 2025-10-08 09:44:19.418239733 +0000 UTC m=+0.166338730 container died 2f59882b0f59ce938913c163f893ed683146fdacb14c245683cc9eb9f1fe1f62 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_hugle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  8 05:44:19 np0005475493 systemd[1]: var-lib-containers-storage-overlay-30c3fd269ff53e7accbd49fbe6cd2022214fbbce6383cfbcae05585c06b1ba98-merged.mount: Deactivated successfully.
Oct  8 05:44:19 np0005475493 podman[79970]: 2025-10-08 09:44:19.457009804 +0000 UTC m=+0.205108801 container remove 2f59882b0f59ce938913c163f893ed683146fdacb14c245683cc9eb9f1fe1f62 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_hugle, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Oct  8 05:44:19 np0005475493 systemd[1]: libpod-conmon-2f59882b0f59ce938913c163f893ed683146fdacb14c245683cc9eb9f1fe1f62.scope: Deactivated successfully.
Oct  8 05:44:19 np0005475493 podman[80009]: 2025-10-08 09:44:19.644289696 +0000 UTC m=+0.040689041 container create 35a5936dc2a6f8d35c4b48fc7d5dd1a62fc46a948e13f8a79b84dfe0328b76fa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_noether, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  8 05:44:19 np0005475493 systemd[1]: Started libpod-conmon-35a5936dc2a6f8d35c4b48fc7d5dd1a62fc46a948e13f8a79b84dfe0328b76fa.scope.
Oct  8 05:44:19 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:44:19 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6124af7819ea7b990535e4686b9bb97a999d82404ad96c5eac3c4efccbd160ed/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  8 05:44:19 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6124af7819ea7b990535e4686b9bb97a999d82404ad96c5eac3c4efccbd160ed/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 05:44:19 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6124af7819ea7b990535e4686b9bb97a999d82404ad96c5eac3c4efccbd160ed/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 05:44:19 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6124af7819ea7b990535e4686b9bb97a999d82404ad96c5eac3c4efccbd160ed/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  8 05:44:19 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6124af7819ea7b990535e4686b9bb97a999d82404ad96c5eac3c4efccbd160ed/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  8 05:44:19 np0005475493 podman[80009]: 2025-10-08 09:44:19.705661111 +0000 UTC m=+0.102060436 container init 35a5936dc2a6f8d35c4b48fc7d5dd1a62fc46a948e13f8a79b84dfe0328b76fa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_noether, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Oct  8 05:44:19 np0005475493 podman[80009]: 2025-10-08 09:44:19.714185963 +0000 UTC m=+0.110585288 container start 35a5936dc2a6f8d35c4b48fc7d5dd1a62fc46a948e13f8a79b84dfe0328b76fa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_noether, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct  8 05:44:19 np0005475493 podman[80009]: 2025-10-08 09:44:19.71830292 +0000 UTC m=+0.114702245 container attach 35a5936dc2a6f8d35c4b48fc7d5dd1a62fc46a948e13f8a79b84dfe0328b76fa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_noether, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Oct  8 05:44:19 np0005475493 podman[80009]: 2025-10-08 09:44:19.624917551 +0000 UTC m=+0.021316906 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 05:44:20 np0005475493 funny_noether[80026]: --> passed data devices: 0 physical, 1 LVM
Oct  8 05:44:20 np0005475493 funny_noether[80026]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct  8 05:44:20 np0005475493 funny_noether[80026]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct  8 05:44:20 np0005475493 funny_noether[80026]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 85fe3e7b-5e0f-4a19-934c-310215b2e933
Oct  8 05:44:20 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct  8 05:44:20 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "4c1bfacb-a774-41da-9670-a649dcd6f8d0"} v 0)
Oct  8 05:44:20 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='client.? 192.168.122.101:0/2322205066' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "4c1bfacb-a774-41da-9670-a649dcd6f8d0"}]: dispatch
Oct  8 05:44:20 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e3 do_prune osdmap full prune enabled
Oct  8 05:44:20 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e3 encode_pending skipping prime_pg_temp; mapping job did not start
Oct  8 05:44:20 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='client.? 192.168.122.101:0/2322205066' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "4c1bfacb-a774-41da-9670-a649dcd6f8d0"}]': finished
Oct  8 05:44:20 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e4 e4: 1 total, 0 up, 1 in
Oct  8 05:44:20 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e4: 1 total, 0 up, 1 in
Oct  8 05:44:20 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Oct  8 05:44:20 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct  8 05:44:20 np0005475493 ceph-mgr[73869]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct  8 05:44:20 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "85fe3e7b-5e0f-4a19-934c-310215b2e933"} v 0)
Oct  8 05:44:20 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1515855431' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "85fe3e7b-5e0f-4a19-934c-310215b2e933"}]: dispatch
Oct  8 05:44:20 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e4 do_prune osdmap full prune enabled
Oct  8 05:44:20 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e4 encode_pending skipping prime_pg_temp; mapping job did not start
Oct  8 05:44:20 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1515855431' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "85fe3e7b-5e0f-4a19-934c-310215b2e933"}]': finished
Oct  8 05:44:20 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e5 e5: 2 total, 0 up, 2 in
Oct  8 05:44:20 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e5: 2 total, 0 up, 2 in
Oct  8 05:44:20 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Oct  8 05:44:20 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct  8 05:44:20 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Oct  8 05:44:20 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct  8 05:44:20 np0005475493 ceph-mgr[73869]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct  8 05:44:20 np0005475493 ceph-mgr[73869]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct  8 05:44:20 np0005475493 funny_noether[80026]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-1
Oct  8 05:44:20 np0005475493 funny_noether[80026]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg0/ceph_lv0
Oct  8 05:44:20 np0005475493 lvm[80087]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct  8 05:44:20 np0005475493 lvm[80087]: VG ceph_vg0 finished
Oct  8 05:44:20 np0005475493 funny_noether[80026]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Oct  8 05:44:20 np0005475493 funny_noether[80026]: Running command: /usr/bin/ln -s /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-1/block
Oct  8 05:44:20 np0005475493 funny_noether[80026]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-1/activate.monmap
Oct  8 05:44:21 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0)
Oct  8 05:44:21 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4120750441' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Oct  8 05:44:21 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0)
Oct  8 05:44:21 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3796036156' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Oct  8 05:44:21 np0005475493 funny_noether[80026]: stderr: got monmap epoch 1
Oct  8 05:44:21 np0005475493 funny_noether[80026]: --> Creating keyring file for osd.1
Oct  8 05:44:21 np0005475493 funny_noether[80026]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/keyring
Oct  8 05:44:21 np0005475493 funny_noether[80026]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/
Oct  8 05:44:21 np0005475493 funny_noether[80026]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 1 --monmap /var/lib/ceph/osd/ceph-1/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-1/ --osd-uuid 85fe3e7b-5e0f-4a19-934c-310215b2e933 --setuser ceph --setgroup ceph
Oct  8 05:44:21 np0005475493 ceph-mon[73572]: from='client.? 192.168.122.101:0/2322205066' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "4c1bfacb-a774-41da-9670-a649dcd6f8d0"}]: dispatch
Oct  8 05:44:21 np0005475493 ceph-mon[73572]: from='client.? 192.168.122.101:0/2322205066' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "4c1bfacb-a774-41da-9670-a649dcd6f8d0"}]': finished
Oct  8 05:44:21 np0005475493 ceph-mon[73572]: from='client.? 192.168.122.100:0/1515855431' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "85fe3e7b-5e0f-4a19-934c-310215b2e933"}]: dispatch
Oct  8 05:44:21 np0005475493 ceph-mon[73572]: from='client.? 192.168.122.100:0/1515855431' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "85fe3e7b-5e0f-4a19-934c-310215b2e933"}]': finished
Oct  8 05:44:22 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v27: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct  8 05:44:22 np0005475493 ceph-mon[73572]: log_channel(cluster) log [INF] : Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Oct  8 05:44:22 np0005475493 ceph-mon[73572]: Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Oct  8 05:44:22 np0005475493 ceph-mgr[73869]: [balancer INFO root] Optimize plan auto_2025-10-08_09:44:22
Oct  8 05:44:22 np0005475493 ceph-mgr[73869]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  8 05:44:22 np0005475493 ceph-mgr[73869]: [balancer INFO root] do_upmap
Oct  8 05:44:22 np0005475493 ceph-mgr[73869]: [balancer INFO root] No pools available
Oct  8 05:44:22 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] _maybe_adjust
Oct  8 05:44:22 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  8 05:44:22 np0005475493 ceph-mgr[73869]: [progress INFO root] Writing back 2 completed events
Oct  8 05:44:22 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct  8 05:44:22 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 05:44:22 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 05:44:22 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  8 05:44:22 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:44:22 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 05:44:22 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 05:44:22 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 05:44:22 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 05:44:23 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e5 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  8 05:44:23 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:44:23 np0005475493 funny_noether[80026]: stderr: 2025-10-08T09:44:21.147+0000 7f7a2550b740 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) No valid bdev label found
Oct  8 05:44:23 np0005475493 funny_noether[80026]: stderr: 2025-10-08T09:44:21.409+0000 7f7a2550b740 -1 bluestore(/var/lib/ceph/osd/ceph-1/) _read_fsid unparsable uuid
Oct  8 05:44:23 np0005475493 funny_noether[80026]: --> ceph-volume lvm prepare successful for: ceph_vg0/ceph_lv0
Oct  8 05:44:23 np0005475493 funny_noether[80026]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Oct  8 05:44:23 np0005475493 funny_noether[80026]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Oct  8 05:44:24 np0005475493 funny_noether[80026]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-1/block
Oct  8 05:44:24 np0005475493 funny_noether[80026]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Oct  8 05:44:24 np0005475493 funny_noether[80026]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Oct  8 05:44:24 np0005475493 funny_noether[80026]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Oct  8 05:44:24 np0005475493 funny_noether[80026]: --> ceph-volume lvm activate successful for osd ID: 1
Oct  8 05:44:24 np0005475493 funny_noether[80026]: --> ceph-volume lvm create successful for: ceph_vg0/ceph_lv0
Oct  8 05:44:24 np0005475493 systemd[1]: libpod-35a5936dc2a6f8d35c4b48fc7d5dd1a62fc46a948e13f8a79b84dfe0328b76fa.scope: Deactivated successfully.
Oct  8 05:44:24 np0005475493 systemd[1]: libpod-35a5936dc2a6f8d35c4b48fc7d5dd1a62fc46a948e13f8a79b84dfe0328b76fa.scope: Consumed 2.000s CPU time.
Oct  8 05:44:24 np0005475493 podman[80009]: 2025-10-08 09:44:24.194099521 +0000 UTC m=+4.590498846 container died 35a5936dc2a6f8d35c4b48fc7d5dd1a62fc46a948e13f8a79b84dfe0328b76fa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_noether, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  8 05:44:24 np0005475493 systemd[1]: var-lib-containers-storage-overlay-6124af7819ea7b990535e4686b9bb97a999d82404ad96c5eac3c4efccbd160ed-merged.mount: Deactivated successfully.
Oct  8 05:44:24 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v28: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct  8 05:44:24 np0005475493 podman[80009]: 2025-10-08 09:44:24.320189814 +0000 UTC m=+4.716589129 container remove 35a5936dc2a6f8d35c4b48fc7d5dd1a62fc46a948e13f8a79b84dfe0328b76fa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_noether, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  8 05:44:24 np0005475493 systemd[1]: libpod-conmon-35a5936dc2a6f8d35c4b48fc7d5dd1a62fc46a948e13f8a79b84dfe0328b76fa.scope: Deactivated successfully.
Oct  8 05:44:24 np0005475493 podman[81096]: 2025-10-08 09:44:24.883022731 +0000 UTC m=+0.035488542 container create 62dace7d2cde38cd87e8969c0b5fcee079b5e8ae48579004b674415dca8e9b85 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_jepsen, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Oct  8 05:44:24 np0005475493 systemd[1]: Started libpod-conmon-62dace7d2cde38cd87e8969c0b5fcee079b5e8ae48579004b674415dca8e9b85.scope.
Oct  8 05:44:24 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:44:24 np0005475493 podman[81096]: 2025-10-08 09:44:24.868267747 +0000 UTC m=+0.020733578 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 05:44:24 np0005475493 podman[81096]: 2025-10-08 09:44:24.966320029 +0000 UTC m=+0.118785870 container init 62dace7d2cde38cd87e8969c0b5fcee079b5e8ae48579004b674415dca8e9b85 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_jepsen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Oct  8 05:44:24 np0005475493 podman[81096]: 2025-10-08 09:44:24.974772729 +0000 UTC m=+0.127238540 container start 62dace7d2cde38cd87e8969c0b5fcee079b5e8ae48579004b674415dca8e9b85 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_jepsen, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  8 05:44:24 np0005475493 podman[81096]: 2025-10-08 09:44:24.977692778 +0000 UTC m=+0.130158589 container attach 62dace7d2cde38cd87e8969c0b5fcee079b5e8ae48579004b674415dca8e9b85 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_jepsen, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Oct  8 05:44:24 np0005475493 quirky_jepsen[81113]: 167 167
Oct  8 05:44:24 np0005475493 systemd[1]: libpod-62dace7d2cde38cd87e8969c0b5fcee079b5e8ae48579004b674415dca8e9b85.scope: Deactivated successfully.
Oct  8 05:44:24 np0005475493 podman[81096]: 2025-10-08 09:44:24.980435833 +0000 UTC m=+0.132901654 container died 62dace7d2cde38cd87e8969c0b5fcee079b5e8ae48579004b674415dca8e9b85 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_jepsen, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Oct  8 05:44:25 np0005475493 systemd[1]: var-lib-containers-storage-overlay-1a3fb35d8c1694bc5513e8969e289cfb01c6ba03f5104e3279a3a4cbdc55aa58-merged.mount: Deactivated successfully.
Oct  8 05:44:25 np0005475493 podman[81096]: 2025-10-08 09:44:25.017544763 +0000 UTC m=+0.170010614 container remove 62dace7d2cde38cd87e8969c0b5fcee079b5e8ae48579004b674415dca8e9b85 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_jepsen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  8 05:44:25 np0005475493 systemd[1]: libpod-conmon-62dace7d2cde38cd87e8969c0b5fcee079b5e8ae48579004b674415dca8e9b85.scope: Deactivated successfully.
Oct  8 05:44:25 np0005475493 podman[81137]: 2025-10-08 09:44:25.222224309 +0000 UTC m=+0.053637508 container create 8b458d804e25d65a5d2f0daddd57607e05a74c35152c08cda0cb6b7d94c0f62e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_boyd, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct  8 05:44:25 np0005475493 systemd[1]: Started libpod-conmon-8b458d804e25d65a5d2f0daddd57607e05a74c35152c08cda0cb6b7d94c0f62e.scope.
Oct  8 05:44:25 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:44:25 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53bc24bd1ea4b2dd7a1b09712933b408d3edf565bc947b7d935d5006f2afbc16/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  8 05:44:25 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53bc24bd1ea4b2dd7a1b09712933b408d3edf565bc947b7d935d5006f2afbc16/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 05:44:25 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53bc24bd1ea4b2dd7a1b09712933b408d3edf565bc947b7d935d5006f2afbc16/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 05:44:25 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53bc24bd1ea4b2dd7a1b09712933b408d3edf565bc947b7d935d5006f2afbc16/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  8 05:44:25 np0005475493 podman[81137]: 2025-10-08 09:44:25.200354687 +0000 UTC m=+0.031767886 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 05:44:25 np0005475493 podman[81137]: 2025-10-08 09:44:25.307523249 +0000 UTC m=+0.138936418 container init 8b458d804e25d65a5d2f0daddd57607e05a74c35152c08cda0cb6b7d94c0f62e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_boyd, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Oct  8 05:44:25 np0005475493 podman[81137]: 2025-10-08 09:44:25.313120211 +0000 UTC m=+0.144533410 container start 8b458d804e25d65a5d2f0daddd57607e05a74c35152c08cda0cb6b7d94c0f62e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_boyd, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  8 05:44:25 np0005475493 podman[81137]: 2025-10-08 09:44:25.31667577 +0000 UTC m=+0.148089009 container attach 8b458d804e25d65a5d2f0daddd57607e05a74c35152c08cda0cb6b7d94c0f62e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_boyd, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct  8 05:44:25 np0005475493 happy_boyd[81153]: {
Oct  8 05:44:25 np0005475493 happy_boyd[81153]:    "1": [
Oct  8 05:44:25 np0005475493 happy_boyd[81153]:        {
Oct  8 05:44:25 np0005475493 happy_boyd[81153]:            "devices": [
Oct  8 05:44:25 np0005475493 happy_boyd[81153]:                "/dev/loop3"
Oct  8 05:44:25 np0005475493 happy_boyd[81153]:            ],
Oct  8 05:44:25 np0005475493 happy_boyd[81153]:            "lv_name": "ceph_lv0",
Oct  8 05:44:25 np0005475493 happy_boyd[81153]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  8 05:44:25 np0005475493 happy_boyd[81153]:            "lv_size": "21470642176",
Oct  8 05:44:25 np0005475493 happy_boyd[81153]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=787292cc-8154-50c4-9e00-e9be3e817149,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=85fe3e7b-5e0f-4a19-934c-310215b2e933,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct  8 05:44:25 np0005475493 happy_boyd[81153]:            "lv_uuid": "znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj",
Oct  8 05:44:25 np0005475493 happy_boyd[81153]:            "name": "ceph_lv0",
Oct  8 05:44:25 np0005475493 happy_boyd[81153]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  8 05:44:25 np0005475493 happy_boyd[81153]:            "tags": {
Oct  8 05:44:25 np0005475493 happy_boyd[81153]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  8 05:44:25 np0005475493 happy_boyd[81153]:                "ceph.block_uuid": "znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj",
Oct  8 05:44:25 np0005475493 happy_boyd[81153]:                "ceph.cephx_lockbox_secret": "",
Oct  8 05:44:25 np0005475493 happy_boyd[81153]:                "ceph.cluster_fsid": "787292cc-8154-50c4-9e00-e9be3e817149",
Oct  8 05:44:25 np0005475493 happy_boyd[81153]:                "ceph.cluster_name": "ceph",
Oct  8 05:44:25 np0005475493 happy_boyd[81153]:                "ceph.crush_device_class": "",
Oct  8 05:44:25 np0005475493 happy_boyd[81153]:                "ceph.encrypted": "0",
Oct  8 05:44:25 np0005475493 happy_boyd[81153]:                "ceph.osd_fsid": "85fe3e7b-5e0f-4a19-934c-310215b2e933",
Oct  8 05:44:25 np0005475493 happy_boyd[81153]:                "ceph.osd_id": "1",
Oct  8 05:44:25 np0005475493 happy_boyd[81153]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  8 05:44:25 np0005475493 happy_boyd[81153]:                "ceph.type": "block",
Oct  8 05:44:25 np0005475493 happy_boyd[81153]:                "ceph.vdo": "0",
Oct  8 05:44:25 np0005475493 happy_boyd[81153]:                "ceph.with_tpm": "0"
Oct  8 05:44:25 np0005475493 happy_boyd[81153]:            },
Oct  8 05:44:25 np0005475493 happy_boyd[81153]:            "type": "block",
Oct  8 05:44:25 np0005475493 happy_boyd[81153]:            "vg_name": "ceph_vg0"
Oct  8 05:44:25 np0005475493 happy_boyd[81153]:        }
Oct  8 05:44:25 np0005475493 happy_boyd[81153]:    ]
Oct  8 05:44:25 np0005475493 happy_boyd[81153]: }
Oct  8 05:44:25 np0005475493 systemd[1]: libpod-8b458d804e25d65a5d2f0daddd57607e05a74c35152c08cda0cb6b7d94c0f62e.scope: Deactivated successfully.
Oct  8 05:44:25 np0005475493 podman[81137]: 2025-10-08 09:44:25.626702202 +0000 UTC m=+0.458115391 container died 8b458d804e25d65a5d2f0daddd57607e05a74c35152c08cda0cb6b7d94c0f62e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_boyd, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct  8 05:44:25 np0005475493 systemd[1]: var-lib-containers-storage-overlay-53bc24bd1ea4b2dd7a1b09712933b408d3edf565bc947b7d935d5006f2afbc16-merged.mount: Deactivated successfully.
Oct  8 05:44:25 np0005475493 podman[81137]: 2025-10-08 09:44:25.68001807 +0000 UTC m=+0.511431239 container remove 8b458d804e25d65a5d2f0daddd57607e05a74c35152c08cda0cb6b7d94c0f62e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_boyd, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  8 05:44:25 np0005475493 systemd[1]: libpod-conmon-8b458d804e25d65a5d2f0daddd57607e05a74c35152c08cda0cb6b7d94c0f62e.scope: Deactivated successfully.
Oct  8 05:44:25 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.1"} v 0)
Oct  8 05:44:25 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Oct  8 05:44:25 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  8 05:44:25 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  8 05:44:25 np0005475493 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Deploying daemon osd.1 on compute-0
Oct  8 05:44:25 np0005475493 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Deploying daemon osd.1 on compute-0
Oct  8 05:44:26 np0005475493 podman[81263]: 2025-10-08 09:44:26.265251305 +0000 UTC m=+0.042025142 container create 8743eb1f0ac2df5abb65a54a4b666efca15f18be3f20e8bba97854e361c017ce (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_gagarin, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  8 05:44:26 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v29: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct  8 05:44:26 np0005475493 systemd[1]: Started libpod-conmon-8743eb1f0ac2df5abb65a54a4b666efca15f18be3f20e8bba97854e361c017ce.scope.
Oct  8 05:44:26 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:44:26 np0005475493 podman[81263]: 2025-10-08 09:44:26.341119145 +0000 UTC m=+0.117893002 container init 8743eb1f0ac2df5abb65a54a4b666efca15f18be3f20e8bba97854e361c017ce (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_gagarin, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct  8 05:44:26 np0005475493 podman[81263]: 2025-10-08 09:44:26.248382146 +0000 UTC m=+0.025156003 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 05:44:26 np0005475493 podman[81263]: 2025-10-08 09:44:26.347870252 +0000 UTC m=+0.124644079 container start 8743eb1f0ac2df5abb65a54a4b666efca15f18be3f20e8bba97854e361c017ce (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_gagarin, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  8 05:44:26 np0005475493 podman[81263]: 2025-10-08 09:44:26.351048329 +0000 UTC m=+0.127822196 container attach 8743eb1f0ac2df5abb65a54a4b666efca15f18be3f20e8bba97854e361c017ce (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_gagarin, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  8 05:44:26 np0005475493 goofy_gagarin[81280]: 167 167
Oct  8 05:44:26 np0005475493 systemd[1]: libpod-8743eb1f0ac2df5abb65a54a4b666efca15f18be3f20e8bba97854e361c017ce.scope: Deactivated successfully.
Oct  8 05:44:26 np0005475493 conmon[81280]: conmon 8743eb1f0ac2df5abb65 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8743eb1f0ac2df5abb65a54a4b666efca15f18be3f20e8bba97854e361c017ce.scope/container/memory.events
Oct  8 05:44:26 np0005475493 podman[81263]: 2025-10-08 09:44:26.352581807 +0000 UTC m=+0.129355634 container died 8743eb1f0ac2df5abb65a54a4b666efca15f18be3f20e8bba97854e361c017ce (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_gagarin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct  8 05:44:26 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.0"} v 0)
Oct  8 05:44:26 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Oct  8 05:44:26 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  8 05:44:26 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  8 05:44:26 np0005475493 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Deploying daemon osd.0 on compute-1
Oct  8 05:44:26 np0005475493 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Deploying daemon osd.0 on compute-1
Oct  8 05:44:26 np0005475493 systemd[1]: var-lib-containers-storage-overlay-acb57bd8fdc8eb223ca11fecd1ff4250a3d305d5b50410a827e0588dd78d8e28-merged.mount: Deactivated successfully.
Oct  8 05:44:26 np0005475493 podman[81263]: 2025-10-08 09:44:26.384448615 +0000 UTC m=+0.161222442 container remove 8743eb1f0ac2df5abb65a54a4b666efca15f18be3f20e8bba97854e361c017ce (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_gagarin, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct  8 05:44:26 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Oct  8 05:44:26 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Oct  8 05:44:26 np0005475493 systemd[1]: libpod-conmon-8743eb1f0ac2df5abb65a54a4b666efca15f18be3f20e8bba97854e361c017ce.scope: Deactivated successfully.
Oct  8 05:44:26 np0005475493 podman[81309]: 2025-10-08 09:44:26.665063524 +0000 UTC m=+0.044596720 container create ec3e73eb84d5ef78be6221f5194046ca6d05f340442e8d023bb0c4e8c2d7016b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-osd-1-activate-test, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  8 05:44:26 np0005475493 systemd[1]: Started libpod-conmon-ec3e73eb84d5ef78be6221f5194046ca6d05f340442e8d023bb0c4e8c2d7016b.scope.
Oct  8 05:44:26 np0005475493 podman[81309]: 2025-10-08 09:44:26.641657746 +0000 UTC m=+0.021190942 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 05:44:26 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:44:26 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46f18965360352602a8510ec931733efae88b6c6a6d2a16de5eddb55ebfacd35/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  8 05:44:26 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46f18965360352602a8510ec931733efae88b6c6a6d2a16de5eddb55ebfacd35/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 05:44:26 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46f18965360352602a8510ec931733efae88b6c6a6d2a16de5eddb55ebfacd35/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 05:44:26 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46f18965360352602a8510ec931733efae88b6c6a6d2a16de5eddb55ebfacd35/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  8 05:44:26 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46f18965360352602a8510ec931733efae88b6c6a6d2a16de5eddb55ebfacd35/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Oct  8 05:44:26 np0005475493 podman[81309]: 2025-10-08 09:44:26.75898695 +0000 UTC m=+0.138520126 container init ec3e73eb84d5ef78be6221f5194046ca6d05f340442e8d023bb0c4e8c2d7016b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-osd-1-activate-test, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Oct  8 05:44:26 np0005475493 podman[81309]: 2025-10-08 09:44:26.767834461 +0000 UTC m=+0.147367657 container start ec3e73eb84d5ef78be6221f5194046ca6d05f340442e8d023bb0c4e8c2d7016b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-osd-1-activate-test, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Oct  8 05:44:26 np0005475493 podman[81309]: 2025-10-08 09:44:26.771243386 +0000 UTC m=+0.150776552 container attach ec3e73eb84d5ef78be6221f5194046ca6d05f340442e8d023bb0c4e8c2d7016b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-osd-1-activate-test, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  8 05:44:26 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-osd-1-activate-test[81326]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_FSID]
Oct  8 05:44:26 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-osd-1-activate-test[81326]:                            [--no-systemd] [--no-tmpfs]
Oct  8 05:44:26 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-osd-1-activate-test[81326]: ceph-volume activate: error: unrecognized arguments: --bad-option
Oct  8 05:44:26 np0005475493 systemd[1]: libpod-ec3e73eb84d5ef78be6221f5194046ca6d05f340442e8d023bb0c4e8c2d7016b.scope: Deactivated successfully.
Oct  8 05:44:26 np0005475493 podman[81309]: 2025-10-08 09:44:26.937451681 +0000 UTC m=+0.316984837 container died ec3e73eb84d5ef78be6221f5194046ca6d05f340442e8d023bb0c4e8c2d7016b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-osd-1-activate-test, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  8 05:44:26 np0005475493 systemd[1]: var-lib-containers-storage-overlay-46f18965360352602a8510ec931733efae88b6c6a6d2a16de5eddb55ebfacd35-merged.mount: Deactivated successfully.
Oct  8 05:44:26 np0005475493 podman[81309]: 2025-10-08 09:44:26.985570219 +0000 UTC m=+0.365103375 container remove ec3e73eb84d5ef78be6221f5194046ca6d05f340442e8d023bb0c4e8c2d7016b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-osd-1-activate-test, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Oct  8 05:44:26 np0005475493 systemd[1]: libpod-conmon-ec3e73eb84d5ef78be6221f5194046ca6d05f340442e8d023bb0c4e8c2d7016b.scope: Deactivated successfully.
Oct  8 05:44:27 np0005475493 systemd[1]: Reloading.
Oct  8 05:44:27 np0005475493 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  8 05:44:27 np0005475493 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  8 05:44:27 np0005475493 ceph-mon[73572]: Deploying daemon osd.1 on compute-0
Oct  8 05:44:27 np0005475493 ceph-mon[73572]: Deploying daemon osd.0 on compute-1
Oct  8 05:44:27 np0005475493 systemd[1]: Reloading.
Oct  8 05:44:27 np0005475493 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  8 05:44:27 np0005475493 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  8 05:44:27 np0005475493 systemd[1]: Starting Ceph osd.1 for 787292cc-8154-50c4-9e00-e9be3e817149...
Oct  8 05:44:28 np0005475493 podman[81484]: 2025-10-08 09:44:28.038944332 +0000 UTC m=+0.039761881 container create 0215da1036d559617466acc3522c3d2170d4960f1c9a8dac01252aa6de561452 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-osd-1-activate, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Oct  8 05:44:28 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:44:28 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b49b957306e541b17dea64faece7a941ec8b60bc2c382f0551edaff7b9a0e2c4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  8 05:44:28 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b49b957306e541b17dea64faece7a941ec8b60bc2c382f0551edaff7b9a0e2c4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 05:44:28 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b49b957306e541b17dea64faece7a941ec8b60bc2c382f0551edaff7b9a0e2c4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 05:44:28 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b49b957306e541b17dea64faece7a941ec8b60bc2c382f0551edaff7b9a0e2c4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  8 05:44:28 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b49b957306e541b17dea64faece7a941ec8b60bc2c382f0551edaff7b9a0e2c4/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Oct  8 05:44:28 np0005475493 podman[81484]: 2025-10-08 09:44:28.109258432 +0000 UTC m=+0.110076011 container init 0215da1036d559617466acc3522c3d2170d4960f1c9a8dac01252aa6de561452 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-osd-1-activate, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct  8 05:44:28 np0005475493 podman[81484]: 2025-10-08 09:44:28.114642587 +0000 UTC m=+0.115460136 container start 0215da1036d559617466acc3522c3d2170d4960f1c9a8dac01252aa6de561452 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-osd-1-activate, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Oct  8 05:44:28 np0005475493 podman[81484]: 2025-10-08 09:44:28.021066194 +0000 UTC m=+0.021883793 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 05:44:28 np0005475493 podman[81484]: 2025-10-08 09:44:28.126346857 +0000 UTC m=+0.127164406 container attach 0215da1036d559617466acc3522c3d2170d4960f1c9a8dac01252aa6de561452 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-osd-1-activate, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  8 05:44:28 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-osd-1-activate[81500]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct  8 05:44:28 np0005475493 bash[81484]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct  8 05:44:28 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v30: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct  8 05:44:28 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-osd-1-activate[81500]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct  8 05:44:28 np0005475493 bash[81484]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct  8 05:44:28 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e5 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  8 05:44:28 np0005475493 lvm[81582]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct  8 05:44:28 np0005475493 lvm[81582]: VG ceph_vg0 finished
Oct  8 05:44:28 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-osd-1-activate[81500]: --> Failed to activate via raw: did not find any matching OSD to activate
Oct  8 05:44:28 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-osd-1-activate[81500]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct  8 05:44:28 np0005475493 bash[81484]: --> Failed to activate via raw: did not find any matching OSD to activate
Oct  8 05:44:28 np0005475493 bash[81484]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct  8 05:44:28 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-osd-1-activate[81500]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct  8 05:44:28 np0005475493 bash[81484]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct  8 05:44:28 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-osd-1-activate[81500]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Oct  8 05:44:28 np0005475493 bash[81484]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Oct  8 05:44:28 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-osd-1-activate[81500]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Oct  8 05:44:28 np0005475493 bash[81484]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Oct  8 05:44:29 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-osd-1-activate[81500]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-1/block
Oct  8 05:44:29 np0005475493 bash[81484]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-1/block
Oct  8 05:44:29 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-osd-1-activate[81500]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Oct  8 05:44:29 np0005475493 bash[81484]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Oct  8 05:44:29 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-osd-1-activate[81500]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Oct  8 05:44:29 np0005475493 bash[81484]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Oct  8 05:44:29 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-osd-1-activate[81500]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Oct  8 05:44:29 np0005475493 bash[81484]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Oct  8 05:44:29 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-osd-1-activate[81500]: --> ceph-volume lvm activate successful for osd ID: 1
Oct  8 05:44:29 np0005475493 bash[81484]: --> ceph-volume lvm activate successful for osd ID: 1
Oct  8 05:44:29 np0005475493 systemd[1]: libpod-0215da1036d559617466acc3522c3d2170d4960f1c9a8dac01252aa6de561452.scope: Deactivated successfully.
Oct  8 05:44:29 np0005475493 systemd[1]: libpod-0215da1036d559617466acc3522c3d2170d4960f1c9a8dac01252aa6de561452.scope: Consumed 1.406s CPU time.
Oct  8 05:44:29 np0005475493 podman[81484]: 2025-10-08 09:44:29.392273489 +0000 UTC m=+1.393091078 container died 0215da1036d559617466acc3522c3d2170d4960f1c9a8dac01252aa6de561452 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-osd-1-activate, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  8 05:44:29 np0005475493 systemd[1]: var-lib-containers-storage-overlay-b49b957306e541b17dea64faece7a941ec8b60bc2c382f0551edaff7b9a0e2c4-merged.mount: Deactivated successfully.
Oct  8 05:44:29 np0005475493 podman[81484]: 2025-10-08 09:44:29.453884901 +0000 UTC m=+1.454702490 container remove 0215da1036d559617466acc3522c3d2170d4960f1c9a8dac01252aa6de561452 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-osd-1-activate, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  8 05:44:29 np0005475493 podman[81732]: 2025-10-08 09:44:29.681789932 +0000 UTC m=+0.056053593 container create 7ace3f50e48c85dfbeac24b6a9c8de138ec140013d8daa3351908e2ceb79b4c2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-osd-1, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  8 05:44:29 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e9f70dbeabb77e0e5f55a1a90bae9d7c73b770d7aa16c3b0f593a39c496f154/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  8 05:44:29 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e9f70dbeabb77e0e5f55a1a90bae9d7c73b770d7aa16c3b0f593a39c496f154/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 05:44:29 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e9f70dbeabb77e0e5f55a1a90bae9d7c73b770d7aa16c3b0f593a39c496f154/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 05:44:29 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e9f70dbeabb77e0e5f55a1a90bae9d7c73b770d7aa16c3b0f593a39c496f154/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  8 05:44:29 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e9f70dbeabb77e0e5f55a1a90bae9d7c73b770d7aa16c3b0f593a39c496f154/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Oct  8 05:44:29 np0005475493 podman[81732]: 2025-10-08 09:44:29.738657998 +0000 UTC m=+0.112921679 container init 7ace3f50e48c85dfbeac24b6a9c8de138ec140013d8daa3351908e2ceb79b4c2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-osd-1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default)
Oct  8 05:44:29 np0005475493 podman[81732]: 2025-10-08 09:44:29.654772691 +0000 UTC m=+0.029036422 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 05:44:29 np0005475493 podman[81732]: 2025-10-08 09:44:29.751334808 +0000 UTC m=+0.125598459 container start 7ace3f50e48c85dfbeac24b6a9c8de138ec140013d8daa3351908e2ceb79b4c2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-osd-1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Oct  8 05:44:29 np0005475493 bash[81732]: 7ace3f50e48c85dfbeac24b6a9c8de138ec140013d8daa3351908e2ceb79b4c2
Oct  8 05:44:29 np0005475493 systemd[1]: Started Ceph osd.1 for 787292cc-8154-50c4-9e00-e9be3e817149.
Oct  8 05:44:29 np0005475493 ceph-osd[81751]: set uid:gid to 167:167 (ceph:ceph)
Oct  8 05:44:29 np0005475493 ceph-osd[81751]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-osd, pid 2
Oct  8 05:44:29 np0005475493 ceph-osd[81751]: pidfile_write: ignore empty --pid-file
Oct  8 05:44:29 np0005475493 ceph-osd[81751]: bdev(0x559f28f21800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Oct  8 05:44:29 np0005475493 ceph-osd[81751]: bdev(0x559f28f21800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Oct  8 05:44:29 np0005475493 ceph-osd[81751]: bdev(0x559f28f21800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct  8 05:44:29 np0005475493 ceph-osd[81751]: bdev(0x559f28f21800 /var/lib/ceph/osd/ceph-1/block) close
Oct  8 05:44:29 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  8 05:44:29 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:44:29 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  8 05:44:29 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:44:30 np0005475493 ceph-osd[81751]: bdev(0x559f28f21800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Oct  8 05:44:30 np0005475493 ceph-osd[81751]: bdev(0x559f28f21800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Oct  8 05:44:30 np0005475493 ceph-osd[81751]: bdev(0x559f28f21800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct  8 05:44:30 np0005475493 ceph-osd[81751]: bdev(0x559f28f21800 /var/lib/ceph/osd/ceph-1/block) close
Oct  8 05:44:30 np0005475493 ceph-osd[81751]: bdev(0x559f28f21800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Oct  8 05:44:30 np0005475493 ceph-osd[81751]: bdev(0x559f28f21800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Oct  8 05:44:30 np0005475493 ceph-osd[81751]: bdev(0x559f28f21800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct  8 05:44:30 np0005475493 ceph-osd[81751]: bdev(0x559f28f21800 /var/lib/ceph/osd/ceph-1/block) close
Oct  8 05:44:30 np0005475493 ceph-osd[81751]: bdev(0x559f28f21800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Oct  8 05:44:30 np0005475493 ceph-osd[81751]: bdev(0x559f28f21800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Oct  8 05:44:30 np0005475493 ceph-osd[81751]: bdev(0x559f28f21800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct  8 05:44:30 np0005475493 ceph-osd[81751]: bdev(0x559f28f21800 /var/lib/ceph/osd/ceph-1/block) close
Oct  8 05:44:30 np0005475493 ceph-osd[81751]: bdev(0x559f28f21800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Oct  8 05:44:30 np0005475493 ceph-osd[81751]: bdev(0x559f28f21800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Oct  8 05:44:30 np0005475493 ceph-osd[81751]: bdev(0x559f28f21800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct  8 05:44:30 np0005475493 ceph-osd[81751]: bdev(0x559f28f21800 /var/lib/ceph/osd/ceph-1/block) close
Oct  8 05:44:30 np0005475493 ceph-osd[81751]: bdev(0x559f28f21800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Oct  8 05:44:30 np0005475493 ceph-osd[81751]: bdev(0x559f28f21800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Oct  8 05:44:30 np0005475493 ceph-osd[81751]: bdev(0x559f28f21800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct  8 05:44:30 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct  8 05:44:30 np0005475493 ceph-osd[81751]: bdev(0x559f28f21c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Oct  8 05:44:30 np0005475493 ceph-osd[81751]: bdev(0x559f28f21c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Oct  8 05:44:30 np0005475493 ceph-osd[81751]: bdev(0x559f28f21c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct  8 05:44:30 np0005475493 ceph-osd[81751]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Oct  8 05:44:30 np0005475493 ceph-osd[81751]: bdev(0x559f28f21c00 /var/lib/ceph/osd/ceph-1/block) close
Oct  8 05:44:30 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v31: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct  8 05:44:30 np0005475493 podman[81874]: 2025-10-08 09:44:30.33518262 +0000 UTC m=+0.039238566 container create 415d3cdec75380d9adcbfab0184402fba6e8b694ac86d8a8cbfc307409f00d2a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_albattani, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  8 05:44:30 np0005475493 systemd[1]: Started libpod-conmon-415d3cdec75380d9adcbfab0184402fba6e8b694ac86d8a8cbfc307409f00d2a.scope.
Oct  8 05:44:30 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:44:30 np0005475493 podman[81874]: 2025-10-08 09:44:30.413430153 +0000 UTC m=+0.117486109 container init 415d3cdec75380d9adcbfab0184402fba6e8b694ac86d8a8cbfc307409f00d2a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_albattani, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  8 05:44:30 np0005475493 podman[81874]: 2025-10-08 09:44:30.318867449 +0000 UTC m=+0.022923415 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 05:44:30 np0005475493 podman[81874]: 2025-10-08 09:44:30.420079548 +0000 UTC m=+0.124135494 container start 415d3cdec75380d9adcbfab0184402fba6e8b694ac86d8a8cbfc307409f00d2a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_albattani, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  8 05:44:30 np0005475493 podman[81874]: 2025-10-08 09:44:30.422986897 +0000 UTC m=+0.127042843 container attach 415d3cdec75380d9adcbfab0184402fba6e8b694ac86d8a8cbfc307409f00d2a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_albattani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct  8 05:44:30 np0005475493 charming_albattani[81904]: 167 167
Oct  8 05:44:30 np0005475493 systemd[1]: libpod-415d3cdec75380d9adcbfab0184402fba6e8b694ac86d8a8cbfc307409f00d2a.scope: Deactivated successfully.
Oct  8 05:44:30 np0005475493 podman[81874]: 2025-10-08 09:44:30.427385462 +0000 UTC m=+0.131441418 container died 415d3cdec75380d9adcbfab0184402fba6e8b694ac86d8a8cbfc307409f00d2a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_albattani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct  8 05:44:30 np0005475493 systemd[1]: var-lib-containers-storage-overlay-c50712efa9855408ed3ff6e736fb635510267289aa7a63e6a482701caff1000a-merged.mount: Deactivated successfully.
Oct  8 05:44:30 np0005475493 ceph-osd[81751]: bdev(0x559f28f21800 /var/lib/ceph/osd/ceph-1/block) close
Oct  8 05:44:30 np0005475493 podman[81874]: 2025-10-08 09:44:30.465252985 +0000 UTC m=+0.169308931 container remove 415d3cdec75380d9adcbfab0184402fba6e8b694ac86d8a8cbfc307409f00d2a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_albattani, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct  8 05:44:30 np0005475493 python3[81901]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 787292cc-8154-50c4-9e00-e9be3e817149 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  8 05:44:30 np0005475493 systemd[1]: libpod-conmon-415d3cdec75380d9adcbfab0184402fba6e8b694ac86d8a8cbfc307409f00d2a.scope: Deactivated successfully.
Oct  8 05:44:30 np0005475493 podman[81925]: 2025-10-08 09:44:30.525132034 +0000 UTC m=+0.040500405 container create d07920120f5b80baec01ae82ad87c69dfaadb3cbf8c896183a73604ebdbb84b4 (image=quay.io/ceph/ceph:v19, name=bold_wing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2)
Oct  8 05:44:30 np0005475493 systemd[1]: Started libpod-conmon-d07920120f5b80baec01ae82ad87c69dfaadb3cbf8c896183a73604ebdbb84b4.scope.
Oct  8 05:44:30 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:44:30 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe875516d91d975b32dbad68a0295fd211e917c03c1f7b1689e524b5990f6600/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 05:44:30 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe875516d91d975b32dbad68a0295fd211e917c03c1f7b1689e524b5990f6600/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 05:44:30 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe875516d91d975b32dbad68a0295fd211e917c03c1f7b1689e524b5990f6600/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct  8 05:44:30 np0005475493 podman[81925]: 2025-10-08 09:44:30.506464571 +0000 UTC m=+0.021832972 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  8 05:44:30 np0005475493 podman[81925]: 2025-10-08 09:44:30.611711663 +0000 UTC m=+0.127080054 container init d07920120f5b80baec01ae82ad87c69dfaadb3cbf8c896183a73604ebdbb84b4 (image=quay.io/ceph/ceph:v19, name=bold_wing, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  8 05:44:30 np0005475493 podman[81925]: 2025-10-08 09:44:30.617271554 +0000 UTC m=+0.132639925 container start d07920120f5b80baec01ae82ad87c69dfaadb3cbf8c896183a73604ebdbb84b4 (image=quay.io/ceph/ceph:v19, name=bold_wing, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct  8 05:44:30 np0005475493 podman[81925]: 2025-10-08 09:44:30.630418278 +0000 UTC m=+0.145786659 container attach d07920120f5b80baec01ae82ad87c69dfaadb3cbf8c896183a73604ebdbb84b4 (image=quay.io/ceph/ceph:v19, name=bold_wing, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  8 05:44:30 np0005475493 podman[81950]: 2025-10-08 09:44:30.643997095 +0000 UTC m=+0.055796954 container create 1df969b0c50f9d66c35a75aa121c358ea471ac1f87acb1ba182e9d4b65c45859 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_hoover, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True)
Oct  8 05:44:30 np0005475493 systemd[1]: Started libpod-conmon-1df969b0c50f9d66c35a75aa121c358ea471ac1f87acb1ba182e9d4b65c45859.scope.
Oct  8 05:44:30 np0005475493 ceph-osd[81751]: starting osd.1 osd_data /var/lib/ceph/osd/ceph-1 /var/lib/ceph/osd/ceph-1/journal
Oct  8 05:44:30 np0005475493 podman[81950]: 2025-10-08 09:44:30.625203188 +0000 UTC m=+0.037003067 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 05:44:30 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:44:30 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67337a3998292ee11e76d56a2d3288db98d1f5bd28c97ec2e33d8056edeb96df/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  8 05:44:30 np0005475493 ceph-osd[81751]: load: jerasure load: lrc 
Oct  8 05:44:30 np0005475493 ceph-osd[81751]: bdev(0x559f29dbcc00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Oct  8 05:44:30 np0005475493 ceph-osd[81751]: bdev(0x559f29dbcc00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Oct  8 05:44:30 np0005475493 ceph-osd[81751]: bdev(0x559f29dbcc00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct  8 05:44:30 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct  8 05:44:30 np0005475493 ceph-osd[81751]: bdev(0x559f29dbcc00 /var/lib/ceph/osd/ceph-1/block) close
Oct  8 05:44:30 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67337a3998292ee11e76d56a2d3288db98d1f5bd28c97ec2e33d8056edeb96df/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 05:44:30 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67337a3998292ee11e76d56a2d3288db98d1f5bd28c97ec2e33d8056edeb96df/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 05:44:30 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67337a3998292ee11e76d56a2d3288db98d1f5bd28c97ec2e33d8056edeb96df/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  8 05:44:30 np0005475493 podman[81950]: 2025-10-08 09:44:30.736915029 +0000 UTC m=+0.148714918 container init 1df969b0c50f9d66c35a75aa121c358ea471ac1f87acb1ba182e9d4b65c45859 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_hoover, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Oct  8 05:44:30 np0005475493 podman[81950]: 2025-10-08 09:44:30.742917634 +0000 UTC m=+0.154717493 container start 1df969b0c50f9d66c35a75aa121c358ea471ac1f87acb1ba182e9d4b65c45859 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_hoover, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Oct  8 05:44:30 np0005475493 podman[81950]: 2025-10-08 09:44:30.74607324 +0000 UTC m=+0.157873099 container attach 1df969b0c50f9d66c35a75aa121c358ea471ac1f87acb1ba182e9d4b65c45859 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_hoover, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  8 05:44:30 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:44:30 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:44:30 np0005475493 ceph-osd[81751]: bdev(0x559f29dbcc00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Oct  8 05:44:30 np0005475493 ceph-osd[81751]: bdev(0x559f29dbcc00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Oct  8 05:44:30 np0005475493 ceph-osd[81751]: bdev(0x559f29dbcc00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct  8 05:44:30 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct  8 05:44:30 np0005475493 ceph-osd[81751]: bdev(0x559f29dbcc00 /var/lib/ceph/osd/ceph-1/block) close
Oct  8 05:44:31 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Oct  8 05:44:31 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3155317862' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Oct  8 05:44:31 np0005475493 bold_wing[81946]: 
Oct  8 05:44:31 np0005475493 bold_wing[81946]: {"fsid":"787292cc-8154-50c4-9e00-e9be3e817149","health":{"status":"HEALTH_WARN","checks":{"CEPHADM_APPLY_SPEC_FAIL":{"severity":"HEALTH_WARN","summary":{"message":"Failed to apply 2 service(s): mon,mgr","count":2},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":87,"monmap":{"epoch":1,"min_mon_release_name":"squid","num_mons":1},"osdmap":{"epoch":5,"num_osds":2,"num_up_osds":0,"osd_up_since":0,"num_in_osds":2,"osd_in_since":1759916660,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":0,"bytes_avail":0,"bytes_total":0},"fsmap":{"epoch":1,"btime":"2025-10-08T09:43:01:374245+0000","by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":2,"modified":"2025-10-08T09:44:24.295017+0000","services":{}},"progress_events":{}}
Oct  8 05:44:31 np0005475493 systemd[1]: libpod-d07920120f5b80baec01ae82ad87c69dfaadb3cbf8c896183a73604ebdbb84b4.scope: Deactivated successfully.
Oct  8 05:44:31 np0005475493 podman[81925]: 2025-10-08 09:44:31.058700663 +0000 UTC m=+0.574069054 container died d07920120f5b80baec01ae82ad87c69dfaadb3cbf8c896183a73604ebdbb84b4 (image=quay.io/ceph/ceph:v19, name=bold_wing, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Oct  8 05:44:31 np0005475493 systemd[1]: var-lib-containers-storage-overlay-fe875516d91d975b32dbad68a0295fd211e917c03c1f7b1689e524b5990f6600-merged.mount: Deactivated successfully.
Oct  8 05:44:31 np0005475493 podman[81925]: 2025-10-08 09:44:31.095824672 +0000 UTC m=+0.611193043 container remove d07920120f5b80baec01ae82ad87c69dfaadb3cbf8c896183a73604ebdbb84b4 (image=quay.io/ceph/ceph:v19, name=bold_wing, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct  8 05:44:31 np0005475493 systemd[1]: libpod-conmon-d07920120f5b80baec01ae82ad87c69dfaadb3cbf8c896183a73604ebdbb84b4.scope: Deactivated successfully.
Oct  8 05:44:31 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct  8 05:44:31 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:44:31 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct  8 05:44:31 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: osd.1:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: bdev(0x559f29dbcc00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: bdev(0x559f29dbcc00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: bdev(0x559f29dbcc00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: bdev(0x559f29dbcc00 /var/lib/ceph/osd/ceph-1/block) close
Oct  8 05:44:31 np0005475493 lvm[82083]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct  8 05:44:31 np0005475493 lvm[82083]: VG ceph_vg0 finished
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: bdev(0x559f29dbcc00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: bdev(0x559f29dbcc00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: bdev(0x559f29dbcc00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: bdev(0x559f29dbcc00 /var/lib/ceph/osd/ceph-1/block) close
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: bdev(0x559f29dbcc00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: bdev(0x559f29dbcc00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: bdev(0x559f29dbcc00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: bdev(0x559f29dbcc00 /var/lib/ceph/osd/ceph-1/block) close
Oct  8 05:44:31 np0005475493 agitated_hoover[81969]: {}
Oct  8 05:44:31 np0005475493 systemd[1]: libpod-1df969b0c50f9d66c35a75aa121c358ea471ac1f87acb1ba182e9d4b65c45859.scope: Deactivated successfully.
Oct  8 05:44:31 np0005475493 systemd[1]: libpod-1df969b0c50f9d66c35a75aa121c358ea471ac1f87acb1ba182e9d4b65c45859.scope: Consumed 1.002s CPU time.
Oct  8 05:44:31 np0005475493 podman[81950]: 2025-10-08 09:44:31.409661762 +0000 UTC m=+0.821461641 container died 1df969b0c50f9d66c35a75aa121c358ea471ac1f87acb1ba182e9d4b65c45859 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_hoover, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True)
Oct  8 05:44:31 np0005475493 systemd[1]: var-lib-containers-storage-overlay-67337a3998292ee11e76d56a2d3288db98d1f5bd28c97ec2e33d8056edeb96df-merged.mount: Deactivated successfully.
Oct  8 05:44:31 np0005475493 podman[81950]: 2025-10-08 09:44:31.451211419 +0000 UTC m=+0.863011278 container remove 1df969b0c50f9d66c35a75aa121c358ea471ac1f87acb1ba182e9d4b65c45859 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_hoover, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Oct  8 05:44:31 np0005475493 systemd[1]: libpod-conmon-1df969b0c50f9d66c35a75aa121c358ea471ac1f87acb1ba182e9d4b65c45859.scope: Deactivated successfully.
Oct  8 05:44:31 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  8 05:44:31 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:44:31 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  8 05:44:31 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: bdev(0x559f29dbcc00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: bdev(0x559f29dbcc00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: bdev(0x559f29dbcc00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: bdev(0x559f29dbd000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: bdev(0x559f29dbd000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: bdev(0x559f29dbd000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: bluefs mount
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: bluefs mount shared_bdev_used = 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: RocksDB version: 7.9.2
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Git sha 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Compile date 2025-07-17 03:12:14
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: DB SUMMARY
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: DB Session ID:  OKB0236OTSDNNJ5ULVKQ
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: CURRENT file:  CURRENT
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: IDENTITY file:  IDENTITY
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                         Options.error_if_exists: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                       Options.create_if_missing: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                         Options.paranoid_checks: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:             Options.flush_verify_memtable_count: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                                     Options.env: 0x559f29d8ddc0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                                      Options.fs: LegacyFileSystem
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                                Options.info_log: 0x559f29d917a0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                Options.max_file_opening_threads: 16
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                              Options.statistics: (nil)
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                               Options.use_fsync: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                       Options.max_log_file_size: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                   Options.log_file_time_to_roll: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                       Options.keep_log_file_num: 1000
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                    Options.recycle_log_file_num: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                         Options.allow_fallocate: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                        Options.allow_mmap_reads: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                       Options.allow_mmap_writes: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                        Options.use_direct_reads: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:          Options.create_missing_column_families: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                              Options.db_log_dir: 
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                                 Options.wal_dir: db.wal
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                Options.table_cache_numshardbits: 6
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                         Options.WAL_ttl_seconds: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                       Options.WAL_size_limit_MB: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:             Options.manifest_preallocation_size: 4194304
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                     Options.is_fd_close_on_exec: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                   Options.advise_random_on_open: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                    Options.db_write_buffer_size: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                    Options.write_buffer_manager: 0x559f29e9aa00
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.access_hint_on_compaction_start: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                      Options.use_adaptive_mutex: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                            Options.rate_limiter: (nil)
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                       Options.wal_recovery_mode: 2
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                  Options.enable_thread_tracking: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                  Options.enable_pipelined_write: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                  Options.unordered_write: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:             Options.write_thread_max_yield_usec: 100
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                               Options.row_cache: None
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                              Options.wal_filter: None
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:             Options.avoid_flush_during_recovery: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:             Options.allow_ingest_behind: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:             Options.two_write_queues: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:             Options.manual_wal_flush: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:             Options.wal_compression: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:             Options.atomic_flush: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                 Options.persist_stats_to_disk: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                 Options.write_dbid_to_manifest: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                 Options.log_readahead_size: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                 Options.best_efforts_recovery: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:             Options.allow_data_in_errors: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:             Options.db_host_id: __hostname__
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:             Options.enforce_single_del_contracts: true
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:             Options.max_background_jobs: 4
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:             Options.max_background_compactions: -1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:             Options.max_subcompactions: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:           Options.writable_file_max_buffer_size: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:             Options.delayed_write_rate : 16777216
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:             Options.max_total_wal_size: 1073741824
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                   Options.stats_dump_period_sec: 600
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                 Options.stats_persist_period_sec: 600
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                          Options.max_open_files: -1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                          Options.bytes_per_sync: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                      Options.wal_bytes_per_sync: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                   Options.strict_bytes_per_sync: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:       Options.compaction_readahead_size: 2097152
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                  Options.max_background_flushes: -1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Compression algorithms supported:
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: #011kZSTD supported: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: #011kXpressCompression supported: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: #011kBZip2Compression supported: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: #011kLZ4Compression supported: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: #011kZlibCompression supported: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: #011kLZ4HCCompression supported: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: #011kSnappyCompression supported: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Fast CRC32 supported: Supported on x86
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: DMutex implementation: pthread_mutex_t
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:        Options.compaction_filter: None
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:        Options.compaction_filter_factory: None
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:  Options.sst_partitioner_factory: None
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559f29d91b60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x559f28fb7350#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:        Options.write_buffer_size: 16777216
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:  Options.max_write_buffer_number: 64
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:          Options.compression: LZ4
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:       Options.prefix_extractor: nullptr
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:             Options.num_levels: 7
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                  Options.compression_opts.level: 32767
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:               Options.compression_opts.strategy: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                  Options.compression_opts.enabled: false
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                        Options.arena_block_size: 1048576
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                Options.disable_auto_compactions: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                   Options.inplace_update_support: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                           Options.bloom_locality: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                    Options.max_successive_merges: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                Options.paranoid_file_checks: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                Options.force_consistency_checks: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                Options.report_bg_io_stats: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                               Options.ttl: 2592000
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                       Options.enable_blob_files: false
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                           Options.min_blob_size: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                          Options.blob_file_size: 268435456
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                Options.blob_file_starting_level: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:           Options.merge_operator: None
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:        Options.compaction_filter: None
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:        Options.compaction_filter_factory: None
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:  Options.sst_partitioner_factory: None
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559f29d91b60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x559f28fb7350#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:        Options.write_buffer_size: 16777216
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:  Options.max_write_buffer_number: 64
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:          Options.compression: LZ4
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:       Options.prefix_extractor: nullptr
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:             Options.num_levels: 7
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                  Options.compression_opts.level: 32767
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:               Options.compression_opts.strategy: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                  Options.compression_opts.enabled: false
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                        Options.arena_block_size: 1048576
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                Options.disable_auto_compactions: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                   Options.inplace_update_support: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                           Options.bloom_locality: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                    Options.max_successive_merges: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                Options.paranoid_file_checks: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                Options.force_consistency_checks: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                Options.report_bg_io_stats: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                               Options.ttl: 2592000
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                       Options.enable_blob_files: false
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                           Options.min_blob_size: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                          Options.blob_file_size: 268435456
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                Options.blob_file_starting_level: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:           Options.merge_operator: None
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:        Options.compaction_filter: None
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:        Options.compaction_filter_factory: None
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:  Options.sst_partitioner_factory: None
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559f29d91b60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x559f28fb7350#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:        Options.write_buffer_size: 16777216
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:  Options.max_write_buffer_number: 64
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:          Options.compression: LZ4
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:       Options.prefix_extractor: nullptr
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:             Options.num_levels: 7
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                  Options.compression_opts.level: 32767
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:               Options.compression_opts.strategy: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                  Options.compression_opts.enabled: false
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                        Options.arena_block_size: 1048576
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                Options.disable_auto_compactions: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                   Options.inplace_update_support: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                           Options.bloom_locality: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                    Options.max_successive_merges: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                Options.paranoid_file_checks: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                Options.force_consistency_checks: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                Options.report_bg_io_stats: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                               Options.ttl: 2592000
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                       Options.enable_blob_files: false
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                           Options.min_blob_size: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                          Options.blob_file_size: 268435456
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                Options.blob_file_starting_level: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:           Options.merge_operator: None
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:        Options.compaction_filter: None
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:        Options.compaction_filter_factory: None
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:  Options.sst_partitioner_factory: None
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559f29d91b60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x559f28fb7350#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:        Options.write_buffer_size: 16777216
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:  Options.max_write_buffer_number: 64
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:          Options.compression: LZ4
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:       Options.prefix_extractor: nullptr
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:             Options.num_levels: 7
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                  Options.compression_opts.level: 32767
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:               Options.compression_opts.strategy: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                  Options.compression_opts.enabled: false
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                        Options.arena_block_size: 1048576
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                Options.disable_auto_compactions: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                   Options.inplace_update_support: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                           Options.bloom_locality: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                    Options.max_successive_merges: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                Options.paranoid_file_checks: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                Options.force_consistency_checks: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                Options.report_bg_io_stats: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                               Options.ttl: 2592000
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                       Options.enable_blob_files: false
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                           Options.min_blob_size: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                          Options.blob_file_size: 268435456
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                Options.blob_file_starting_level: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:           Options.merge_operator: None
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:        Options.compaction_filter: None
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:        Options.compaction_filter_factory: None
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:  Options.sst_partitioner_factory: None
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559f29d91b60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x559f28fb7350#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:        Options.write_buffer_size: 16777216
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:  Options.max_write_buffer_number: 64
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:          Options.compression: LZ4
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:       Options.prefix_extractor: nullptr
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:             Options.num_levels: 7
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                  Options.compression_opts.level: 32767
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:               Options.compression_opts.strategy: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                  Options.compression_opts.enabled: false
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                        Options.arena_block_size: 1048576
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                Options.disable_auto_compactions: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                   Options.inplace_update_support: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                           Options.bloom_locality: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                    Options.max_successive_merges: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                Options.paranoid_file_checks: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                Options.force_consistency_checks: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                Options.report_bg_io_stats: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                               Options.ttl: 2592000
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                       Options.enable_blob_files: false
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                           Options.min_blob_size: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                          Options.blob_file_size: 268435456
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                Options.blob_file_starting_level: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:           Options.merge_operator: None
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:        Options.compaction_filter: None
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:        Options.compaction_filter_factory: None
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:  Options.sst_partitioner_factory: None
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559f29d91b60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x559f28fb7350#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:        Options.write_buffer_size: 16777216
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:  Options.max_write_buffer_number: 64
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:          Options.compression: LZ4
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:       Options.prefix_extractor: nullptr
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:             Options.num_levels: 7
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                  Options.compression_opts.level: 32767
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:               Options.compression_opts.strategy: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                  Options.compression_opts.enabled: false
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                        Options.arena_block_size: 1048576
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                Options.disable_auto_compactions: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                   Options.inplace_update_support: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                           Options.bloom_locality: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                    Options.max_successive_merges: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                Options.paranoid_file_checks: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                Options.force_consistency_checks: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                Options.report_bg_io_stats: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                               Options.ttl: 2592000
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                       Options.enable_blob_files: false
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                           Options.min_blob_size: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                          Options.blob_file_size: 268435456
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                Options.blob_file_starting_level: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:           Options.merge_operator: None
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:        Options.compaction_filter: None
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:        Options.compaction_filter_factory: None
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:  Options.sst_partitioner_factory: None
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559f29d91b60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x559f28fb7350#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:        Options.write_buffer_size: 16777216
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:  Options.max_write_buffer_number: 64
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:          Options.compression: LZ4
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:       Options.prefix_extractor: nullptr
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:             Options.num_levels: 7
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                  Options.compression_opts.level: 32767
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:               Options.compression_opts.strategy: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                  Options.compression_opts.enabled: false
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                        Options.arena_block_size: 1048576
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                Options.disable_auto_compactions: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                   Options.inplace_update_support: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                           Options.bloom_locality: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                    Options.max_successive_merges: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                Options.paranoid_file_checks: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                Options.force_consistency_checks: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                Options.report_bg_io_stats: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                               Options.ttl: 2592000
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                       Options.enable_blob_files: false
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                           Options.min_blob_size: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                          Options.blob_file_size: 268435456
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                Options.blob_file_starting_level: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:           Options.merge_operator: None
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:        Options.compaction_filter: None
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:        Options.compaction_filter_factory: None
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:  Options.sst_partitioner_factory: None
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559f29d91b80)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x559f28fb69b0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:        Options.write_buffer_size: 16777216
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:  Options.max_write_buffer_number: 64
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:          Options.compression: LZ4
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:       Options.prefix_extractor: nullptr
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:             Options.num_levels: 7
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                  Options.compression_opts.level: 32767
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:               Options.compression_opts.strategy: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                  Options.compression_opts.enabled: false
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                        Options.arena_block_size: 1048576
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                Options.disable_auto_compactions: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                   Options.inplace_update_support: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                           Options.bloom_locality: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                    Options.max_successive_merges: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                Options.paranoid_file_checks: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                Options.force_consistency_checks: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                Options.report_bg_io_stats: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                               Options.ttl: 2592000
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                       Options.enable_blob_files: false
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                           Options.min_blob_size: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                          Options.blob_file_size: 268435456
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                Options.blob_file_starting_level: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:           Options.merge_operator: None
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:        Options.compaction_filter: None
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:        Options.compaction_filter_factory: None
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:  Options.sst_partitioner_factory: None
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559f29d91b80)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x559f28fb69b0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:        Options.write_buffer_size: 16777216
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:  Options.max_write_buffer_number: 64
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:          Options.compression: LZ4
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:       Options.prefix_extractor: nullptr
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:             Options.num_levels: 7
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                  Options.compression_opts.level: 32767
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:               Options.compression_opts.strategy: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                  Options.compression_opts.enabled: false
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                        Options.arena_block_size: 1048576
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                Options.disable_auto_compactions: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                   Options.inplace_update_support: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                           Options.bloom_locality: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                    Options.max_successive_merges: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                Options.paranoid_file_checks: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                Options.force_consistency_checks: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                Options.report_bg_io_stats: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                               Options.ttl: 2592000
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                       Options.enable_blob_files: false
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                           Options.min_blob_size: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                          Options.blob_file_size: 268435456
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                Options.blob_file_starting_level: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:           Options.merge_operator: None
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:        Options.compaction_filter: None
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:        Options.compaction_filter_factory: None
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:  Options.sst_partitioner_factory: None
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559f29d91b80)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x559f28fb69b0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:        Options.write_buffer_size: 16777216
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:  Options.max_write_buffer_number: 64
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:          Options.compression: LZ4
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:       Options.prefix_extractor: nullptr
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:             Options.num_levels: 7
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                  Options.compression_opts.level: 32767
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:               Options.compression_opts.strategy: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                  Options.compression_opts.enabled: false
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                        Options.arena_block_size: 1048576
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                Options.disable_auto_compactions: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                   Options.inplace_update_support: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                           Options.bloom_locality: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                    Options.max_successive_merges: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                Options.paranoid_file_checks: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                Options.force_consistency_checks: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                Options.report_bg_io_stats: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                               Options.ttl: 2592000
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                       Options.enable_blob_files: false
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                           Options.min_blob_size: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                          Options.blob_file_size: 268435456
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                Options.blob_file_starting_level: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: ab27757c-4f23-4fe8-9f12-78d1a161a24a
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759916671633238, "job": 1, "event": "recovery_started", "wal_files": [31]}
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759916671633484, "job": 1, "event": "recovery_finished"}
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old nid_max 1025
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old blobid_max 10240
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta min_alloc_size 0x1000
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: freelist init
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: freelist _read_cfg
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: bluefs umount
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: bdev(0x559f29dbd000 /var/lib/ceph/osd/ceph-1/block) close
Oct  8 05:44:31 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:44:31 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:44:31 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:44:31 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: bdev(0x559f29dbd000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: bdev(0x559f29dbd000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: bdev(0x559f29dbd000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: bluefs mount
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: bluefs mount shared_bdev_used = 4718592
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: RocksDB version: 7.9.2
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Git sha 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Compile date 2025-07-17 03:12:14
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: DB SUMMARY
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: DB Session ID:  OKB0236OTSDNNJ5ULVKR
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: CURRENT file:  CURRENT
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: IDENTITY file:  IDENTITY
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                         Options.error_if_exists: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                       Options.create_if_missing: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                         Options.paranoid_checks: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:             Options.flush_verify_memtable_count: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                                     Options.env: 0x559f29f3e2a0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                                      Options.fs: LegacyFileSystem
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                                Options.info_log: 0x559f29d91920
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                Options.max_file_opening_threads: 16
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                              Options.statistics: (nil)
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                               Options.use_fsync: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                       Options.max_log_file_size: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                   Options.log_file_time_to_roll: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                       Options.keep_log_file_num: 1000
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                    Options.recycle_log_file_num: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                         Options.allow_fallocate: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                        Options.allow_mmap_reads: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                       Options.allow_mmap_writes: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                        Options.use_direct_reads: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:          Options.create_missing_column_families: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                              Options.db_log_dir: 
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                                 Options.wal_dir: db.wal
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                Options.table_cache_numshardbits: 6
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                         Options.WAL_ttl_seconds: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                       Options.WAL_size_limit_MB: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:             Options.manifest_preallocation_size: 4194304
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                     Options.is_fd_close_on_exec: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                   Options.advise_random_on_open: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                    Options.db_write_buffer_size: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                    Options.write_buffer_manager: 0x559f29e9ac80
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.access_hint_on_compaction_start: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                      Options.use_adaptive_mutex: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                            Options.rate_limiter: (nil)
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                       Options.wal_recovery_mode: 2
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                  Options.enable_thread_tracking: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                  Options.enable_pipelined_write: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                  Options.unordered_write: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:             Options.write_thread_max_yield_usec: 100
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                               Options.row_cache: None
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                              Options.wal_filter: None
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:             Options.avoid_flush_during_recovery: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:             Options.allow_ingest_behind: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:             Options.two_write_queues: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:             Options.manual_wal_flush: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:             Options.wal_compression: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:             Options.atomic_flush: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                 Options.persist_stats_to_disk: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                 Options.write_dbid_to_manifest: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                 Options.log_readahead_size: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                 Options.best_efforts_recovery: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:             Options.allow_data_in_errors: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:             Options.db_host_id: __hostname__
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:             Options.enforce_single_del_contracts: true
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:             Options.max_background_jobs: 4
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:             Options.max_background_compactions: -1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:             Options.max_subcompactions: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:           Options.writable_file_max_buffer_size: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:             Options.delayed_write_rate : 16777216
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:             Options.max_total_wal_size: 1073741824
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                   Options.stats_dump_period_sec: 600
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                 Options.stats_persist_period_sec: 600
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                          Options.max_open_files: -1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                          Options.bytes_per_sync: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                      Options.wal_bytes_per_sync: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                   Options.strict_bytes_per_sync: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:       Options.compaction_readahead_size: 2097152
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                  Options.max_background_flushes: -1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Compression algorithms supported:
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: #011kZSTD supported: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: #011kXpressCompression supported: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: #011kBZip2Compression supported: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: #011kLZ4Compression supported: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: #011kZlibCompression supported: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: #011kLZ4HCCompression supported: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: #011kSnappyCompression supported: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Fast CRC32 supported: Supported on x86
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: DMutex implementation: pthread_mutex_t
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:        Options.compaction_filter: None
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:        Options.compaction_filter_factory: None
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:  Options.sst_partitioner_factory: None
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559f29d91680)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x559f28fb7350#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:        Options.write_buffer_size: 16777216
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:  Options.max_write_buffer_number: 64
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:          Options.compression: LZ4
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:       Options.prefix_extractor: nullptr
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:             Options.num_levels: 7
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                  Options.compression_opts.level: 32767
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:               Options.compression_opts.strategy: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                  Options.compression_opts.enabled: false
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                        Options.arena_block_size: 1048576
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                Options.disable_auto_compactions: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                   Options.inplace_update_support: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                           Options.bloom_locality: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                    Options.max_successive_merges: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                Options.paranoid_file_checks: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                Options.force_consistency_checks: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                Options.report_bg_io_stats: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                               Options.ttl: 2592000
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                       Options.enable_blob_files: false
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                           Options.min_blob_size: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                          Options.blob_file_size: 268435456
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                Options.blob_file_starting_level: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:           Options.merge_operator: None
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:        Options.compaction_filter: None
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:        Options.compaction_filter_factory: None
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:  Options.sst_partitioner_factory: None
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559f29d91680)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x559f28fb7350#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:        Options.write_buffer_size: 16777216
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:  Options.max_write_buffer_number: 64
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:          Options.compression: LZ4
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:       Options.prefix_extractor: nullptr
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:             Options.num_levels: 7
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                  Options.compression_opts.level: 32767
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:               Options.compression_opts.strategy: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                  Options.compression_opts.enabled: false
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                        Options.arena_block_size: 1048576
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                Options.disable_auto_compactions: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                   Options.inplace_update_support: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                           Options.bloom_locality: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                    Options.max_successive_merges: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                Options.paranoid_file_checks: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                Options.force_consistency_checks: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                Options.report_bg_io_stats: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                               Options.ttl: 2592000
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                       Options.enable_blob_files: false
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                           Options.min_blob_size: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                          Options.blob_file_size: 268435456
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                Options.blob_file_starting_level: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:           Options.merge_operator: None
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:        Options.compaction_filter: None
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:        Options.compaction_filter_factory: None
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:  Options.sst_partitioner_factory: None
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559f29d91680)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x559f28fb7350#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:        Options.write_buffer_size: 16777216
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:  Options.max_write_buffer_number: 64
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:          Options.compression: LZ4
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:       Options.prefix_extractor: nullptr
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:             Options.num_levels: 7
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                  Options.compression_opts.level: 32767
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:               Options.compression_opts.strategy: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                  Options.compression_opts.enabled: false
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                        Options.arena_block_size: 1048576
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                Options.disable_auto_compactions: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                   Options.inplace_update_support: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                           Options.bloom_locality: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                    Options.max_successive_merges: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                Options.paranoid_file_checks: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                Options.force_consistency_checks: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                Options.report_bg_io_stats: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                               Options.ttl: 2592000
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                       Options.enable_blob_files: false
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                           Options.min_blob_size: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                          Options.blob_file_size: 268435456
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                Options.blob_file_starting_level: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:           Options.merge_operator: None
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:        Options.compaction_filter: None
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:        Options.compaction_filter_factory: None
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:  Options.sst_partitioner_factory: None
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559f29d91680)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x559f28fb7350#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:        Options.write_buffer_size: 16777216
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:  Options.max_write_buffer_number: 64
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:          Options.compression: LZ4
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:       Options.prefix_extractor: nullptr
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:             Options.num_levels: 7
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                  Options.compression_opts.level: 32767
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:               Options.compression_opts.strategy: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                  Options.compression_opts.enabled: false
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                        Options.arena_block_size: 1048576
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                Options.disable_auto_compactions: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                   Options.inplace_update_support: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                           Options.bloom_locality: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                    Options.max_successive_merges: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                Options.paranoid_file_checks: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                Options.force_consistency_checks: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                Options.report_bg_io_stats: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                               Options.ttl: 2592000
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                       Options.enable_blob_files: false
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                           Options.min_blob_size: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                          Options.blob_file_size: 268435456
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                Options.blob_file_starting_level: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:           Options.merge_operator: None
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:        Options.compaction_filter: None
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:        Options.compaction_filter_factory: None
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:  Options.sst_partitioner_factory: None
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559f29d91680)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x559f28fb7350#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:        Options.write_buffer_size: 16777216
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:  Options.max_write_buffer_number: 64
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:          Options.compression: LZ4
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:       Options.prefix_extractor: nullptr
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:             Options.num_levels: 7
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                  Options.compression_opts.level: 32767
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:               Options.compression_opts.strategy: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                  Options.compression_opts.enabled: false
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                        Options.arena_block_size: 1048576
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                Options.disable_auto_compactions: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                   Options.inplace_update_support: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                           Options.bloom_locality: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                    Options.max_successive_merges: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                Options.paranoid_file_checks: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                Options.force_consistency_checks: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                Options.report_bg_io_stats: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                               Options.ttl: 2592000
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                       Options.enable_blob_files: false
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                           Options.min_blob_size: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                          Options.blob_file_size: 268435456
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                Options.blob_file_starting_level: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:           Options.merge_operator: None
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:        Options.compaction_filter: None
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:        Options.compaction_filter_factory: None
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:  Options.sst_partitioner_factory: None
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559f29d91680)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x559f28fb7350#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:        Options.write_buffer_size: 16777216
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:  Options.max_write_buffer_number: 64
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:          Options.compression: LZ4
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:       Options.prefix_extractor: nullptr
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:             Options.num_levels: 7
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                  Options.compression_opts.level: 32767
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:               Options.compression_opts.strategy: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                  Options.compression_opts.enabled: false
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                        Options.arena_block_size: 1048576
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                Options.disable_auto_compactions: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                   Options.inplace_update_support: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                           Options.bloom_locality: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                    Options.max_successive_merges: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                Options.paranoid_file_checks: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                Options.force_consistency_checks: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                Options.report_bg_io_stats: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                               Options.ttl: 2592000
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                       Options.enable_blob_files: false
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                           Options.min_blob_size: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                          Options.blob_file_size: 268435456
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                Options.blob_file_starting_level: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:           Options.merge_operator: None
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:        Options.compaction_filter: None
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:        Options.compaction_filter_factory: None
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:  Options.sst_partitioner_factory: None
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559f29d91680)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x559f28fb7350#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:        Options.write_buffer_size: 16777216
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:  Options.max_write_buffer_number: 64
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:          Options.compression: LZ4
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:       Options.prefix_extractor: nullptr
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:             Options.num_levels: 7
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                  Options.compression_opts.level: 32767
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:               Options.compression_opts.strategy: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                  Options.compression_opts.enabled: false
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                        Options.arena_block_size: 1048576
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                Options.disable_auto_compactions: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                   Options.inplace_update_support: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                           Options.bloom_locality: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                    Options.max_successive_merges: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                Options.paranoid_file_checks: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                Options.force_consistency_checks: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                Options.report_bg_io_stats: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                               Options.ttl: 2592000
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                       Options.enable_blob_files: false
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                           Options.min_blob_size: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                          Options.blob_file_size: 268435456
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                Options.blob_file_starting_level: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:           Options.merge_operator: None
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:        Options.compaction_filter: None
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:        Options.compaction_filter_factory: None
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:  Options.sst_partitioner_factory: None
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559f29d91ac0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x559f28fb69b0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:        Options.write_buffer_size: 16777216
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:  Options.max_write_buffer_number: 64
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:          Options.compression: LZ4
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:       Options.prefix_extractor: nullptr
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:             Options.num_levels: 7
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                  Options.compression_opts.level: 32767
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:               Options.compression_opts.strategy: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                  Options.compression_opts.enabled: false
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                        Options.arena_block_size: 1048576
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                Options.disable_auto_compactions: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                   Options.inplace_update_support: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                           Options.bloom_locality: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                    Options.max_successive_merges: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                Options.paranoid_file_checks: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                Options.force_consistency_checks: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                Options.report_bg_io_stats: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                               Options.ttl: 2592000
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                       Options.enable_blob_files: false
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                           Options.min_blob_size: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                          Options.blob_file_size: 268435456
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                Options.blob_file_starting_level: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:           Options.merge_operator: None
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:        Options.compaction_filter: None
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:        Options.compaction_filter_factory: None
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:  Options.sst_partitioner_factory: None
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559f29d91ac0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x559f28fb69b0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:        Options.write_buffer_size: 16777216
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:  Options.max_write_buffer_number: 64
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:          Options.compression: LZ4
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:       Options.prefix_extractor: nullptr
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:             Options.num_levels: 7
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                  Options.compression_opts.level: 32767
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:               Options.compression_opts.strategy: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                  Options.compression_opts.enabled: false
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                        Options.arena_block_size: 1048576
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                Options.disable_auto_compactions: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                   Options.inplace_update_support: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                           Options.bloom_locality: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                    Options.max_successive_merges: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                Options.paranoid_file_checks: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                Options.force_consistency_checks: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                Options.report_bg_io_stats: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                               Options.ttl: 2592000
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                       Options.enable_blob_files: false
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                           Options.min_blob_size: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                          Options.blob_file_size: 268435456
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                Options.blob_file_starting_level: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:           Options.merge_operator: None
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:        Options.compaction_filter: None
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:        Options.compaction_filter_factory: None
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:  Options.sst_partitioner_factory: None
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559f29d91ac0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x559f28fb69b0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:        Options.write_buffer_size: 16777216
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:  Options.max_write_buffer_number: 64
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:          Options.compression: LZ4
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:       Options.prefix_extractor: nullptr
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:             Options.num_levels: 7
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                  Options.compression_opts.level: 32767
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:               Options.compression_opts.strategy: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                  Options.compression_opts.enabled: false
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                        Options.arena_block_size: 1048576
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                Options.disable_auto_compactions: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                   Options.inplace_update_support: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                           Options.bloom_locality: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                    Options.max_successive_merges: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                Options.paranoid_file_checks: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                Options.force_consistency_checks: 1
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                Options.report_bg_io_stats: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                               Options.ttl: 2592000
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                       Options.enable_blob_files: false
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                           Options.min_blob_size: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                          Options.blob_file_size: 268435456
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb:                Options.blob_file_starting_level: 0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: ab27757c-4f23-4fe8-9f12-78d1a161a24a
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759916671888685, "job": 1, "event": "recovery_started", "wal_files": [31]}
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759916671893452, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1272, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759916671, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ab27757c-4f23-4fe8-9f12-78d1a161a24a", "db_session_id": "OKB0236OTSDNNJ5ULVKR", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759916671897415, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1595, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 469, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 571, "raw_average_value_size": 285, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759916671, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ab27757c-4f23-4fe8-9f12-78d1a161a24a", "db_session_id": "OKB0236OTSDNNJ5ULVKR", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759916671899844, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759916671, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ab27757c-4f23-4fe8-9f12-78d1a161a24a", "db_session_id": "OKB0236OTSDNNJ5ULVKR", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759916671901691, "job": 1, "event": "recovery_finished"}
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x559f29f8e000
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: DB pointer 0x559f29f4a000
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super from 4, latest 4
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super done
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.1 total, 0.1 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x559f28fb7350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.6e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x559f28fb7350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.6e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x559f28fb7350#2 capacity: 460.80 MB usag
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/19.2.3/rpm/el9/BUILD/ceph-19.2.3/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/19.2.3/rpm/el9/BUILD/ceph-19.2.3/src/cls/hello/cls_hello.cc:316: loading cls_hello
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: _get_class not permitted to load lua
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: _get_class not permitted to load sdk
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: osd.1 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: osd.1 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: osd.1 0 load_pgs
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: osd.1 0 load_pgs opened 0 pgs
Oct  8 05:44:31 np0005475493 ceph-osd[81751]: osd.1 0 log_to_monitors true
Oct  8 05:44:31 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-osd-1[81747]: 2025-10-08T09:44:31.932+0000 7f264c97f740 -1 osd.1 0 log_to_monitors true
Oct  8 05:44:31 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]} v 0)
Oct  8 05:44:31 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6802/2242474769,v1:192.168.122.100:6803/2242474769]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
Oct  8 05:44:32 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v32: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct  8 05:44:32 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e5 do_prune osdmap full prune enabled
Oct  8 05:44:32 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e5 encode_pending skipping prime_pg_temp; mapping job did not start
Oct  8 05:44:32 np0005475493 ceph-mon[73572]: from='osd.1 [v2:192.168.122.100:6802/2242474769,v1:192.168.122.100:6803/2242474769]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
Oct  8 05:44:32 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6802/2242474769,v1:192.168.122.100:6803/2242474769]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Oct  8 05:44:32 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e6 e6: 2 total, 0 up, 2 in
Oct  8 05:44:32 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e6: 2 total, 0 up, 2 in
Oct  8 05:44:32 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0)
Oct  8 05:44:32 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6802/2242474769,v1:192.168.122.100:6803/2242474769]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Oct  8 05:44:32 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e6 create-or-move crush item name 'osd.1' initial_weight 0.0195 at location {host=compute-0,root=default}
Oct  8 05:44:32 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Oct  8 05:44:32 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct  8 05:44:32 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Oct  8 05:44:32 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct  8 05:44:32 np0005475493 ceph-mgr[73869]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct  8 05:44:32 np0005475493 ceph-mgr[73869]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct  8 05:44:32 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Oct  8 05:44:32 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Oct  8 05:44:32 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct  8 05:44:32 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:44:32 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct  8 05:44:32 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:44:33 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e6 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  8 05:44:33 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct  8 05:44:33 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:44:33 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e6 do_prune osdmap full prune enabled
Oct  8 05:44:33 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e6 encode_pending skipping prime_pg_temp; mapping job did not start
Oct  8 05:44:33 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6802/2242474769,v1:192.168.122.100:6803/2242474769]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Oct  8 05:44:33 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e7 e7: 2 total, 0 up, 2 in
Oct  8 05:44:33 np0005475493 ceph-osd[81751]: osd.1 0 done with init, starting boot process
Oct  8 05:44:33 np0005475493 ceph-osd[81751]: osd.1 0 start_boot
Oct  8 05:44:33 np0005475493 ceph-osd[81751]: osd.1 0 maybe_override_options_for_qos osd_max_backfills set to 1
Oct  8 05:44:33 np0005475493 ceph-osd[81751]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Oct  8 05:44:33 np0005475493 ceph-osd[81751]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Oct  8 05:44:33 np0005475493 ceph-osd[81751]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Oct  8 05:44:33 np0005475493 ceph-osd[81751]: osd.1 0  bench count 12288000 bsize 4 KiB
Oct  8 05:44:33 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e7: 2 total, 0 up, 2 in
Oct  8 05:44:33 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Oct  8 05:44:33 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct  8 05:44:33 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Oct  8 05:44:33 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct  8 05:44:33 np0005475493 ceph-mon[73572]: from='osd.1 [v2:192.168.122.100:6802/2242474769,v1:192.168.122.100:6803/2242474769]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Oct  8 05:44:33 np0005475493 ceph-mon[73572]: from='osd.1 [v2:192.168.122.100:6802/2242474769,v1:192.168.122.100:6803/2242474769]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Oct  8 05:44:33 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:44:33 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:44:33 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:44:33 np0005475493 ceph-mgr[73869]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct  8 05:44:33 np0005475493 ceph-mgr[73869]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct  8 05:44:33 np0005475493 ceph-mgr[73869]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/2242474769; not ready for session (expect reconnect)
Oct  8 05:44:33 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Oct  8 05:44:33 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct  8 05:44:33 np0005475493 ceph-mgr[73869]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct  8 05:44:34 np0005475493 podman[82663]: 2025-10-08 09:44:34.04095448 +0000 UTC m=+0.086159817 container exec 01c666addd8584f02eb7d29040d97e45d8cb7f834fd134029c1f7cca550aeb9d (image=quay.io/ceph/ceph:v19, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-mon-compute-0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Oct  8 05:44:34 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]} v 0)
Oct  8 05:44:34 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.101:6800/645704721,v1:192.168.122.101:6801/645704721]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
Oct  8 05:44:34 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct  8 05:44:34 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:44:34 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct  8 05:44:34 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:44:34 np0005475493 podman[82681]: 2025-10-08 09:44:34.209205068 +0000 UTC m=+0.052815263 container exec_died 01c666addd8584f02eb7d29040d97e45d8cb7f834fd134029c1f7cca550aeb9d (image=quay.io/ceph/ceph:v19, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  8 05:44:34 np0005475493 podman[82663]: 2025-10-08 09:44:34.222146616 +0000 UTC m=+0.267351923 container exec_died 01c666addd8584f02eb7d29040d97e45d8cb7f834fd134029c1f7cca550aeb9d (image=quay.io/ceph/ceph:v19, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-mon-compute-0, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct  8 05:44:34 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct  8 05:44:34 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:44:34 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v35: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct  8 05:44:34 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  8 05:44:34 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:44:34 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  8 05:44:34 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:44:34 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e7 do_prune osdmap full prune enabled
Oct  8 05:44:34 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e7 encode_pending skipping prime_pg_temp; mapping job did not start
Oct  8 05:44:34 np0005475493 ceph-mgr[73869]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/2242474769; not ready for session (expect reconnect)
Oct  8 05:44:34 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Oct  8 05:44:34 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct  8 05:44:34 np0005475493 ceph-mgr[73869]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct  8 05:44:34 np0005475493 ceph-mon[73572]: from='osd.1 [v2:192.168.122.100:6802/2242474769,v1:192.168.122.100:6803/2242474769]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Oct  8 05:44:34 np0005475493 ceph-mon[73572]: from='osd.0 [v2:192.168.122.101:6800/645704721,v1:192.168.122.101:6801/645704721]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
Oct  8 05:44:34 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:44:34 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:44:34 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:44:34 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:44:34 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:44:34 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.101:6800/645704721,v1:192.168.122.101:6801/645704721]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Oct  8 05:44:34 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e8 e8: 2 total, 0 up, 2 in
Oct  8 05:44:34 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e8: 2 total, 0 up, 2 in
Oct  8 05:44:34 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-1", "root=default"]} v 0)
Oct  8 05:44:34 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.101:6800/645704721,v1:192.168.122.101:6801/645704721]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-1", "root=default"]}]: dispatch
Oct  8 05:44:34 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e8 create-or-move crush item name 'osd.0' initial_weight 0.0195 at location {host=compute-1,root=default}
Oct  8 05:44:34 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Oct  8 05:44:34 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct  8 05:44:34 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Oct  8 05:44:34 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct  8 05:44:34 np0005475493 ceph-mgr[73869]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct  8 05:44:34 np0005475493 ceph-mgr[73869]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct  8 05:44:35 np0005475493 podman[82835]: 2025-10-08 09:44:35.152509611 +0000 UTC m=+0.045522719 container create 3dc6f8aa48536afd09880ab22fc8bcf93076d3eac62c87361dec8f49e3e11ab0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_clarke, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  8 05:44:35 np0005475493 systemd[1]: Started libpod-conmon-3dc6f8aa48536afd09880ab22fc8bcf93076d3eac62c87361dec8f49e3e11ab0.scope.
Oct  8 05:44:35 np0005475493 podman[82835]: 2025-10-08 09:44:35.131603619 +0000 UTC m=+0.024616757 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 05:44:35 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:44:35 np0005475493 podman[82835]: 2025-10-08 09:44:35.252858134 +0000 UTC m=+0.145871262 container init 3dc6f8aa48536afd09880ab22fc8bcf93076d3eac62c87361dec8f49e3e11ab0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_clarke, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct  8 05:44:35 np0005475493 podman[82835]: 2025-10-08 09:44:35.264237403 +0000 UTC m=+0.157250501 container start 3dc6f8aa48536afd09880ab22fc8bcf93076d3eac62c87361dec8f49e3e11ab0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_clarke, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True)
Oct  8 05:44:35 np0005475493 sleepy_clarke[82849]: 167 167
Oct  8 05:44:35 np0005475493 systemd[1]: libpod-3dc6f8aa48536afd09880ab22fc8bcf93076d3eac62c87361dec8f49e3e11ab0.scope: Deactivated successfully.
Oct  8 05:44:35 np0005475493 podman[82835]: 2025-10-08 09:44:35.273836047 +0000 UTC m=+0.166849155 container attach 3dc6f8aa48536afd09880ab22fc8bcf93076d3eac62c87361dec8f49e3e11ab0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_clarke, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  8 05:44:35 np0005475493 podman[82835]: 2025-10-08 09:44:35.274411035 +0000 UTC m=+0.167424143 container died 3dc6f8aa48536afd09880ab22fc8bcf93076d3eac62c87361dec8f49e3e11ab0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_clarke, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  8 05:44:35 np0005475493 systemd[1]: var-lib-containers-storage-overlay-e1966638ce9b85886e37fa901e2c697734d2dc5e9929202cbced237cf140d4b8-merged.mount: Deactivated successfully.
Oct  8 05:44:35 np0005475493 podman[82835]: 2025-10-08 09:44:35.331590141 +0000 UTC m=+0.224603239 container remove 3dc6f8aa48536afd09880ab22fc8bcf93076d3eac62c87361dec8f49e3e11ab0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_clarke, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  8 05:44:35 np0005475493 systemd[1]: libpod-conmon-3dc6f8aa48536afd09880ab22fc8bcf93076d3eac62c87361dec8f49e3e11ab0.scope: Deactivated successfully.
Oct  8 05:44:35 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct  8 05:44:35 np0005475493 podman[82871]: 2025-10-08 09:44:35.487192141 +0000 UTC m=+0.048619005 container create 1346d9a3abfaf549e34b120a8e6ae89e8d5cd2fb54c50c4a55300e4dc4170174 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_hopper, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  8 05:44:35 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:44:35 np0005475493 systemd[1]: Started libpod-conmon-1346d9a3abfaf549e34b120a8e6ae89e8d5cd2fb54c50c4a55300e4dc4170174.scope.
Oct  8 05:44:35 np0005475493 podman[82871]: 2025-10-08 09:44:35.465815515 +0000 UTC m=+0.027242439 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 05:44:35 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:44:35 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53f461279a8cfc784b895aed3a69123d919356dfaea2d9c7864e5e88e059ec41/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  8 05:44:35 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53f461279a8cfc784b895aed3a69123d919356dfaea2d9c7864e5e88e059ec41/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 05:44:35 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53f461279a8cfc784b895aed3a69123d919356dfaea2d9c7864e5e88e059ec41/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 05:44:35 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53f461279a8cfc784b895aed3a69123d919356dfaea2d9c7864e5e88e059ec41/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  8 05:44:35 np0005475493 podman[82871]: 2025-10-08 09:44:35.602438901 +0000 UTC m=+0.163865795 container init 1346d9a3abfaf549e34b120a8e6ae89e8d5cd2fb54c50c4a55300e4dc4170174 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_hopper, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  8 05:44:35 np0005475493 podman[82871]: 2025-10-08 09:44:35.610115156 +0000 UTC m=+0.171542010 container start 1346d9a3abfaf549e34b120a8e6ae89e8d5cd2fb54c50c4a55300e4dc4170174 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_hopper, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  8 05:44:35 np0005475493 podman[82871]: 2025-10-08 09:44:35.622854497 +0000 UTC m=+0.184281371 container attach 1346d9a3abfaf549e34b120a8e6ae89e8d5cd2fb54c50c4a55300e4dc4170174 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_hopper, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Oct  8 05:44:35 np0005475493 ceph-mgr[73869]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/2242474769; not ready for session (expect reconnect)
Oct  8 05:44:35 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Oct  8 05:44:35 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct  8 05:44:35 np0005475493 ceph-mgr[73869]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct  8 05:44:35 np0005475493 ceph-mon[73572]: from='osd.0 [v2:192.168.122.101:6800/645704721,v1:192.168.122.101:6801/645704721]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Oct  8 05:44:35 np0005475493 ceph-mon[73572]: from='osd.0 [v2:192.168.122.101:6800/645704721,v1:192.168.122.101:6801/645704721]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-1", "root=default"]}]: dispatch
Oct  8 05:44:35 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:44:35 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e8 do_prune osdmap full prune enabled
Oct  8 05:44:35 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e8 encode_pending skipping prime_pg_temp; mapping job did not start
Oct  8 05:44:35 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.101:6800/645704721,v1:192.168.122.101:6801/645704721]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-1", "root=default"]}]': finished
Oct  8 05:44:35 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e9 e9: 2 total, 0 up, 2 in
Oct  8 05:44:35 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e9: 2 total, 0 up, 2 in
Oct  8 05:44:35 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Oct  8 05:44:35 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct  8 05:44:35 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Oct  8 05:44:35 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct  8 05:44:35 np0005475493 ceph-mgr[73869]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct  8 05:44:35 np0005475493 ceph-mgr[73869]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct  8 05:44:35 np0005475493 ceph-mgr[73869]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.101:6800/645704721; not ready for session (expect reconnect)
Oct  8 05:44:35 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Oct  8 05:44:35 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct  8 05:44:35 np0005475493 ceph-mgr[73869]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct  8 05:44:36 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct  8 05:44:36 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:44:36 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct  8 05:44:36 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:44:36 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct  8 05:44:36 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:44:36 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct  8 05:44:36 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:44:36 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} v 0)
Oct  8 05:44:36 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Oct  8 05:44:36 np0005475493 ceph-mgr[73869]: [cephadm INFO root] Adjusting osd_memory_target on compute-1 to  5247M
Oct  8 05:44:36 np0005475493 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-1 to  5247M
Oct  8 05:44:36 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Oct  8 05:44:36 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:44:36 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v38: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct  8 05:44:36 np0005475493 heuristic_hopper[82887]: [
Oct  8 05:44:36 np0005475493 heuristic_hopper[82887]:    {
Oct  8 05:44:36 np0005475493 heuristic_hopper[82887]:        "available": false,
Oct  8 05:44:36 np0005475493 heuristic_hopper[82887]:        "being_replaced": false,
Oct  8 05:44:36 np0005475493 heuristic_hopper[82887]:        "ceph_device_lvm": false,
Oct  8 05:44:36 np0005475493 heuristic_hopper[82887]:        "device_id": "QEMU_DVD-ROM_QM00001",
Oct  8 05:44:36 np0005475493 heuristic_hopper[82887]:        "lsm_data": {},
Oct  8 05:44:36 np0005475493 heuristic_hopper[82887]:        "lvs": [],
Oct  8 05:44:36 np0005475493 heuristic_hopper[82887]:        "path": "/dev/sr0",
Oct  8 05:44:36 np0005475493 heuristic_hopper[82887]:        "rejected_reasons": [
Oct  8 05:44:36 np0005475493 heuristic_hopper[82887]:            "Insufficient space (<5GB)",
Oct  8 05:44:36 np0005475493 heuristic_hopper[82887]:            "Has a FileSystem"
Oct  8 05:44:36 np0005475493 heuristic_hopper[82887]:        ],
Oct  8 05:44:36 np0005475493 heuristic_hopper[82887]:        "sys_api": {
Oct  8 05:44:36 np0005475493 heuristic_hopper[82887]:            "actuators": null,
Oct  8 05:44:36 np0005475493 heuristic_hopper[82887]:            "device_nodes": [
Oct  8 05:44:36 np0005475493 heuristic_hopper[82887]:                "sr0"
Oct  8 05:44:36 np0005475493 heuristic_hopper[82887]:            ],
Oct  8 05:44:36 np0005475493 heuristic_hopper[82887]:            "devname": "sr0",
Oct  8 05:44:36 np0005475493 heuristic_hopper[82887]:            "human_readable_size": "482.00 KB",
Oct  8 05:44:36 np0005475493 heuristic_hopper[82887]:            "id_bus": "ata",
Oct  8 05:44:36 np0005475493 heuristic_hopper[82887]:            "model": "QEMU DVD-ROM",
Oct  8 05:44:36 np0005475493 heuristic_hopper[82887]:            "nr_requests": "2",
Oct  8 05:44:36 np0005475493 heuristic_hopper[82887]:            "parent": "/dev/sr0",
Oct  8 05:44:36 np0005475493 heuristic_hopper[82887]:            "partitions": {},
Oct  8 05:44:36 np0005475493 heuristic_hopper[82887]:            "path": "/dev/sr0",
Oct  8 05:44:36 np0005475493 heuristic_hopper[82887]:            "removable": "1",
Oct  8 05:44:36 np0005475493 heuristic_hopper[82887]:            "rev": "2.5+",
Oct  8 05:44:36 np0005475493 heuristic_hopper[82887]:            "ro": "0",
Oct  8 05:44:36 np0005475493 heuristic_hopper[82887]:            "rotational": "0",
Oct  8 05:44:36 np0005475493 heuristic_hopper[82887]:            "sas_address": "",
Oct  8 05:44:36 np0005475493 heuristic_hopper[82887]:            "sas_device_handle": "",
Oct  8 05:44:36 np0005475493 heuristic_hopper[82887]:            "scheduler_mode": "mq-deadline",
Oct  8 05:44:36 np0005475493 heuristic_hopper[82887]:            "sectors": 0,
Oct  8 05:44:36 np0005475493 heuristic_hopper[82887]:            "sectorsize": "2048",
Oct  8 05:44:36 np0005475493 heuristic_hopper[82887]:            "size": 493568.0,
Oct  8 05:44:36 np0005475493 heuristic_hopper[82887]:            "support_discard": "2048",
Oct  8 05:44:36 np0005475493 heuristic_hopper[82887]:            "type": "disk",
Oct  8 05:44:36 np0005475493 heuristic_hopper[82887]:            "vendor": "QEMU"
Oct  8 05:44:36 np0005475493 heuristic_hopper[82887]:        }
Oct  8 05:44:36 np0005475493 heuristic_hopper[82887]:    }
Oct  8 05:44:36 np0005475493 heuristic_hopper[82887]: ]
Oct  8 05:44:36 np0005475493 systemd[1]: libpod-1346d9a3abfaf549e34b120a8e6ae89e8d5cd2fb54c50c4a55300e4dc4170174.scope: Deactivated successfully.
Oct  8 05:44:36 np0005475493 podman[82871]: 2025-10-08 09:44:36.510734788 +0000 UTC m=+1.072161662 container died 1346d9a3abfaf549e34b120a8e6ae89e8d5cd2fb54c50c4a55300e4dc4170174 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_hopper, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True)
Oct  8 05:44:36 np0005475493 ceph-osd[81751]: osd.1 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 37.521 iops: 9605.469 elapsed_sec: 0.312
Oct  8 05:44:36 np0005475493 ceph-osd[81751]: log_channel(cluster) log [WRN] : OSD bench result of 9605.469339 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Oct  8 05:44:36 np0005475493 ceph-osd[81751]: osd.1 0 waiting for initial osdmap
Oct  8 05:44:36 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-osd-1[81747]: 2025-10-08T09:44:36.520+0000 7f2649115640 -1 osd.1 0 waiting for initial osdmap
Oct  8 05:44:36 np0005475493 ceph-osd[81751]: osd.1 9 crush map has features 288514050185494528, adjusting msgr requires for clients
Oct  8 05:44:36 np0005475493 ceph-osd[81751]: osd.1 9 crush map has features 288514050185494528 was 288232575208792577, adjusting msgr requires for mons
Oct  8 05:44:36 np0005475493 ceph-osd[81751]: osd.1 9 crush map has features 3314932999778484224, adjusting msgr requires for osds
Oct  8 05:44:36 np0005475493 ceph-osd[81751]: osd.1 9 check_osdmap_features require_osd_release unknown -> squid
Oct  8 05:44:36 np0005475493 systemd[1]: var-lib-containers-storage-overlay-53f461279a8cfc784b895aed3a69123d919356dfaea2d9c7864e5e88e059ec41-merged.mount: Deactivated successfully.
Oct  8 05:44:36 np0005475493 ceph-osd[81751]: osd.1 9 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Oct  8 05:44:36 np0005475493 ceph-osd[81751]: osd.1 9 set_numa_affinity not setting numa affinity
Oct  8 05:44:36 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-osd-1[81747]: 2025-10-08T09:44:36.548+0000 7f2643f2a640 -1 osd.1 9 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Oct  8 05:44:36 np0005475493 podman[82871]: 2025-10-08 09:44:36.551860212 +0000 UTC m=+1.113287076 container remove 1346d9a3abfaf549e34b120a8e6ae89e8d5cd2fb54c50c4a55300e4dc4170174 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_hopper, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True)
Oct  8 05:44:36 np0005475493 ceph-osd[81751]: osd.1 9 _collect_metadata loop3:  no unique device id for loop3: fallback method has no model nor serial no unique device path for loop3: no symlink to loop3 in /dev/disk/by-path
Oct  8 05:44:36 np0005475493 systemd[1]: libpod-conmon-1346d9a3abfaf549e34b120a8e6ae89e8d5cd2fb54c50c4a55300e4dc4170174.scope: Deactivated successfully.
Oct  8 05:44:36 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  8 05:44:36 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:44:36 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  8 05:44:36 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:44:36 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  8 05:44:36 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:44:36 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  8 05:44:36 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:44:36 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} v 0)
Oct  8 05:44:36 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Oct  8 05:44:36 np0005475493 ceph-mgr[73869]: [cephadm INFO root] Adjusting osd_memory_target on compute-0 to 127.8M
Oct  8 05:44:36 np0005475493 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-0 to 127.8M
Oct  8 05:44:36 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Oct  8 05:44:36 np0005475493 ceph-mgr[73869]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-0 to 134062899: error parsing value: Value '134062899' is below minimum 939524096
Oct  8 05:44:36 np0005475493 ceph-mgr[73869]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-0 to 134062899: error parsing value: Value '134062899' is below minimum 939524096
Oct  8 05:44:36 np0005475493 ceph-mgr[73869]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/2242474769; not ready for session (expect reconnect)
Oct  8 05:44:36 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Oct  8 05:44:36 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct  8 05:44:36 np0005475493 ceph-mgr[73869]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct  8 05:44:36 np0005475493 ceph-mon[73572]: from='osd.0 [v2:192.168.122.101:6800/645704721,v1:192.168.122.101:6801/645704721]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-1", "root=default"]}]': finished
Oct  8 05:44:36 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:44:36 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:44:36 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:44:36 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:44:36 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Oct  8 05:44:36 np0005475493 ceph-mon[73572]: Adjusting osd_memory_target on compute-1 to  5247M
Oct  8 05:44:36 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:44:36 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:44:36 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:44:36 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:44:36 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:44:36 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Oct  8 05:44:36 np0005475493 ceph-mgr[73869]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.101:6800/645704721; not ready for session (expect reconnect)
Oct  8 05:44:36 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Oct  8 05:44:36 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct  8 05:44:36 np0005475493 ceph-mgr[73869]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct  8 05:44:37 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e9 do_prune osdmap full prune enabled
Oct  8 05:44:37 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e9 encode_pending skipping prime_pg_temp; mapping job did not start
Oct  8 05:44:37 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e10 e10: 2 total, 1 up, 2 in
Oct  8 05:44:37 np0005475493 ceph-mon[73572]: log_channel(cluster) log [INF] : osd.1 [v2:192.168.122.100:6802/2242474769,v1:192.168.122.100:6803/2242474769] boot
Oct  8 05:44:37 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e10: 2 total, 1 up, 2 in
Oct  8 05:44:37 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Oct  8 05:44:37 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct  8 05:44:37 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Oct  8 05:44:37 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct  8 05:44:37 np0005475493 ceph-mgr[73869]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct  8 05:44:37 np0005475493 ceph-osd[81751]: osd.1 10 state: booting -> active
Oct  8 05:44:37 np0005475493 ceph-mon[73572]: OSD bench result of 9605.469339 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Oct  8 05:44:37 np0005475493 ceph-mon[73572]: Adjusting osd_memory_target on compute-0 to 127.8M
Oct  8 05:44:37 np0005475493 ceph-mon[73572]: Unable to set osd_memory_target on compute-0 to 134062899: error parsing value: Value '134062899' is below minimum 939524096
Oct  8 05:44:37 np0005475493 ceph-mon[73572]: osd.1 [v2:192.168.122.100:6802/2242474769,v1:192.168.122.100:6803/2242474769] boot
Oct  8 05:44:37 np0005475493 ceph-mgr[73869]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.101:6800/645704721; not ready for session (expect reconnect)
Oct  8 05:44:37 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Oct  8 05:44:37 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct  8 05:44:37 np0005475493 ceph-mgr[73869]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct  8 05:44:38 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v40: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Oct  8 05:44:38 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e10 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  8 05:44:38 np0005475493 ceph-mgr[73869]: [devicehealth INFO root] creating mgr pool
Oct  8 05:44:38 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true} v 0)
Oct  8 05:44:38 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
Oct  8 05:44:38 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e10 do_prune osdmap full prune enabled
Oct  8 05:44:38 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e10 encode_pending skipping prime_pg_temp; mapping job did not start
Oct  8 05:44:38 np0005475493 ceph-mgr[73869]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.101:6800/645704721; not ready for session (expect reconnect)
Oct  8 05:44:38 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Oct  8 05:44:38 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct  8 05:44:38 np0005475493 ceph-mgr[73869]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct  8 05:44:38 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Oct  8 05:44:38 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e11 e11: 2 total, 1 up, 2 in
Oct  8 05:44:38 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e11 crush map has features 3314933000852226048, adjusting msgr requires
Oct  8 05:44:38 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e11 crush map has features 288514051259236352, adjusting msgr requires
Oct  8 05:44:38 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e11 crush map has features 288514051259236352, adjusting msgr requires
Oct  8 05:44:38 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e11 crush map has features 288514051259236352, adjusting msgr requires
Oct  8 05:44:38 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e11: 2 total, 1 up, 2 in
Oct  8 05:44:38 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Oct  8 05:44:38 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct  8 05:44:38 np0005475493 ceph-mgr[73869]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct  8 05:44:38 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true} v 0)
Oct  8 05:44:38 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
Oct  8 05:44:38 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
Oct  8 05:44:38 np0005475493 ceph-osd[81751]: osd.1 11 crush map has features 288514051259236352, adjusting msgr requires for clients
Oct  8 05:44:38 np0005475493 ceph-osd[81751]: osd.1 11 crush map has features 288514051259236352 was 288514050185503233, adjusting msgr requires for mons
Oct  8 05:44:38 np0005475493 ceph-osd[81751]: osd.1 11 crush map has features 3314933000852226048, adjusting msgr requires for osds
Oct  8 05:44:39 np0005475493 ceph-mgr[73869]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.101:6800/645704721; not ready for session (expect reconnect)
Oct  8 05:44:39 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Oct  8 05:44:39 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct  8 05:44:39 np0005475493 ceph-mgr[73869]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct  8 05:44:39 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e11 do_prune osdmap full prune enabled
Oct  8 05:44:39 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Oct  8 05:44:39 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e12 e12: 2 total, 2 up, 2 in
Oct  8 05:44:39 np0005475493 ceph-mon[73572]: log_channel(cluster) log [INF] : osd.0 [v2:192.168.122.101:6800/645704721,v1:192.168.122.101:6801/645704721] boot
Oct  8 05:44:39 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e12: 2 total, 2 up, 2 in
Oct  8 05:44:39 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Oct  8 05:44:39 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct  8 05:44:39 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Oct  8 05:44:39 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
Oct  8 05:44:39 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Oct  8 05:44:39 np0005475493 ceph-mon[73572]: osd.0 [v2:192.168.122.101:6800/645704721,v1:192.168.122.101:6801/645704721] boot
Oct  8 05:44:40 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v43: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Oct  8 05:44:40 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e12 do_prune osdmap full prune enabled
Oct  8 05:44:40 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e13 e13: 2 total, 2 up, 2 in
Oct  8 05:44:40 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e13: 2 total, 2 up, 2 in
Oct  8 05:44:40 np0005475493 ceph-mon[73572]: OSD bench result of 10085.285206 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Oct  8 05:44:41 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e13 do_prune osdmap full prune enabled
Oct  8 05:44:41 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e14 e14: 2 total, 2 up, 2 in
Oct  8 05:44:41 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e14: 2 total, 2 up, 2 in
Oct  8 05:44:42 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v46: 1 pgs: 1 creating+peering; 0 B data, 853 MiB used, 39 GiB / 40 GiB avail
Oct  8 05:44:42 np0005475493 ceph-mgr[73869]: [devicehealth INFO root] creating main.db for devicehealth
Oct  8 05:44:42 np0005475493 ceph-mgr[73869]: [devicehealth INFO root] Check health
Oct  8 05:44:42 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Oct  8 05:44:42 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Oct  8 05:44:42 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Oct  8 05:44:42 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Oct  8 05:44:42 np0005475493 ceph-mon[73572]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Oct  8 05:44:42 np0005475493 ceph-mon[73572]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Oct  8 05:44:43 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : mgrmap e9: compute-0.ixicfj(active, since 80s)
Oct  8 05:44:43 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e14 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  8 05:44:44 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v47: 1 pgs: 1 creating+peering; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail
Oct  8 05:44:46 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v48: 1 pgs: 1 active+clean; 449 KiB data, 453 MiB used, 40 GiB / 40 GiB avail
Oct  8 05:44:48 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v49: 1 pgs: 1 active+clean; 449 KiB data, 453 MiB used, 40 GiB / 40 GiB avail
Oct  8 05:44:48 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e14 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  8 05:44:50 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v50: 1 pgs: 1 active+clean; 449 KiB data, 453 MiB used, 40 GiB / 40 GiB avail
Oct  8 05:44:52 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v51: 1 pgs: 1 active+clean; 449 KiB data, 453 MiB used, 40 GiB / 40 GiB avail
Oct  8 05:44:52 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 05:44:52 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 05:44:52 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 05:44:52 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 05:44:52 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 05:44:52 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 05:44:53 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e14 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  8 05:44:54 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v52: 1 pgs: 1 active+clean; 449 KiB data, 453 MiB used, 40 GiB / 40 GiB avail
Oct  8 05:44:54 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct  8 05:44:54 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:44:54 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct  8 05:44:54 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:44:54 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct  8 05:44:54 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:44:54 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct  8 05:44:54 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:44:54 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0)
Oct  8 05:44:54 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct  8 05:44:54 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  8 05:44:54 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  8 05:44:54 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct  8 05:44:54 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  8 05:44:54 np0005475493 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.conf
Oct  8 05:44:54 np0005475493 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.conf
Oct  8 05:44:55 np0005475493 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.conf
Oct  8 05:44:55 np0005475493 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.conf
Oct  8 05:44:55 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:44:55 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:44:55 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:44:55 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:44:55 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct  8 05:44:55 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  8 05:44:55 np0005475493 ceph-mon[73572]: Updating compute-2:/etc/ceph/ceph.conf
Oct  8 05:44:55 np0005475493 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Oct  8 05:44:55 np0005475493 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Oct  8 05:44:56 np0005475493 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.client.admin.keyring
Oct  8 05:44:56 np0005475493 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.client.admin.keyring
Oct  8 05:44:56 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v53: 1 pgs: 1 active+clean; 449 KiB data, 453 MiB used, 40 GiB / 40 GiB avail
Oct  8 05:44:56 np0005475493 ceph-mon[73572]: Updating compute-2:/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.conf
Oct  8 05:44:56 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct  8 05:44:56 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:44:56 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct  8 05:44:56 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:44:56 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct  8 05:44:56 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:44:56 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v54: 1 pgs: 1 active+clean; 449 KiB data, 453 MiB used, 40 GiB / 40 GiB avail
Oct  8 05:44:56 np0005475493 ceph-mgr[73869]: [progress INFO root] update: starting ev a1daac8b-8bd7-4296-8123-624af205803a (Updating mon deployment (+2 -> 3))
Oct  8 05:44:56 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Oct  8 05:44:56 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Oct  8 05:44:56 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Oct  8 05:44:56 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Oct  8 05:44:56 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  8 05:44:56 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  8 05:44:56 np0005475493 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Deploying daemon mon.compute-2 on compute-2
Oct  8 05:44:56 np0005475493 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Deploying daemon mon.compute-2 on compute-2
Oct  8 05:44:57 np0005475493 ceph-mon[73572]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Oct  8 05:44:57 np0005475493 ceph-mon[73572]: Updating compute-2:/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.client.admin.keyring
Oct  8 05:44:57 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:44:57 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:44:57 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:44:57 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Oct  8 05:44:57 np0005475493 ceph-mon[73572]: log_channel(cluster) log [INF] : Health check cleared: CEPHADM_APPLY_SPEC_FAIL (was: Failed to apply 2 service(s): mon,mgr)
Oct  8 05:44:57 np0005475493 ceph-mon[73572]: log_channel(cluster) log [INF] : Cluster is now healthy
Oct  8 05:44:58 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e14 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  8 05:44:58 np0005475493 ceph-mon[73572]: Deploying daemon mon.compute-2 on compute-2
Oct  8 05:44:58 np0005475493 ceph-mon[73572]: Health check cleared: CEPHADM_APPLY_SPEC_FAIL (was: Failed to apply 2 service(s): mon,mgr)
Oct  8 05:44:58 np0005475493 ceph-mon[73572]: Cluster is now healthy
Oct  8 05:44:58 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v55: 1 pgs: 1 active+clean; 449 KiB data, 453 MiB used, 40 GiB / 40 GiB avail
Oct  8 05:45:00 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct  8 05:45:00 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:45:00 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct  8 05:45:00 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1  adding peer [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] to list of hints
Oct  8 05:45:00 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1  adding peer [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] to list of hints
Oct  8 05:45:00 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:45:00 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Oct  8 05:45:00 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:45:00 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Oct  8 05:45:00 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Oct  8 05:45:00 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Oct  8 05:45:00 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Oct  8 05:45:00 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  8 05:45:00 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  8 05:45:00 np0005475493 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Deploying daemon mon.compute-1 on compute-1
Oct  8 05:45:00 np0005475493 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Deploying daemon mon.compute-1 on compute-1
Oct  8 05:45:00 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1  adding peer [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] to list of hints
Oct  8 05:45:00 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).monmap v1 adding/updating compute-2 at [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] to monitor cluster
Oct  8 05:45:00 np0005475493 ceph-mgr[73869]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/2171900707; not ready for session (expect reconnect)
Oct  8 05:45:00 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Oct  8 05:45:00 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Oct  8 05:45:00 np0005475493 ceph-mgr[73869]: mgr finish mon failed to return metadata for mon.compute-2: (2) No such file or directory
Oct  8 05:45:00 np0005475493 ceph-mon[73572]: mon.compute-0@0(probing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Oct  8 05:45:00 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Oct  8 05:45:00 np0005475493 ceph-mon[73572]: mon.compute-0@0(probing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Oct  8 05:45:00 np0005475493 ceph-mgr[73869]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Oct  8 05:45:00 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Oct  8 05:45:00 np0005475493 ceph-mon[73572]: log_channel(cluster) log [INF] : mon.compute-0 calling monitor election
Oct  8 05:45:00 np0005475493 ceph-mon[73572]: paxos.0).electionLogic(5) init, last seen epoch 5, mid-election, bumping
Oct  8 05:45:00 np0005475493 ceph-mon[73572]: mon.compute-0@0(electing) e2 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct  8 05:45:00 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v56: 1 pgs: 1 active+clean; 449 KiB data, 453 MiB used, 40 GiB / 40 GiB avail
Oct  8 05:45:01 np0005475493 python3[84104]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 787292cc-8154-50c4-9e00-e9be3e817149 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  8 05:45:01 np0005475493 podman[84106]: 2025-10-08 09:45:01.422628059 +0000 UTC m=+0.040906117 container create d8d6ede593244305d57375942d7a7efcd5f5eb4a4eba6fbcd2e3b67c65f4aa3d (image=quay.io/ceph/ceph:v19, name=priceless_kowalevski, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid)
Oct  8 05:45:01 np0005475493 systemd[1]: Started libpod-conmon-d8d6ede593244305d57375942d7a7efcd5f5eb4a4eba6fbcd2e3b67c65f4aa3d.scope.
Oct  8 05:45:01 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:45:01 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c87ba05174c88ee8e51dbf1b172aa4df1580c564c8706413888bea07e4dd4677/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 05:45:01 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c87ba05174c88ee8e51dbf1b172aa4df1580c564c8706413888bea07e4dd4677/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 05:45:01 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c87ba05174c88ee8e51dbf1b172aa4df1580c564c8706413888bea07e4dd4677/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct  8 05:45:01 np0005475493 podman[84106]: 2025-10-08 09:45:01.486174055 +0000 UTC m=+0.104452113 container init d8d6ede593244305d57375942d7a7efcd5f5eb4a4eba6fbcd2e3b67c65f4aa3d (image=quay.io/ceph/ceph:v19, name=priceless_kowalevski, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  8 05:45:01 np0005475493 podman[84106]: 2025-10-08 09:45:01.495120316 +0000 UTC m=+0.113398374 container start d8d6ede593244305d57375942d7a7efcd5f5eb4a4eba6fbcd2e3b67c65f4aa3d (image=quay.io/ceph/ceph:v19, name=priceless_kowalevski, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Oct  8 05:45:01 np0005475493 podman[84106]: 2025-10-08 09:45:01.404027728 +0000 UTC m=+0.022305816 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  8 05:45:01 np0005475493 podman[84106]: 2025-10-08 09:45:01.498989337 +0000 UTC m=+0.117267395 container attach d8d6ede593244305d57375942d7a7efcd5f5eb4a4eba6fbcd2e3b67c65f4aa3d (image=quay.io/ceph/ceph:v19, name=priceless_kowalevski, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct  8 05:45:01 np0005475493 ceph-mon[73572]: mon.compute-0@0(electing) e2 handle_auth_request failed to assign global_id
Oct  8 05:45:01 np0005475493 ceph-mgr[73869]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/2171900707; not ready for session (expect reconnect)
Oct  8 05:45:01 np0005475493 ceph-mon[73572]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Oct  8 05:45:01 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Oct  8 05:45:01 np0005475493 ceph-mgr[73869]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Oct  8 05:45:01 np0005475493 ceph-mon[73572]: mon.compute-0@0(electing) e2 handle_auth_request failed to assign global_id
Oct  8 05:45:02 np0005475493 ceph-mon[73572]: mon.compute-0@0(electing) e2 handle_auth_request failed to assign global_id
Oct  8 05:45:02 np0005475493 ceph-mon[73572]: mon.compute-0@0(electing) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct  8 05:45:02 np0005475493 ceph-mon[73572]: mon.compute-0@0(electing) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Oct  8 05:45:02 np0005475493 ceph-mon[73572]: mon.compute-0@0(electing) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Oct  8 05:45:02 np0005475493 ceph-mon[73572]: mon.compute-0@0(electing) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Oct  8 05:45:02 np0005475493 ceph-mgr[73869]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/3650877272; not ready for session (expect reconnect)
Oct  8 05:45:02 np0005475493 ceph-mon[73572]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Oct  8 05:45:02 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct  8 05:45:02 np0005475493 ceph-mgr[73869]: mgr finish mon failed to return metadata for mon.compute-1: (2) No such file or directory
Oct  8 05:45:02 np0005475493 ceph-mgr[73869]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/2171900707; not ready for session (expect reconnect)
Oct  8 05:45:02 np0005475493 ceph-mon[73572]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Oct  8 05:45:02 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Oct  8 05:45:02 np0005475493 ceph-mgr[73869]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Oct  8 05:45:02 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v57: 1 pgs: 1 active+clean; 449 KiB data, 453 MiB used, 40 GiB / 40 GiB avail
Oct  8 05:45:02 np0005475493 ceph-mon[73572]: mon.compute-0@0(electing) e2 handle_auth_request failed to assign global_id
Oct  8 05:45:03 np0005475493 ceph-mgr[73869]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/3650877272; not ready for session (expect reconnect)
Oct  8 05:45:03 np0005475493 ceph-mon[73572]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Oct  8 05:45:03 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct  8 05:45:03 np0005475493 ceph-mgr[73869]: mgr finish mon failed to return metadata for mon.compute-1: (2) No such file or directory
Oct  8 05:45:03 np0005475493 ceph-mgr[73869]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/2171900707; not ready for session (expect reconnect)
Oct  8 05:45:03 np0005475493 ceph-mon[73572]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Oct  8 05:45:03 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Oct  8 05:45:03 np0005475493 ceph-mgr[73869]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Oct  8 05:45:04 np0005475493 ceph-mon[73572]: mon.compute-0@0(electing) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Oct  8 05:45:04 np0005475493 ceph-mgr[73869]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/3650877272; not ready for session (expect reconnect)
Oct  8 05:45:04 np0005475493 ceph-mon[73572]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Oct  8 05:45:04 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct  8 05:45:04 np0005475493 ceph-mgr[73869]: mgr finish mon failed to return metadata for mon.compute-1: (2) No such file or directory
Oct  8 05:45:04 np0005475493 ceph-mon[73572]: mon.compute-0@0(electing) e2 handle_auth_request failed to assign global_id
Oct  8 05:45:04 np0005475493 ceph-mgr[73869]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/2171900707; not ready for session (expect reconnect)
Oct  8 05:45:04 np0005475493 ceph-mon[73572]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Oct  8 05:45:04 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Oct  8 05:45:04 np0005475493 ceph-mgr[73869]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Oct  8 05:45:04 np0005475493 ceph-mon[73572]: mon.compute-0@0(electing) e2 handle_auth_request failed to assign global_id
Oct  8 05:45:04 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v58: 1 pgs: 1 active+clean; 449 KiB data, 453 MiB used, 40 GiB / 40 GiB avail
Oct  8 05:45:05 np0005475493 ceph-mon[73572]: mon.compute-0@0(electing) e2 handle_auth_request failed to assign global_id
Oct  8 05:45:05 np0005475493 ceph-mgr[73869]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/3650877272; not ready for session (expect reconnect)
Oct  8 05:45:05 np0005475493 ceph-mon[73572]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Oct  8 05:45:05 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct  8 05:45:05 np0005475493 ceph-mgr[73869]: mgr finish mon failed to return metadata for mon.compute-1: (2) No such file or directory
Oct  8 05:45:05 np0005475493 ceph-mgr[73869]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/2171900707; not ready for session (expect reconnect)
Oct  8 05:45:05 np0005475493 ceph-mon[73572]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Oct  8 05:45:05 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Oct  8 05:45:05 np0005475493 ceph-mgr[73869]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Oct  8 05:45:05 np0005475493 ceph-mon[73572]: paxos.0).electionLogic(7) init, last seen epoch 7, mid-election, bumping
Oct  8 05:45:05 np0005475493 ceph-mon[73572]: mon.compute-0@0(electing) e2 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct  8 05:45:05 np0005475493 ceph-mon[73572]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0,compute-2 in quorum (ranks 0,1)
Oct  8 05:45:05 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : monmap epoch 2
Oct  8 05:45:05 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : fsid 787292cc-8154-50c4-9e00-e9be3e817149
Oct  8 05:45:05 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : last_changed 2025-10-08T09:45:00.661832+0000
Oct  8 05:45:05 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : created 2025-10-08T09:42:59.307631+0000
Oct  8 05:45:05 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : min_mon_release 19 (squid)
Oct  8 05:45:05 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : election_strategy: 1
Oct  8 05:45:05 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Oct  8 05:45:05 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : 1: [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] mon.compute-2
Oct  8 05:45:05 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e2 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct  8 05:45:05 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : fsmap 
Oct  8 05:45:05 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e14: 2 total, 2 up, 2 in
Oct  8 05:45:05 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : mgrmap e9: compute-0.ixicfj(active, since 103s)
Oct  8 05:45:05 np0005475493 ceph-mon[73572]: log_channel(cluster) log [INF] : overall HEALTH_OK
Oct  8 05:45:05 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:45:05 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct  8 05:45:05 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:45:05 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Oct  8 05:45:05 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:45:05 np0005475493 ceph-mgr[73869]: [progress INFO root] complete: finished ev a1daac8b-8bd7-4296-8123-624af205803a (Updating mon deployment (+2 -> 3))
Oct  8 05:45:05 np0005475493 ceph-mgr[73869]: [progress INFO root] Completed event a1daac8b-8bd7-4296-8123-624af205803a (Updating mon deployment (+2 -> 3)) in 9 seconds
Oct  8 05:45:05 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Oct  8 05:45:05 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:45:05 np0005475493 ceph-mgr[73869]: [progress INFO root] update: starting ev 4f6e05db-358c-451e-8e62-6c11a418e1af (Updating mgr deployment (+2 -> 3))
Oct  8 05:45:05 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-2.mtagwx", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Oct  8 05:45:05 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.mtagwx", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Oct  8 05:45:05 np0005475493 ceph-mon[73572]: Deploying daemon mon.compute-1 on compute-1
Oct  8 05:45:05 np0005475493 ceph-mon[73572]: mon.compute-0 calling monitor election
Oct  8 05:45:05 np0005475493 ceph-mon[73572]: mon.compute-2 calling monitor election
Oct  8 05:45:05 np0005475493 ceph-mon[73572]: mon.compute-0 is new leader, mons compute-0,compute-2 in quorum (ranks 0,1)
Oct  8 05:45:05 np0005475493 ceph-mon[73572]: overall HEALTH_OK
Oct  8 05:45:05 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:45:05 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:45:05 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:45:05 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.mtagwx", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Oct  8 05:45:05 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mgr services"} v 0)
Oct  8 05:45:05 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "mgr services"}]: dispatch
Oct  8 05:45:05 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  8 05:45:05 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  8 05:45:05 np0005475493 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Deploying daemon mgr.compute-2.mtagwx on compute-2
Oct  8 05:45:05 np0005475493 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Deploying daemon mgr.compute-2.mtagwx on compute-2
Oct  8 05:45:06 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Oct  8 05:45:06 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2539592381' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Oct  8 05:45:06 np0005475493 priceless_kowalevski[84122]: 
Oct  8 05:45:06 np0005475493 priceless_kowalevski[84122]: {"fsid":"787292cc-8154-50c4-9e00-e9be3e817149","health":{"status":"HEALTH_OK","checks":{},"mutes":[]},"election_epoch":10,"quorum":[0,1],"quorum_names":["compute-0","compute-2"],"quorum_age":0,"monmap":{"epoch":2,"min_mon_release_name":"squid","num_mons":2},"osdmap":{"epoch":14,"num_osds":2,"num_up_osds":2,"osd_up_since":1759916679,"num_in_osds":2,"osd_in_since":1759916660,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":1}],"num_pgs":1,"num_pools":1,"num_objects":2,"data_bytes":459280,"bytes_used":475242496,"bytes_avail":42466041856,"bytes_total":42941284352},"fsmap":{"epoch":1,"btime":"2025-10-08T09:43:01:374245+0000","by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":2,"modified":"2025-10-08T09:44:24.295017+0000","services":{}},"progress_events":{"a1daac8b-8bd7-4296-8123-624af205803a":{"message":"Updating mon deployment (+2 -> 3) (3s)\n      [==============..............] (remaining: 3s)","progress":0.5,"add_to_ceph_s":true}}}
Oct  8 05:45:06 np0005475493 systemd[1]: libpod-d8d6ede593244305d57375942d7a7efcd5f5eb4a4eba6fbcd2e3b67c65f4aa3d.scope: Deactivated successfully.
Oct  8 05:45:06 np0005475493 podman[84106]: 2025-10-08 09:45:06.320491953 +0000 UTC m=+4.938770051 container died d8d6ede593244305d57375942d7a7efcd5f5eb4a4eba6fbcd2e3b67c65f4aa3d (image=quay.io/ceph/ceph:v19, name=priceless_kowalevski, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Oct  8 05:45:06 np0005475493 systemd[1]: var-lib-containers-storage-overlay-c87ba05174c88ee8e51dbf1b172aa4df1580c564c8706413888bea07e4dd4677-merged.mount: Deactivated successfully.
Oct  8 05:45:06 np0005475493 podman[84106]: 2025-10-08 09:45:06.369090649 +0000 UTC m=+4.987368707 container remove d8d6ede593244305d57375942d7a7efcd5f5eb4a4eba6fbcd2e3b67c65f4aa3d (image=quay.io/ceph/ceph:v19, name=priceless_kowalevski, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True)
Oct  8 05:45:06 np0005475493 systemd[1]: libpod-conmon-d8d6ede593244305d57375942d7a7efcd5f5eb4a4eba6fbcd2e3b67c65f4aa3d.scope: Deactivated successfully.
Oct  8 05:45:06 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Oct  8 05:45:06 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).monmap v2 adding/updating compute-1 at [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to monitor cluster
Oct  8 05:45:06 np0005475493 ceph-mgr[73869]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/3650877272; not ready for session (expect reconnect)
Oct  8 05:45:06 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Oct  8 05:45:06 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct  8 05:45:06 np0005475493 ceph-mgr[73869]: mgr finish mon failed to return metadata for mon.compute-1: (2) No such file or directory
Oct  8 05:45:06 np0005475493 ceph-mon[73572]: mon.compute-0@0(probing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Oct  8 05:45:06 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Oct  8 05:45:06 np0005475493 ceph-mon[73572]: log_channel(cluster) log [INF] : mon.compute-0 calling monitor election
Oct  8 05:45:06 np0005475493 ceph-mon[73572]: paxos.0).electionLogic(10) init, last seen epoch 10
Oct  8 05:45:06 np0005475493 ceph-mon[73572]: mon.compute-0@0(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct  8 05:45:06 np0005475493 ceph-mon[73572]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Oct  8 05:45:06 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct  8 05:45:06 np0005475493 ceph-mon[73572]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Oct  8 05:45:06 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Oct  8 05:45:06 np0005475493 ceph-mgr[73869]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Oct  8 05:45:06 np0005475493 ceph-mgr[73869]: mgr.server handle_report got status from non-daemon mon.compute-2
Oct  8 05:45:06 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:45:06.663+0000 7fa814663640 -1 mgr.server handle_report got status from non-daemon mon.compute-2
Oct  8 05:45:06 np0005475493 python3[84186]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 787292cc-8154-50c4-9e00-e9be3e817149 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create vms  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  8 05:45:06 np0005475493 podman[84187]: 2025-10-08 09:45:06.928143736 +0000 UTC m=+0.042438670 container create a515da464a8fda7282d9faa004e04b6c68da658e287a958dca8b131776174cdc (image=quay.io/ceph/ceph:v19, name=intelligent_faraday, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  8 05:45:06 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v59: 1 pgs: 1 active+clean; 449 KiB data, 453 MiB used, 40 GiB / 40 GiB avail
Oct  8 05:45:06 np0005475493 systemd[1]: Started libpod-conmon-a515da464a8fda7282d9faa004e04b6c68da658e287a958dca8b131776174cdc.scope.
Oct  8 05:45:06 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:45:06 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/188c5be260ce5df253dd263ebb6d0582559fba0978701d3d9e349877dfd2c1d0/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 05:45:06 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/188c5be260ce5df253dd263ebb6d0582559fba0978701d3d9e349877dfd2c1d0/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 05:45:07 np0005475493 podman[84187]: 2025-10-08 09:45:06.90748977 +0000 UTC m=+0.021784734 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  8 05:45:07 np0005475493 podman[84187]: 2025-10-08 09:45:07.007103811 +0000 UTC m=+0.121398775 container init a515da464a8fda7282d9faa004e04b6c68da658e287a958dca8b131776174cdc (image=quay.io/ceph/ceph:v19, name=intelligent_faraday, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Oct  8 05:45:07 np0005475493 podman[84187]: 2025-10-08 09:45:07.01262414 +0000 UTC m=+0.126919074 container start a515da464a8fda7282d9faa004e04b6c68da658e287a958dca8b131776174cdc (image=quay.io/ceph/ceph:v19, name=intelligent_faraday, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True)
Oct  8 05:45:07 np0005475493 podman[84187]: 2025-10-08 09:45:07.017830666 +0000 UTC m=+0.132125600 container attach a515da464a8fda7282d9faa004e04b6c68da658e287a958dca8b131776174cdc (image=quay.io/ceph/ceph:v19, name=intelligent_faraday, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Oct  8 05:45:07 np0005475493 ceph-mon[73572]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Oct  8 05:45:07 np0005475493 ceph-mon[73572]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Oct  8 05:45:07 np0005475493 ceph-mgr[73869]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/3650877272; not ready for session (expect reconnect)
Oct  8 05:45:07 np0005475493 ceph-mon[73572]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Oct  8 05:45:07 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct  8 05:45:07 np0005475493 ceph-mgr[73869]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Oct  8 05:45:07 np0005475493 ceph-mon[73572]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Oct  8 05:45:07 np0005475493 ceph-mon[73572]: mon.compute-0@0(electing) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct  8 05:45:07 np0005475493 ceph-mgr[73869]: [progress INFO root] Writing back 3 completed events
Oct  8 05:45:07 np0005475493 ceph-mon[73572]: mon.compute-0@0(electing) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct  8 05:45:07 np0005475493 ceph-mon[73572]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Oct  8 05:45:07 np0005475493 ceph-mon[73572]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Oct  8 05:45:08 np0005475493 ceph-mon[73572]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Oct  8 05:45:08 np0005475493 ceph-mon[73572]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Oct  8 05:45:08 np0005475493 ceph-mgr[73869]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/3650877272; not ready for session (expect reconnect)
Oct  8 05:45:08 np0005475493 ceph-mon[73572]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Oct  8 05:45:08 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct  8 05:45:08 np0005475493 ceph-mgr[73869]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Oct  8 05:45:08 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v60: 1 pgs: 1 active+clean; 449 KiB data, 453 MiB used, 40 GiB / 40 GiB avail
Oct  8 05:45:08 np0005475493 ceph-mon[73572]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Oct  8 05:45:09 np0005475493 ceph-mgr[73869]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/3650877272; not ready for session (expect reconnect)
Oct  8 05:45:09 np0005475493 ceph-mon[73572]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Oct  8 05:45:09 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct  8 05:45:09 np0005475493 ceph-mgr[73869]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Oct  8 05:45:10 np0005475493 ceph-mon[73572]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Oct  8 05:45:10 np0005475493 ceph-mon[73572]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Oct  8 05:45:10 np0005475493 ceph-mgr[73869]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/3650877272; not ready for session (expect reconnect)
Oct  8 05:45:10 np0005475493 ceph-mon[73572]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Oct  8 05:45:10 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct  8 05:45:10 np0005475493 ceph-mgr[73869]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Oct  8 05:45:10 np0005475493 ceph-mon[73572]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Oct  8 05:45:10 np0005475493 ceph-mon[73572]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Oct  8 05:45:10 np0005475493 ceph-mon[73572]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Oct  8 05:45:10 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v61: 1 pgs: 1 active+clean; 449 KiB data, 453 MiB used, 40 GiB / 40 GiB avail
Oct  8 05:45:11 np0005475493 ceph-mon[73572]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Oct  8 05:45:11 np0005475493 ceph-mon[73572]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Oct  8 05:45:11 np0005475493 ceph-mgr[73869]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/3650877272; not ready for session (expect reconnect)
Oct  8 05:45:11 np0005475493 ceph-mon[73572]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Oct  8 05:45:11 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct  8 05:45:11 np0005475493 ceph-mgr[73869]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Oct  8 05:45:11 np0005475493 ceph-mon[73572]: paxos.0).electionLogic(11) init, last seen epoch 11, mid-election, bumping
Oct  8 05:45:11 np0005475493 ceph-mon[73572]: mon.compute-0@0(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct  8 05:45:11 np0005475493 ceph-mon[73572]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Oct  8 05:45:11 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : monmap epoch 3
Oct  8 05:45:11 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : fsid 787292cc-8154-50c4-9e00-e9be3e817149
Oct  8 05:45:11 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : last_changed 2025-10-08T09:45:06.514939+0000
Oct  8 05:45:11 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : created 2025-10-08T09:42:59.307631+0000
Oct  8 05:45:11 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : min_mon_release 19 (squid)
Oct  8 05:45:11 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : election_strategy: 1
Oct  8 05:45:11 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Oct  8 05:45:11 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : 1: [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] mon.compute-2
Oct  8 05:45:11 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : 2: [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] mon.compute-1
Oct  8 05:45:11 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct  8 05:45:11 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : fsmap 
Oct  8 05:45:11 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e14: 2 total, 2 up, 2 in
Oct  8 05:45:11 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : mgrmap e9: compute-0.ixicfj(active, since 109s)
Oct  8 05:45:11 np0005475493 ceph-mon[73572]: log_channel(cluster) log [INF] : overall HEALTH_OK
Oct  8 05:45:11 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:45:11 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct  8 05:45:11 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:45:11 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:45:11 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Oct  8 05:45:11 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:45:11 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-1.swlvov", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Oct  8 05:45:11 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.swlvov", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Oct  8 05:45:11 np0005475493 ceph-mon[73572]: mon.compute-0 calling monitor election
Oct  8 05:45:11 np0005475493 ceph-mon[73572]: mon.compute-2 calling monitor election
Oct  8 05:45:11 np0005475493 ceph-mon[73572]: mon.compute-1 calling monitor election
Oct  8 05:45:11 np0005475493 ceph-mon[73572]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Oct  8 05:45:11 np0005475493 ceph-mon[73572]: overall HEALTH_OK
Oct  8 05:45:11 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:45:11 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:45:11 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:45:11 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.swlvov", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Oct  8 05:45:11 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0)
Oct  8 05:45:11 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "mgr services"}]: dispatch
Oct  8 05:45:11 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  8 05:45:11 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  8 05:45:11 np0005475493 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Deploying daemon mgr.compute-1.swlvov on compute-1
Oct  8 05:45:11 np0005475493 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Deploying daemon mgr.compute-1.swlvov on compute-1
Oct  8 05:45:12 np0005475493 ceph-mgr[73869]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/3650877272; not ready for session (expect reconnect)
Oct  8 05:45:12 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Oct  8 05:45:12 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct  8 05:45:12 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:45:12 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.swlvov", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Oct  8 05:45:12 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.swlvov", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Oct  8 05:45:12 np0005475493 ceph-mon[73572]: Deploying daemon mgr.compute-1.swlvov on compute-1
Oct  8 05:45:12 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v62: 1 pgs: 1 active+clean; 449 KiB data, 453 MiB used, 40 GiB / 40 GiB avail
Oct  8 05:45:13 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct  8 05:45:13 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:45:13 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct  8 05:45:13 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:45:13 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Oct  8 05:45:13 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:45:13 np0005475493 ceph-mgr[73869]: [progress INFO root] complete: finished ev 4f6e05db-358c-451e-8e62-6c11a418e1af (Updating mgr deployment (+2 -> 3))
Oct  8 05:45:13 np0005475493 ceph-mgr[73869]: [progress INFO root] Completed event 4f6e05db-358c-451e-8e62-6c11a418e1af (Updating mgr deployment (+2 -> 3)) in 8 seconds
Oct  8 05:45:13 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Oct  8 05:45:13 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:45:13 np0005475493 ceph-mgr[73869]: [progress INFO root] update: starting ev 6b090588-9c5d-45b5-8b61-76caf7676272 (Updating crash deployment (+1 -> 3))
Oct  8 05:45:13 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Oct  8 05:45:13 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Oct  8 05:45:13 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Oct  8 05:45:13 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  8 05:45:13 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  8 05:45:13 np0005475493 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Deploying daemon crash.compute-2 on compute-2
Oct  8 05:45:13 np0005475493 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Deploying daemon crash.compute-2 on compute-2
Oct  8 05:45:13 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e14 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  8 05:45:13 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Oct  8 05:45:13 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/413234013' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct  8 05:45:13 np0005475493 ceph-mgr[73869]: mgr.server handle_report got status from non-daemon mon.compute-1
Oct  8 05:45:13 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:45:13.519+0000 7fa814663640 -1 mgr.server handle_report got status from non-daemon mon.compute-1
Oct  8 05:45:14 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:45:14 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:45:14 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:45:14 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:45:14 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Oct  8 05:45:14 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Oct  8 05:45:14 np0005475493 ceph-mon[73572]: Deploying daemon crash.compute-2 on compute-2
Oct  8 05:45:14 np0005475493 ceph-mon[73572]: from='client.? 192.168.122.100:0/413234013' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct  8 05:45:14 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e14 do_prune osdmap full prune enabled
Oct  8 05:45:14 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/413234013' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct  8 05:45:14 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e15 e15: 2 total, 2 up, 2 in
Oct  8 05:45:14 np0005475493 intelligent_faraday[84203]: pool 'vms' created
Oct  8 05:45:14 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e15: 2 total, 2 up, 2 in
Oct  8 05:45:14 np0005475493 systemd[1]: libpod-a515da464a8fda7282d9faa004e04b6c68da658e287a958dca8b131776174cdc.scope: Deactivated successfully.
Oct  8 05:45:14 np0005475493 podman[84187]: 2025-10-08 09:45:14.349689513 +0000 UTC m=+7.463984447 container died a515da464a8fda7282d9faa004e04b6c68da658e287a958dca8b131776174cdc (image=quay.io/ceph/ceph:v19, name=intelligent_faraday, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Oct  8 05:45:14 np0005475493 systemd[1]: var-lib-containers-storage-overlay-188c5be260ce5df253dd263ebb6d0582559fba0978701d3d9e349877dfd2c1d0-merged.mount: Deactivated successfully.
Oct  8 05:45:14 np0005475493 podman[84187]: 2025-10-08 09:45:14.421599496 +0000 UTC m=+7.535894430 container remove a515da464a8fda7282d9faa004e04b6c68da658e287a958dca8b131776174cdc (image=quay.io/ceph/ceph:v19, name=intelligent_faraday, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct  8 05:45:14 np0005475493 systemd[1]: libpod-conmon-a515da464a8fda7282d9faa004e04b6c68da658e287a958dca8b131776174cdc.scope: Deactivated successfully.
Oct  8 05:45:14 np0005475493 python3[84268]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 787292cc-8154-50c4-9e00-e9be3e817149 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create volumes  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  8 05:45:14 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v64: 2 pgs: 1 unknown, 1 active+clean; 449 KiB data, 453 MiB used, 40 GiB / 40 GiB avail
Oct  8 05:45:14 np0005475493 podman[84269]: 2025-10-08 09:45:14.865870482 +0000 UTC m=+0.020030983 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  8 05:45:15 np0005475493 podman[84269]: 2025-10-08 09:45:15.076258289 +0000 UTC m=+0.230418740 container create aa39ca2ecf9c4eb7bfb412e1021924fb9fe234599c208db56d6bf6f32dc7beed (image=quay.io/ceph/ceph:v19, name=nice_allen, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Oct  8 05:45:15 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct  8 05:45:15 np0005475493 systemd[1]: Started libpod-conmon-aa39ca2ecf9c4eb7bfb412e1021924fb9fe234599c208db56d6bf6f32dc7beed.scope.
Oct  8 05:45:15 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:45:15 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10b11d5a3a564521372d79f314c15ce41051c122e1225ad50d10c1acbe490c9e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 05:45:15 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10b11d5a3a564521372d79f314c15ce41051c122e1225ad50d10c1acbe490c9e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 05:45:15 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:45:15 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct  8 05:45:15 np0005475493 podman[84269]: 2025-10-08 09:45:15.303417491 +0000 UTC m=+0.457577962 container init aa39ca2ecf9c4eb7bfb412e1021924fb9fe234599c208db56d6bf6f32dc7beed (image=quay.io/ceph/ceph:v19, name=nice_allen, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  8 05:45:15 np0005475493 podman[84269]: 2025-10-08 09:45:15.313045409 +0000 UTC m=+0.467205860 container start aa39ca2ecf9c4eb7bfb412e1021924fb9fe234599c208db56d6bf6f32dc7beed (image=quay.io/ceph/ceph:v19, name=nice_allen, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  8 05:45:15 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e15 do_prune osdmap full prune enabled
Oct  8 05:45:15 np0005475493 ceph-mon[73572]: log_channel(cluster) log [WRN] : Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct  8 05:45:15 np0005475493 podman[84269]: 2025-10-08 09:45:15.347606653 +0000 UTC m=+0.501767104 container attach aa39ca2ecf9c4eb7bfb412e1021924fb9fe234599c208db56d6bf6f32dc7beed (image=quay.io/ceph/ceph:v19, name=nice_allen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  8 05:45:15 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:45:15 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Oct  8 05:45:15 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e16 e16: 2 total, 2 up, 2 in
Oct  8 05:45:15 np0005475493 ceph-mon[73572]: from='client.? 192.168.122.100:0/413234013' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct  8 05:45:15 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:45:15 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e16: 2 total, 2 up, 2 in
Oct  8 05:45:15 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:45:15 np0005475493 ceph-mgr[73869]: [progress INFO root] complete: finished ev 6b090588-9c5d-45b5-8b61-76caf7676272 (Updating crash deployment (+1 -> 3))
Oct  8 05:45:15 np0005475493 ceph-mgr[73869]: [progress INFO root] Completed event 6b090588-9c5d-45b5-8b61-76caf7676272 (Updating crash deployment (+1 -> 3)) in 2 seconds
Oct  8 05:45:15 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Oct  8 05:45:15 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:45:15 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct  8 05:45:15 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  8 05:45:15 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct  8 05:45:15 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  8 05:45:15 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  8 05:45:15 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  8 05:45:15 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct  8 05:45:15 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  8 05:45:15 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  8 05:45:15 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  8 05:45:15 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Oct  8 05:45:15 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2222990356' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct  8 05:45:16 np0005475493 podman[84401]: 2025-10-08 09:45:16.009558569 +0000 UTC m=+0.019349855 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 05:45:16 np0005475493 podman[84401]: 2025-10-08 09:45:16.161182766 +0000 UTC m=+0.170974022 container create fca70e323dd8fcde7b610d8c370eb1a7947d67c1fb79eb5ff9d73cc03efe1737 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_germain, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  8 05:45:16 np0005475493 systemd[1]: Started libpod-conmon-fca70e323dd8fcde7b610d8c370eb1a7947d67c1fb79eb5ff9d73cc03efe1737.scope.
Oct  8 05:45:16 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:45:16 np0005475493 podman[84401]: 2025-10-08 09:45:16.39802007 +0000 UTC m=+0.407811346 container init fca70e323dd8fcde7b610d8c370eb1a7947d67c1fb79eb5ff9d73cc03efe1737 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_germain, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Oct  8 05:45:16 np0005475493 podman[84401]: 2025-10-08 09:45:16.403534528 +0000 UTC m=+0.413325784 container start fca70e323dd8fcde7b610d8c370eb1a7947d67c1fb79eb5ff9d73cc03efe1737 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_germain, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct  8 05:45:16 np0005475493 competent_germain[84418]: 167 167
Oct  8 05:45:16 np0005475493 systemd[1]: libpod-fca70e323dd8fcde7b610d8c370eb1a7947d67c1fb79eb5ff9d73cc03efe1737.scope: Deactivated successfully.
Oct  8 05:45:16 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e16 do_prune osdmap full prune enabled
Oct  8 05:45:16 np0005475493 podman[84401]: 2025-10-08 09:45:16.470258176 +0000 UTC m=+0.480049532 container attach fca70e323dd8fcde7b610d8c370eb1a7947d67c1fb79eb5ff9d73cc03efe1737 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_germain, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  8 05:45:16 np0005475493 podman[84401]: 2025-10-08 09:45:16.470749577 +0000 UTC m=+0.480540873 container died fca70e323dd8fcde7b610d8c370eb1a7947d67c1fb79eb5ff9d73cc03efe1737 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_germain, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  8 05:45:16 np0005475493 ceph-mon[73572]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct  8 05:45:16 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:45:16 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:45:16 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:45:16 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  8 05:45:16 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  8 05:45:16 np0005475493 ceph-mon[73572]: from='client.? 192.168.122.100:0/2222990356' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct  8 05:45:16 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2222990356' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct  8 05:45:16 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e17 e17: 2 total, 2 up, 2 in
Oct  8 05:45:16 np0005475493 nice_allen[84285]: pool 'volumes' created
Oct  8 05:45:16 np0005475493 systemd[1]: var-lib-containers-storage-overlay-d4aaf7c4f3b079466f23c4f77442fb42509a5d5497f41e1258605da179678cf1-merged.mount: Deactivated successfully.
Oct  8 05:45:16 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e17: 2 total, 2 up, 2 in
Oct  8 05:45:16 np0005475493 podman[84401]: 2025-10-08 09:45:16.552129402 +0000 UTC m=+0.561920658 container remove fca70e323dd8fcde7b610d8c370eb1a7947d67c1fb79eb5ff9d73cc03efe1737 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_germain, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  8 05:45:16 np0005475493 systemd[1]: libpod-aa39ca2ecf9c4eb7bfb412e1021924fb9fe234599c208db56d6bf6f32dc7beed.scope: Deactivated successfully.
Oct  8 05:45:16 np0005475493 podman[84269]: 2025-10-08 09:45:16.554650287 +0000 UTC m=+1.708810738 container died aa39ca2ecf9c4eb7bfb412e1021924fb9fe234599c208db56d6bf6f32dc7beed (image=quay.io/ceph/ceph:v19, name=nice_allen, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  8 05:45:16 np0005475493 systemd[1]: var-lib-containers-storage-overlay-10b11d5a3a564521372d79f314c15ce41051c122e1225ad50d10c1acbe490c9e-merged.mount: Deactivated successfully.
Oct  8 05:45:16 np0005475493 ceph-mgr[73869]: [progress INFO root] Writing back 5 completed events
Oct  8 05:45:16 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct  8 05:45:16 np0005475493 podman[84269]: 2025-10-08 09:45:16.602220119 +0000 UTC m=+1.756380570 container remove aa39ca2ecf9c4eb7bfb412e1021924fb9fe234599c208db56d6bf6f32dc7beed (image=quay.io/ceph/ceph:v19, name=nice_allen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  8 05:45:16 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:45:16 np0005475493 systemd[1]: libpod-conmon-aa39ca2ecf9c4eb7bfb412e1021924fb9fe234599c208db56d6bf6f32dc7beed.scope: Deactivated successfully.
Oct  8 05:45:16 np0005475493 systemd[1]: libpod-conmon-fca70e323dd8fcde7b610d8c370eb1a7947d67c1fb79eb5ff9d73cc03efe1737.scope: Deactivated successfully.
Oct  8 05:45:16 np0005475493 podman[84454]: 2025-10-08 09:45:16.716115273 +0000 UTC m=+0.044990977 container create 80edc1149d7f2a8fe8271e6ac9881d5f0aa9a1f88ca2384e96c71ee8b48369bd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_bell, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  8 05:45:16 np0005475493 systemd[1]: Started libpod-conmon-80edc1149d7f2a8fe8271e6ac9881d5f0aa9a1f88ca2384e96c71ee8b48369bd.scope.
Oct  8 05:45:16 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:45:16 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a191d926bff0b2f121f93f8cb007c3962163067894e7c28bf34a53fb85f286c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  8 05:45:16 np0005475493 podman[84454]: 2025-10-08 09:45:16.692250194 +0000 UTC m=+0.021125918 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 05:45:16 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a191d926bff0b2f121f93f8cb007c3962163067894e7c28bf34a53fb85f286c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 05:45:16 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a191d926bff0b2f121f93f8cb007c3962163067894e7c28bf34a53fb85f286c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 05:45:16 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a191d926bff0b2f121f93f8cb007c3962163067894e7c28bf34a53fb85f286c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  8 05:45:16 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a191d926bff0b2f121f93f8cb007c3962163067894e7c28bf34a53fb85f286c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  8 05:45:16 np0005475493 podman[84454]: 2025-10-08 09:45:16.805885176 +0000 UTC m=+0.134760890 container init 80edc1149d7f2a8fe8271e6ac9881d5f0aa9a1f88ca2384e96c71ee8b48369bd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_bell, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  8 05:45:16 np0005475493 podman[84454]: 2025-10-08 09:45:16.812849066 +0000 UTC m=+0.141724770 container start 80edc1149d7f2a8fe8271e6ac9881d5f0aa9a1f88ca2384e96c71ee8b48369bd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_bell, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  8 05:45:16 np0005475493 podman[84454]: 2025-10-08 09:45:16.821605378 +0000 UTC m=+0.150481102 container attach 80edc1149d7f2a8fe8271e6ac9881d5f0aa9a1f88ca2384e96c71ee8b48369bd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_bell, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct  8 05:45:16 np0005475493 python3[84493]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 787292cc-8154-50c4-9e00-e9be3e817149 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create backups  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  8 05:45:16 np0005475493 podman[84501]: 2025-10-08 09:45:16.93979109 +0000 UTC m=+0.045224057 container create 4c9b9b6e8de74b49d4551b1524bd6330b91bb45668d77b8384c7a0090556c8d1 (image=quay.io/ceph/ceph:v19, name=wonderful_feynman, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  8 05:45:16 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v67: 3 pgs: 1 unknown, 2 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Oct  8 05:45:16 np0005475493 systemd[1]: Started libpod-conmon-4c9b9b6e8de74b49d4551b1524bd6330b91bb45668d77b8384c7a0090556c8d1.scope.
Oct  8 05:45:17 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:45:17 np0005475493 podman[84501]: 2025-10-08 09:45:16.91878986 +0000 UTC m=+0.024222807 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  8 05:45:17 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92f8bdd67e6680adba3e4a589b635fbafe3539cffc1acbb6e355174d2d399758/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 05:45:17 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92f8bdd67e6680adba3e4a589b635fbafe3539cffc1acbb6e355174d2d399758/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 05:45:17 np0005475493 podman[84501]: 2025-10-08 09:45:17.026458275 +0000 UTC m=+0.131891202 container init 4c9b9b6e8de74b49d4551b1524bd6330b91bb45668d77b8384c7a0090556c8d1 (image=quay.io/ceph/ceph:v19, name=wonderful_feynman, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default)
Oct  8 05:45:17 np0005475493 podman[84501]: 2025-10-08 09:45:17.036508402 +0000 UTC m=+0.141941329 container start 4c9b9b6e8de74b49d4551b1524bd6330b91bb45668d77b8384c7a0090556c8d1 (image=quay.io/ceph/ceph:v19, name=wonderful_feynman, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct  8 05:45:17 np0005475493 podman[84501]: 2025-10-08 09:45:17.041962058 +0000 UTC m=+0.147395005 container attach 4c9b9b6e8de74b49d4551b1524bd6330b91bb45668d77b8384c7a0090556c8d1 (image=quay.io/ceph/ceph:v19, name=wonderful_feynman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  8 05:45:17 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 17 pg[3.0( empty local-lis/les=0/0 n=0 ec=17/17 lis/c=0/0 les/c/f=0/0/0 sis=17) [1] r=0 lpr=17 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:45:17 np0005475493 boring_bell[84494]: --> passed data devices: 0 physical, 1 LVM
Oct  8 05:45:17 np0005475493 boring_bell[84494]: --> All data devices are unavailable
Oct  8 05:45:17 np0005475493 systemd[1]: libpod-80edc1149d7f2a8fe8271e6ac9881d5f0aa9a1f88ca2384e96c71ee8b48369bd.scope: Deactivated successfully.
Oct  8 05:45:17 np0005475493 podman[84454]: 2025-10-08 09:45:17.179386838 +0000 UTC m=+0.508262542 container died 80edc1149d7f2a8fe8271e6ac9881d5f0aa9a1f88ca2384e96c71ee8b48369bd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_bell, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325)
Oct  8 05:45:17 np0005475493 systemd[1]: var-lib-containers-storage-overlay-7a191d926bff0b2f121f93f8cb007c3962163067894e7c28bf34a53fb85f286c-merged.mount: Deactivated successfully.
Oct  8 05:45:17 np0005475493 podman[84454]: 2025-10-08 09:45:17.248811277 +0000 UTC m=+0.577686981 container remove 80edc1149d7f2a8fe8271e6ac9881d5f0aa9a1f88ca2384e96c71ee8b48369bd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_bell, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct  8 05:45:17 np0005475493 systemd[1]: libpod-conmon-80edc1149d7f2a8fe8271e6ac9881d5f0aa9a1f88ca2384e96c71ee8b48369bd.scope: Deactivated successfully.
Oct  8 05:45:17 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Oct  8 05:45:17 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3583095774' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct  8 05:45:17 np0005475493 ceph-mon[73572]: from='client.? 192.168.122.100:0/2222990356' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct  8 05:45:17 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:45:17 np0005475493 ceph-mon[73572]: from='client.? 192.168.122.100:0/3583095774' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct  8 05:45:17 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd new", "uuid": "ef552a3d-427a-4a30-bf26-d668cd69b923"} v 0)
Oct  8 05:45:17 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "ef552a3d-427a-4a30-bf26-d668cd69b923"}]: dispatch
Oct  8 05:45:17 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e17 do_prune osdmap full prune enabled
Oct  8 05:45:17 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3583095774' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct  8 05:45:17 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "ef552a3d-427a-4a30-bf26-d668cd69b923"}]': finished
Oct  8 05:45:17 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e18 e18: 3 total, 2 up, 3 in
Oct  8 05:45:17 np0005475493 wonderful_feynman[84517]: pool 'backups' created
Oct  8 05:45:17 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e18: 3 total, 2 up, 3 in
Oct  8 05:45:17 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Oct  8 05:45:17 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct  8 05:45:17 np0005475493 ceph-mgr[73869]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct  8 05:45:17 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 18 pg[4.0( empty local-lis/les=0/0 n=0 ec=18/18 lis/c=0/0 les/c/f=0/0/0 sis=18) [1] r=0 lpr=18 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:45:17 np0005475493 systemd[1]: libpod-4c9b9b6e8de74b49d4551b1524bd6330b91bb45668d77b8384c7a0090556c8d1.scope: Deactivated successfully.
Oct  8 05:45:17 np0005475493 podman[84501]: 2025-10-08 09:45:17.623403134 +0000 UTC m=+0.728836051 container died 4c9b9b6e8de74b49d4551b1524bd6330b91bb45668d77b8384c7a0090556c8d1 (image=quay.io/ceph/ceph:v19, name=wonderful_feynman, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  8 05:45:17 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 18 pg[3.0( empty local-lis/les=17/18 n=0 ec=17/17 lis/c=0/0 les/c/f=0/0/0 sis=17) [1] r=0 lpr=17 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:45:17 np0005475493 systemd[1]: var-lib-containers-storage-overlay-92f8bdd67e6680adba3e4a589b635fbafe3539cffc1acbb6e355174d2d399758-merged.mount: Deactivated successfully.
Oct  8 05:45:17 np0005475493 podman[84501]: 2025-10-08 09:45:17.697099101 +0000 UTC m=+0.802532038 container remove 4c9b9b6e8de74b49d4551b1524bd6330b91bb45668d77b8384c7a0090556c8d1 (image=quay.io/ceph/ceph:v19, name=wonderful_feynman, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  8 05:45:17 np0005475493 systemd[1]: libpod-conmon-4c9b9b6e8de74b49d4551b1524bd6330b91bb45668d77b8384c7a0090556c8d1.scope: Deactivated successfully.
Oct  8 05:45:17 np0005475493 podman[84664]: 2025-10-08 09:45:17.844091747 +0000 UTC m=+0.042972473 container create 264b7827657d6019b9e89715eb5925fdf90c79300471ba4ddf97f7ef9f880c10 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_agnesi, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  8 05:45:17 np0005475493 systemd[1]: Started libpod-conmon-264b7827657d6019b9e89715eb5925fdf90c79300471ba4ddf97f7ef9f880c10.scope.
Oct  8 05:45:17 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:45:17 np0005475493 podman[84664]: 2025-10-08 09:45:17.914364762 +0000 UTC m=+0.113245518 container init 264b7827657d6019b9e89715eb5925fdf90c79300471ba4ddf97f7ef9f880c10 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_agnesi, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Oct  8 05:45:17 np0005475493 podman[84664]: 2025-10-08 09:45:17.922630334 +0000 UTC m=+0.121511060 container start 264b7827657d6019b9e89715eb5925fdf90c79300471ba4ddf97f7ef9f880c10 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_agnesi, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Oct  8 05:45:17 np0005475493 podman[84664]: 2025-10-08 09:45:17.827820542 +0000 UTC m=+0.026701288 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 05:45:17 np0005475493 podman[84664]: 2025-10-08 09:45:17.925768845 +0000 UTC m=+0.124649601 container attach 264b7827657d6019b9e89715eb5925fdf90c79300471ba4ddf97f7ef9f880c10 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_agnesi, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Oct  8 05:45:17 np0005475493 intelligent_agnesi[84705]: 167 167
Oct  8 05:45:17 np0005475493 systemd[1]: libpod-264b7827657d6019b9e89715eb5925fdf90c79300471ba4ddf97f7ef9f880c10.scope: Deactivated successfully.
Oct  8 05:45:17 np0005475493 podman[84664]: 2025-10-08 09:45:17.92901997 +0000 UTC m=+0.127900696 container died 264b7827657d6019b9e89715eb5925fdf90c79300471ba4ddf97f7ef9f880c10 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_agnesi, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  8 05:45:17 np0005475493 systemd[1]: var-lib-containers-storage-overlay-ef8ab4c5fb1cd573d170484e76da240c2a69fa7dd6c7e1b6a3420a25aa6fc336-merged.mount: Deactivated successfully.
Oct  8 05:45:17 np0005475493 podman[84664]: 2025-10-08 09:45:17.962882974 +0000 UTC m=+0.161763700 container remove 264b7827657d6019b9e89715eb5925fdf90c79300471ba4ddf97f7ef9f880c10 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_agnesi, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  8 05:45:17 np0005475493 systemd[1]: libpod-conmon-264b7827657d6019b9e89715eb5925fdf90c79300471ba4ddf97f7ef9f880c10.scope: Deactivated successfully.
Oct  8 05:45:18 np0005475493 python3[84707]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 787292cc-8154-50c4-9e00-e9be3e817149 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create images  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  8 05:45:18 np0005475493 podman[84725]: 2025-10-08 09:45:18.09995856 +0000 UTC m=+0.051828571 container create 7d505846ae5612903e0b8a8c43d8cd26f20dc21d4255cb6a6dfea69d5faa8726 (image=quay.io/ceph/ceph:v19, name=unruffled_lovelace, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  8 05:45:18 np0005475493 systemd[1]: Started libpod-conmon-7d505846ae5612903e0b8a8c43d8cd26f20dc21d4255cb6a6dfea69d5faa8726.scope.
Oct  8 05:45:18 np0005475493 podman[84740]: 2025-10-08 09:45:18.150228515 +0000 UTC m=+0.068338846 container create 4b56845d6cec65aa0a14424f55847cdea2e1e33e41bb814b2912afbad4552408 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_hellman, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  8 05:45:18 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:45:18 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3ea1dcf3947b191752b402681f7d6449f6b28beb27563da3223d9f76e6079bb/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 05:45:18 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3ea1dcf3947b191752b402681f7d6449f6b28beb27563da3223d9f76e6079bb/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 05:45:18 np0005475493 systemd[1]: Started libpod-conmon-4b56845d6cec65aa0a14424f55847cdea2e1e33e41bb814b2912afbad4552408.scope.
Oct  8 05:45:18 np0005475493 podman[84725]: 2025-10-08 09:45:18.076308318 +0000 UTC m=+0.028178339 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  8 05:45:18 np0005475493 podman[84725]: 2025-10-08 09:45:18.175062555 +0000 UTC m=+0.126932546 container init 7d505846ae5612903e0b8a8c43d8cd26f20dc21d4255cb6a6dfea69d5faa8726 (image=quay.io/ceph/ceph:v19, name=unruffled_lovelace, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Oct  8 05:45:18 np0005475493 podman[84725]: 2025-10-08 09:45:18.181056853 +0000 UTC m=+0.132926824 container start 7d505846ae5612903e0b8a8c43d8cd26f20dc21d4255cb6a6dfea69d5faa8726 (image=quay.io/ceph/ceph:v19, name=unruffled_lovelace, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct  8 05:45:18 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:45:18 np0005475493 podman[84725]: 2025-10-08 09:45:18.185543149 +0000 UTC m=+0.137413120 container attach 7d505846ae5612903e0b8a8c43d8cd26f20dc21d4255cb6a6dfea69d5faa8726 (image=quay.io/ceph/ceph:v19, name=unruffled_lovelace, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct  8 05:45:18 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f4ea1e12f056d3b91ef60f321c7ad850cfc852d00f72939f9b2cf1a60bb29e5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  8 05:45:18 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f4ea1e12f056d3b91ef60f321c7ad850cfc852d00f72939f9b2cf1a60bb29e5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 05:45:18 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f4ea1e12f056d3b91ef60f321c7ad850cfc852d00f72939f9b2cf1a60bb29e5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 05:45:18 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f4ea1e12f056d3b91ef60f321c7ad850cfc852d00f72939f9b2cf1a60bb29e5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  8 05:45:18 np0005475493 podman[84740]: 2025-10-08 09:45:18.196233322 +0000 UTC m=+0.114343653 container init 4b56845d6cec65aa0a14424f55847cdea2e1e33e41bb814b2912afbad4552408 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_hellman, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Oct  8 05:45:18 np0005475493 podman[84740]: 2025-10-08 09:45:18.20412254 +0000 UTC m=+0.122232871 container start 4b56845d6cec65aa0a14424f55847cdea2e1e33e41bb814b2912afbad4552408 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_hellman, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  8 05:45:18 np0005475493 podman[84740]: 2025-10-08 09:45:18.207183277 +0000 UTC m=+0.125293608 container attach 4b56845d6cec65aa0a14424f55847cdea2e1e33e41bb814b2912afbad4552408 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_hellman, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  8 05:45:18 np0005475493 podman[84740]: 2025-10-08 09:45:18.126479629 +0000 UTC m=+0.044590040 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 05:45:18 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e18 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  8 05:45:18 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.mtagwx started
Oct  8 05:45:18 np0005475493 ceph-mgr[73869]: mgr.server handle_open ignoring open from mgr.compute-2.mtagwx 192.168.122.102:0/1031292428; not ready for session (expect reconnect)
Oct  8 05:45:18 np0005475493 dazzling_hellman[84766]: {
Oct  8 05:45:18 np0005475493 dazzling_hellman[84766]:    "1": [
Oct  8 05:45:18 np0005475493 dazzling_hellman[84766]:        {
Oct  8 05:45:18 np0005475493 dazzling_hellman[84766]:            "devices": [
Oct  8 05:45:18 np0005475493 dazzling_hellman[84766]:                "/dev/loop3"
Oct  8 05:45:18 np0005475493 dazzling_hellman[84766]:            ],
Oct  8 05:45:18 np0005475493 dazzling_hellman[84766]:            "lv_name": "ceph_lv0",
Oct  8 05:45:18 np0005475493 dazzling_hellman[84766]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  8 05:45:18 np0005475493 dazzling_hellman[84766]:            "lv_size": "21470642176",
Oct  8 05:45:18 np0005475493 dazzling_hellman[84766]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=787292cc-8154-50c4-9e00-e9be3e817149,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=85fe3e7b-5e0f-4a19-934c-310215b2e933,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct  8 05:45:18 np0005475493 dazzling_hellman[84766]:            "lv_uuid": "znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj",
Oct  8 05:45:18 np0005475493 dazzling_hellman[84766]:            "name": "ceph_lv0",
Oct  8 05:45:18 np0005475493 dazzling_hellman[84766]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  8 05:45:18 np0005475493 dazzling_hellman[84766]:            "tags": {
Oct  8 05:45:18 np0005475493 dazzling_hellman[84766]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  8 05:45:18 np0005475493 dazzling_hellman[84766]:                "ceph.block_uuid": "znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj",
Oct  8 05:45:18 np0005475493 dazzling_hellman[84766]:                "ceph.cephx_lockbox_secret": "",
Oct  8 05:45:18 np0005475493 dazzling_hellman[84766]:                "ceph.cluster_fsid": "787292cc-8154-50c4-9e00-e9be3e817149",
Oct  8 05:45:18 np0005475493 dazzling_hellman[84766]:                "ceph.cluster_name": "ceph",
Oct  8 05:45:18 np0005475493 dazzling_hellman[84766]:                "ceph.crush_device_class": "",
Oct  8 05:45:18 np0005475493 dazzling_hellman[84766]:                "ceph.encrypted": "0",
Oct  8 05:45:18 np0005475493 dazzling_hellman[84766]:                "ceph.osd_fsid": "85fe3e7b-5e0f-4a19-934c-310215b2e933",
Oct  8 05:45:18 np0005475493 dazzling_hellman[84766]:                "ceph.osd_id": "1",
Oct  8 05:45:18 np0005475493 dazzling_hellman[84766]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  8 05:45:18 np0005475493 dazzling_hellman[84766]:                "ceph.type": "block",
Oct  8 05:45:18 np0005475493 dazzling_hellman[84766]:                "ceph.vdo": "0",
Oct  8 05:45:18 np0005475493 dazzling_hellman[84766]:                "ceph.with_tpm": "0"
Oct  8 05:45:18 np0005475493 dazzling_hellman[84766]:            },
Oct  8 05:45:18 np0005475493 dazzling_hellman[84766]:            "type": "block",
Oct  8 05:45:18 np0005475493 dazzling_hellman[84766]:            "vg_name": "ceph_vg0"
Oct  8 05:45:18 np0005475493 dazzling_hellman[84766]:        }
Oct  8 05:45:18 np0005475493 dazzling_hellman[84766]:    ]
Oct  8 05:45:18 np0005475493 dazzling_hellman[84766]: }
Oct  8 05:45:18 np0005475493 systemd[1]: libpod-4b56845d6cec65aa0a14424f55847cdea2e1e33e41bb814b2912afbad4552408.scope: Deactivated successfully.
Oct  8 05:45:18 np0005475493 podman[84740]: 2025-10-08 09:45:18.526328494 +0000 UTC m=+0.444438855 container died 4b56845d6cec65aa0a14424f55847cdea2e1e33e41bb814b2912afbad4552408 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_hellman, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  8 05:45:18 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Oct  8 05:45:18 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/739388561' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct  8 05:45:18 np0005475493 ceph-mon[73572]: from='client.? 192.168.122.102:0/3019668088' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "ef552a3d-427a-4a30-bf26-d668cd69b923"}]: dispatch
Oct  8 05:45:18 np0005475493 ceph-mon[73572]: from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "ef552a3d-427a-4a30-bf26-d668cd69b923"}]: dispatch
Oct  8 05:45:18 np0005475493 ceph-mon[73572]: from='client.? 192.168.122.100:0/3583095774' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct  8 05:45:18 np0005475493 ceph-mon[73572]: from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "ef552a3d-427a-4a30-bf26-d668cd69b923"}]': finished
Oct  8 05:45:18 np0005475493 systemd[1]: var-lib-containers-storage-overlay-0f4ea1e12f056d3b91ef60f321c7ad850cfc852d00f72939f9b2cf1a60bb29e5-merged.mount: Deactivated successfully.
Oct  8 05:45:18 np0005475493 podman[84740]: 2025-10-08 09:45:18.578172534 +0000 UTC m=+0.496282845 container remove 4b56845d6cec65aa0a14424f55847cdea2e1e33e41bb814b2912afbad4552408 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_hellman, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325)
Oct  8 05:45:18 np0005475493 systemd[1]: libpod-conmon-4b56845d6cec65aa0a14424f55847cdea2e1e33e41bb814b2912afbad4552408.scope: Deactivated successfully.
Oct  8 05:45:18 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e18 do_prune osdmap full prune enabled
Oct  8 05:45:18 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/739388561' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct  8 05:45:18 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e19 e19: 3 total, 2 up, 3 in
Oct  8 05:45:18 np0005475493 unruffled_lovelace[84758]: pool 'images' created
Oct  8 05:45:18 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e19: 3 total, 2 up, 3 in
Oct  8 05:45:18 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 19 pg[5.0( empty local-lis/les=0/0 n=0 ec=19/19 lis/c=0/0 les/c/f=0/0/0 sis=19) [1] r=0 lpr=19 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:45:18 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Oct  8 05:45:18 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct  8 05:45:18 np0005475493 ceph-mgr[73869]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct  8 05:45:18 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 19 pg[4.0( empty local-lis/les=18/19 n=0 ec=18/18 lis/c=0/0 les/c/f=0/0/0 sis=18) [1] r=0 lpr=18 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:45:18 np0005475493 systemd[1]: libpod-7d505846ae5612903e0b8a8c43d8cd26f20dc21d4255cb6a6dfea69d5faa8726.scope: Deactivated successfully.
Oct  8 05:45:18 np0005475493 podman[84725]: 2025-10-08 09:45:18.635344426 +0000 UTC m=+0.587214407 container died 7d505846ae5612903e0b8a8c43d8cd26f20dc21d4255cb6a6dfea69d5faa8726 (image=quay.io/ceph/ceph:v19, name=unruffled_lovelace, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  8 05:45:18 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : mgrmap e10: compute-0.ixicfj(active, since 116s), standbys: compute-2.mtagwx
Oct  8 05:45:18 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-2.mtagwx", "id": "compute-2.mtagwx"} v 0)
Oct  8 05:45:18 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "mgr metadata", "who": "compute-2.mtagwx", "id": "compute-2.mtagwx"}]: dispatch
Oct  8 05:45:18 np0005475493 systemd[1]: var-lib-containers-storage-overlay-f3ea1dcf3947b191752b402681f7d6449f6b28beb27563da3223d9f76e6079bb-merged.mount: Deactivated successfully.
Oct  8 05:45:18 np0005475493 podman[84725]: 2025-10-08 09:45:18.669996862 +0000 UTC m=+0.621866823 container remove 7d505846ae5612903e0b8a8c43d8cd26f20dc21d4255cb6a6dfea69d5faa8726 (image=quay.io/ceph/ceph:v19, name=unruffled_lovelace, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct  8 05:45:18 np0005475493 systemd[1]: libpod-conmon-7d505846ae5612903e0b8a8c43d8cd26f20dc21d4255cb6a6dfea69d5faa8726.scope: Deactivated successfully.
Oct  8 05:45:18 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v70: 5 pgs: 1 unknown, 1 creating+peering, 3 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Oct  8 05:45:18 np0005475493 python3[84896]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 787292cc-8154-50c4-9e00-e9be3e817149 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.meta  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  8 05:45:19 np0005475493 podman[84910]: 2025-10-08 09:45:19.036568826 +0000 UTC m=+0.060427307 container create 75d3a28e34d60bdb72aa7e5306de8d388df23341c805a21e03cae970b82280aa (image=quay.io/ceph/ceph:v19, name=nostalgic_haslett, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  8 05:45:19 np0005475493 systemd[1]: Started libpod-conmon-75d3a28e34d60bdb72aa7e5306de8d388df23341c805a21e03cae970b82280aa.scope.
Oct  8 05:45:19 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:45:19 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89192ae0dfc04ecf6a8022a8af42fae7a40dabc1cf6773eaec6bbf0b22324e35/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 05:45:19 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89192ae0dfc04ecf6a8022a8af42fae7a40dabc1cf6773eaec6bbf0b22324e35/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 05:45:19 np0005475493 podman[84910]: 2025-10-08 09:45:19.005396313 +0000 UTC m=+0.029254814 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  8 05:45:19 np0005475493 podman[84910]: 2025-10-08 09:45:19.116599296 +0000 UTC m=+0.140457797 container init 75d3a28e34d60bdb72aa7e5306de8d388df23341c805a21e03cae970b82280aa (image=quay.io/ceph/ceph:v19, name=nostalgic_haslett, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  8 05:45:19 np0005475493 podman[84910]: 2025-10-08 09:45:19.122411947 +0000 UTC m=+0.146270428 container start 75d3a28e34d60bdb72aa7e5306de8d388df23341c805a21e03cae970b82280aa (image=quay.io/ceph/ceph:v19, name=nostalgic_haslett, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct  8 05:45:19 np0005475493 podman[84910]: 2025-10-08 09:45:19.140402073 +0000 UTC m=+0.164260574 container attach 75d3a28e34d60bdb72aa7e5306de8d388df23341c805a21e03cae970b82280aa (image=quay.io/ceph/ceph:v19, name=nostalgic_haslett, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct  8 05:45:19 np0005475493 podman[84952]: 2025-10-08 09:45:19.21795161 +0000 UTC m=+0.051651134 container create 8eaf731249ed346d856e388608bd702da756ea516b5a5c48ffb081395391b24d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_allen, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  8 05:45:19 np0005475493 systemd[1]: Started libpod-conmon-8eaf731249ed346d856e388608bd702da756ea516b5a5c48ffb081395391b24d.scope.
Oct  8 05:45:19 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:45:19 np0005475493 podman[84952]: 2025-10-08 09:45:19.189090083 +0000 UTC m=+0.022789627 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 05:45:19 np0005475493 podman[84952]: 2025-10-08 09:45:19.286109986 +0000 UTC m=+0.119809540 container init 8eaf731249ed346d856e388608bd702da756ea516b5a5c48ffb081395391b24d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_allen, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Oct  8 05:45:19 np0005475493 podman[84952]: 2025-10-08 09:45:19.291360694 +0000 UTC m=+0.125060218 container start 8eaf731249ed346d856e388608bd702da756ea516b5a5c48ffb081395391b24d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_allen, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct  8 05:45:19 np0005475493 hungry_allen[84987]: 167 167
Oct  8 05:45:19 np0005475493 systemd[1]: libpod-8eaf731249ed346d856e388608bd702da756ea516b5a5c48ffb081395391b24d.scope: Deactivated successfully.
Oct  8 05:45:19 np0005475493 podman[84952]: 2025-10-08 09:45:19.310153594 +0000 UTC m=+0.143853128 container attach 8eaf731249ed346d856e388608bd702da756ea516b5a5c48ffb081395391b24d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_allen, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  8 05:45:19 np0005475493 podman[84952]: 2025-10-08 09:45:19.310607812 +0000 UTC m=+0.144307336 container died 8eaf731249ed346d856e388608bd702da756ea516b5a5c48ffb081395391b24d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_allen, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  8 05:45:19 np0005475493 systemd[1]: var-lib-containers-storage-overlay-93b79c101f3f252014f4067458e9d9cbe67b1d994dfd90d6b6194f70c158d11b-merged.mount: Deactivated successfully.
Oct  8 05:45:19 np0005475493 podman[84952]: 2025-10-08 09:45:19.35610839 +0000 UTC m=+0.189807914 container remove 8eaf731249ed346d856e388608bd702da756ea516b5a5c48ffb081395391b24d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_allen, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct  8 05:45:19 np0005475493 systemd[1]: libpod-conmon-8eaf731249ed346d856e388608bd702da756ea516b5a5c48ffb081395391b24d.scope: Deactivated successfully.
Oct  8 05:45:19 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.swlvov started
Oct  8 05:45:19 np0005475493 ceph-mgr[73869]: mgr.server handle_open ignoring open from mgr.compute-1.swlvov 192.168.122.101:0/1376433089; not ready for session (expect reconnect)
Oct  8 05:45:19 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Oct  8 05:45:19 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/672510145' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct  8 05:45:19 np0005475493 podman[85014]: 2025-10-08 09:45:19.552582279 +0000 UTC m=+0.041745073 container create 34685d558f39947bf4f332ec16fe3a847209940724badf98c27bb0b7dfe1debc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_chaplygin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid)
Oct  8 05:45:19 np0005475493 ceph-mon[73572]: from='client.? 192.168.122.100:0/739388561' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct  8 05:45:19 np0005475493 ceph-mon[73572]: from='client.? 192.168.122.100:0/739388561' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct  8 05:45:19 np0005475493 ceph-mon[73572]: from='client.? 192.168.122.100:0/672510145' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct  8 05:45:19 np0005475493 systemd[1]: Started libpod-conmon-34685d558f39947bf4f332ec16fe3a847209940724badf98c27bb0b7dfe1debc.scope.
Oct  8 05:45:19 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e19 do_prune osdmap full prune enabled
Oct  8 05:45:19 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:45:19 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4271d283ac9c99cade98f1d643acac068002fd2539aff20fc00e4d9fdda3926/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  8 05:45:19 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4271d283ac9c99cade98f1d643acac068002fd2539aff20fc00e4d9fdda3926/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 05:45:19 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4271d283ac9c99cade98f1d643acac068002fd2539aff20fc00e4d9fdda3926/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 05:45:19 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4271d283ac9c99cade98f1d643acac068002fd2539aff20fc00e4d9fdda3926/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  8 05:45:19 np0005475493 podman[85014]: 2025-10-08 09:45:19.535644526 +0000 UTC m=+0.024807330 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 05:45:19 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/672510145' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct  8 05:45:19 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e20 e20: 3 total, 2 up, 3 in
Oct  8 05:45:19 np0005475493 nostalgic_haslett[84948]: pool 'cephfs.cephfs.meta' created
Oct  8 05:45:19 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e20: 3 total, 2 up, 3 in
Oct  8 05:45:19 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Oct  8 05:45:19 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct  8 05:45:19 np0005475493 podman[85014]: 2025-10-08 09:45:19.643138765 +0000 UTC m=+0.132301569 container init 34685d558f39947bf4f332ec16fe3a847209940724badf98c27bb0b7dfe1debc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_chaplygin, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Oct  8 05:45:19 np0005475493 ceph-mgr[73869]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct  8 05:45:19 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 20 pg[6.0( empty local-lis/les=0/0 n=0 ec=20/20 lis/c=0/0 les/c/f=0/0/0 sis=20) [1] r=0 lpr=20 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:45:19 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 20 pg[5.0( empty local-lis/les=19/20 n=0 ec=19/19 lis/c=0/0 les/c/f=0/0/0 sis=19) [1] r=0 lpr=19 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:45:19 np0005475493 podman[85014]: 2025-10-08 09:45:19.653089447 +0000 UTC m=+0.142252241 container start 34685d558f39947bf4f332ec16fe3a847209940724badf98c27bb0b7dfe1debc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_chaplygin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Oct  8 05:45:19 np0005475493 podman[85014]: 2025-10-08 09:45:19.657542552 +0000 UTC m=+0.146705326 container attach 34685d558f39947bf4f332ec16fe3a847209940724badf98c27bb0b7dfe1debc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_chaplygin, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Oct  8 05:45:19 np0005475493 systemd[1]: libpod-75d3a28e34d60bdb72aa7e5306de8d388df23341c805a21e03cae970b82280aa.scope: Deactivated successfully.
Oct  8 05:45:19 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : mgrmap e11: compute-0.ixicfj(active, since 117s), standbys: compute-2.mtagwx, compute-1.swlvov
Oct  8 05:45:19 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-1.swlvov", "id": "compute-1.swlvov"} v 0)
Oct  8 05:45:19 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "mgr metadata", "who": "compute-1.swlvov", "id": "compute-1.swlvov"}]: dispatch
Oct  8 05:45:19 np0005475493 podman[85036]: 2025-10-08 09:45:19.698122475 +0000 UTC m=+0.023711815 container died 75d3a28e34d60bdb72aa7e5306de8d388df23341c805a21e03cae970b82280aa (image=quay.io/ceph/ceph:v19, name=nostalgic_haslett, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  8 05:45:19 np0005475493 systemd[1]: var-lib-containers-storage-overlay-89192ae0dfc04ecf6a8022a8af42fae7a40dabc1cf6773eaec6bbf0b22324e35-merged.mount: Deactivated successfully.
Oct  8 05:45:19 np0005475493 podman[85036]: 2025-10-08 09:45:19.747402079 +0000 UTC m=+0.072991409 container remove 75d3a28e34d60bdb72aa7e5306de8d388df23341c805a21e03cae970b82280aa (image=quay.io/ceph/ceph:v19, name=nostalgic_haslett, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325)
Oct  8 05:45:19 np0005475493 systemd[1]: libpod-conmon-75d3a28e34d60bdb72aa7e5306de8d388df23341c805a21e03cae970b82280aa.scope: Deactivated successfully.
Oct  8 05:45:20 np0005475493 python3[85093]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 787292cc-8154-50c4-9e00-e9be3e817149 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.data  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  8 05:45:20 np0005475493 podman[85125]: 2025-10-08 09:45:20.159893658 +0000 UTC m=+0.060313012 container create 29f5911a8d96b462472199027dd8ebed7b8804e74f5e59c9a16d8804bde2f33e (image=quay.io/ceph/ceph:v19, name=jolly_fermat, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct  8 05:45:20 np0005475493 systemd[1]: Started libpod-conmon-29f5911a8d96b462472199027dd8ebed7b8804e74f5e59c9a16d8804bde2f33e.scope.
Oct  8 05:45:20 np0005475493 podman[85125]: 2025-10-08 09:45:20.123046269 +0000 UTC m=+0.023465643 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  8 05:45:20 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:45:20 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12b60ec70a6c93318129c7e2e07fcaa9f9ff8097b95e2fc9495498aed930daa9/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 05:45:20 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12b60ec70a6c93318129c7e2e07fcaa9f9ff8097b95e2fc9495498aed930daa9/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 05:45:20 np0005475493 podman[85125]: 2025-10-08 09:45:20.272279499 +0000 UTC m=+0.172698873 container init 29f5911a8d96b462472199027dd8ebed7b8804e74f5e59c9a16d8804bde2f33e (image=quay.io/ceph/ceph:v19, name=jolly_fermat, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct  8 05:45:20 np0005475493 lvm[85163]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct  8 05:45:20 np0005475493 podman[85125]: 2025-10-08 09:45:20.279104902 +0000 UTC m=+0.179524256 container start 29f5911a8d96b462472199027dd8ebed7b8804e74f5e59c9a16d8804bde2f33e (image=quay.io/ceph/ceph:v19, name=jolly_fermat, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  8 05:45:20 np0005475493 lvm[85163]: VG ceph_vg0 finished
Oct  8 05:45:20 np0005475493 podman[85125]: 2025-10-08 09:45:20.293781801 +0000 UTC m=+0.194201165 container attach 29f5911a8d96b462472199027dd8ebed7b8804e74f5e59c9a16d8804bde2f33e (image=quay.io/ceph/ceph:v19, name=jolly_fermat, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  8 05:45:20 np0005475493 busy_chaplygin[85030]: {}
Oct  8 05:45:20 np0005475493 systemd[1]: libpod-34685d558f39947bf4f332ec16fe3a847209940724badf98c27bb0b7dfe1debc.scope: Deactivated successfully.
Oct  8 05:45:20 np0005475493 podman[85014]: 2025-10-08 09:45:20.360823521 +0000 UTC m=+0.849986295 container died 34685d558f39947bf4f332ec16fe3a847209940724badf98c27bb0b7dfe1debc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_chaplygin, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Oct  8 05:45:20 np0005475493 systemd[1]: libpod-34685d558f39947bf4f332ec16fe3a847209940724badf98c27bb0b7dfe1debc.scope: Consumed 1.134s CPU time.
Oct  8 05:45:20 np0005475493 systemd[1]: var-lib-containers-storage-overlay-e4271d283ac9c99cade98f1d643acac068002fd2539aff20fc00e4d9fdda3926-merged.mount: Deactivated successfully.
Oct  8 05:45:20 np0005475493 podman[85014]: 2025-10-08 09:45:20.561108199 +0000 UTC m=+1.050270973 container remove 34685d558f39947bf4f332ec16fe3a847209940724badf98c27bb0b7dfe1debc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_chaplygin, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  8 05:45:20 np0005475493 systemd[1]: libpod-conmon-34685d558f39947bf4f332ec16fe3a847209940724badf98c27bb0b7dfe1debc.scope: Deactivated successfully.
Oct  8 05:45:20 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  8 05:45:20 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Oct  8 05:45:20 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2341319636' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct  8 05:45:20 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:45:20 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e20 do_prune osdmap full prune enabled
Oct  8 05:45:20 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  8 05:45:20 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2341319636' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct  8 05:45:20 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e21 e21: 3 total, 2 up, 3 in
Oct  8 05:45:20 np0005475493 jolly_fermat[85159]: pool 'cephfs.cephfs.data' created
Oct  8 05:45:20 np0005475493 systemd[1]: libpod-29f5911a8d96b462472199027dd8ebed7b8804e74f5e59c9a16d8804bde2f33e.scope: Deactivated successfully.
Oct  8 05:45:20 np0005475493 podman[85125]: 2025-10-08 09:45:20.692754499 +0000 UTC m=+0.593173863 container died 29f5911a8d96b462472199027dd8ebed7b8804e74f5e59c9a16d8804bde2f33e (image=quay.io/ceph/ceph:v19, name=jolly_fermat, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  8 05:45:20 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e21: 3 total, 2 up, 3 in
Oct  8 05:45:20 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Oct  8 05:45:20 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct  8 05:45:20 np0005475493 ceph-mgr[73869]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct  8 05:45:20 np0005475493 ceph-mon[73572]: from='client.? 192.168.122.100:0/672510145' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct  8 05:45:20 np0005475493 ceph-mon[73572]: from='client.? 192.168.122.100:0/2341319636' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct  8 05:45:20 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 21 pg[6.0( empty local-lis/les=20/21 n=0 ec=20/20 lis/c=0/0 les/c/f=0/0/0 sis=20) [1] r=0 lpr=20 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:45:20 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:45:20 np0005475493 systemd[1]: var-lib-containers-storage-overlay-12b60ec70a6c93318129c7e2e07fcaa9f9ff8097b95e2fc9495498aed930daa9-merged.mount: Deactivated successfully.
Oct  8 05:45:20 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v73: 7 pgs: 3 unknown, 1 creating+peering, 3 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Oct  8 05:45:20 np0005475493 podman[85125]: 2025-10-08 09:45:20.981159321 +0000 UTC m=+0.881578675 container remove 29f5911a8d96b462472199027dd8ebed7b8804e74f5e59c9a16d8804bde2f33e (image=quay.io/ceph/ceph:v19, name=jolly_fermat, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Oct  8 05:45:21 np0005475493 systemd[1]: libpod-conmon-29f5911a8d96b462472199027dd8ebed7b8804e74f5e59c9a16d8804bde2f33e.scope: Deactivated successfully.
Oct  8 05:45:21 np0005475493 python3[85238]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 787292cc-8154-50c4-9e00-e9be3e817149 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable vms rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  8 05:45:21 np0005475493 podman[85239]: 2025-10-08 09:45:21.377549681 +0000 UTC m=+0.038583551 container create dc4df8fd2543f71b3b440e888874b878eae57a364be10a9b769b467b7698c396 (image=quay.io/ceph/ceph:v19, name=hardcore_dubinsky, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  8 05:45:21 np0005475493 systemd[1]: Started libpod-conmon-dc4df8fd2543f71b3b440e888874b878eae57a364be10a9b769b467b7698c396.scope.
Oct  8 05:45:21 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:45:21 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44944ed71359d63dc9071dd6ac82d0fd7db9863075a14b665dc8661bf6ab239a/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 05:45:21 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44944ed71359d63dc9071dd6ac82d0fd7db9863075a14b665dc8661bf6ab239a/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 05:45:21 np0005475493 podman[85239]: 2025-10-08 09:45:21.36064172 +0000 UTC m=+0.021675630 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  8 05:45:21 np0005475493 podman[85239]: 2025-10-08 09:45:21.458164195 +0000 UTC m=+0.119198065 container init dc4df8fd2543f71b3b440e888874b878eae57a364be10a9b769b467b7698c396 (image=quay.io/ceph/ceph:v19, name=hardcore_dubinsky, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  8 05:45:21 np0005475493 podman[85239]: 2025-10-08 09:45:21.463877022 +0000 UTC m=+0.124910922 container start dc4df8fd2543f71b3b440e888874b878eae57a364be10a9b769b467b7698c396 (image=quay.io/ceph/ceph:v19, name=hardcore_dubinsky, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct  8 05:45:21 np0005475493 podman[85239]: 2025-10-08 09:45:21.467496632 +0000 UTC m=+0.128530522 container attach dc4df8fd2543f71b3b440e888874b878eae57a364be10a9b769b467b7698c396 (image=quay.io/ceph/ceph:v19, name=hardcore_dubinsky, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  8 05:45:21 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"} v 0)
Oct  8 05:45:21 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3377256593' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]: dispatch
Oct  8 05:45:21 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e21 do_prune osdmap full prune enabled
Oct  8 05:45:21 np0005475493 ceph-mon[73572]: log_channel(cluster) log [WRN] : Health check update: 6 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct  8 05:45:22 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3377256593' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Oct  8 05:45:22 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e22 e22: 3 total, 2 up, 3 in
Oct  8 05:45:22 np0005475493 hardcore_dubinsky[85254]: enabled application 'rbd' on pool 'vms'
Oct  8 05:45:22 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e22: 3 total, 2 up, 3 in
Oct  8 05:45:22 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Oct  8 05:45:22 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct  8 05:45:22 np0005475493 ceph-mgr[73869]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct  8 05:45:22 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:45:22 np0005475493 ceph-mon[73572]: from='client.? 192.168.122.100:0/2341319636' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct  8 05:45:22 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:45:22 np0005475493 ceph-mon[73572]: from='client.? 192.168.122.100:0/3377256593' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]: dispatch
Oct  8 05:45:22 np0005475493 systemd[1]: libpod-dc4df8fd2543f71b3b440e888874b878eae57a364be10a9b769b467b7698c396.scope: Deactivated successfully.
Oct  8 05:45:22 np0005475493 podman[85239]: 2025-10-08 09:45:22.175383772 +0000 UTC m=+0.836417642 container died dc4df8fd2543f71b3b440e888874b878eae57a364be10a9b769b467b7698c396 (image=quay.io/ceph/ceph:v19, name=hardcore_dubinsky, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  8 05:45:22 np0005475493 systemd[1]: var-lib-containers-storage-overlay-44944ed71359d63dc9071dd6ac82d0fd7db9863075a14b665dc8661bf6ab239a-merged.mount: Deactivated successfully.
Oct  8 05:45:22 np0005475493 podman[85239]: 2025-10-08 09:45:22.230592742 +0000 UTC m=+0.891626662 container remove dc4df8fd2543f71b3b440e888874b878eae57a364be10a9b769b467b7698c396 (image=quay.io/ceph/ceph:v19, name=hardcore_dubinsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct  8 05:45:22 np0005475493 systemd[1]: libpod-conmon-dc4df8fd2543f71b3b440e888874b878eae57a364be10a9b769b467b7698c396.scope: Deactivated successfully.
Oct  8 05:45:22 np0005475493 python3[85317]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 787292cc-8154-50c4-9e00-e9be3e817149 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable volumes rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  8 05:45:22 np0005475493 podman[85318]: 2025-10-08 09:45:22.54709939 +0000 UTC m=+0.038296129 container create 77c9f890363acfdf472e9345e10a2920ea96fe98762916c37854bd2dcd9f1a5c (image=quay.io/ceph/ceph:v19, name=silly_shirley, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct  8 05:45:22 np0005475493 ceph-mgr[73869]: [balancer INFO root] Optimize plan auto_2025-10-08_09:45:22
Oct  8 05:45:22 np0005475493 ceph-mgr[73869]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  8 05:45:22 np0005475493 ceph-mgr[73869]: [balancer INFO root] Some PGs (0.428571) are unknown; try again later
Oct  8 05:45:22 np0005475493 systemd[1]: Started libpod-conmon-77c9f890363acfdf472e9345e10a2920ea96fe98762916c37854bd2dcd9f1a5c.scope.
Oct  8 05:45:22 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:45:22 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] _maybe_adjust
Oct  8 05:45:22 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Oct  8 05:45:22 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 1.0778624975581169e-05 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  8 05:45:22 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Oct  8 05:45:22 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Oct  8 05:45:22 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Oct  8 05:45:22 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Oct  8 05:45:22 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Oct  8 05:45:22 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Oct  8 05:45:22 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Oct  8 05:45:22 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Oct  8 05:45:22 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Oct  8 05:45:22 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Oct  8 05:45:22 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Oct  8 05:45:22 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Oct  8 05:45:22 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9fcab9e8508930760cd4e04bb79930f01a7fb7d5b61d20a8688ce85c292e4ca/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 05:45:22 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"} v 0)
Oct  8 05:45:22 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]: dispatch
Oct  8 05:45:22 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9fcab9e8508930760cd4e04bb79930f01a7fb7d5b61d20a8688ce85c292e4ca/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 05:45:22 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  8 05:45:22 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 05:45:22 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 05:45:22 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  8 05:45:22 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  8 05:45:22 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  8 05:45:22 np0005475493 podman[85318]: 2025-10-08 09:45:22.604805754 +0000 UTC m=+0.096002523 container init 77c9f890363acfdf472e9345e10a2920ea96fe98762916c37854bd2dcd9f1a5c (image=quay.io/ceph/ceph:v19, name=silly_shirley, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Oct  8 05:45:22 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 05:45:22 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 05:45:22 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 05:45:22 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 05:45:22 np0005475493 podman[85318]: 2025-10-08 09:45:22.617431907 +0000 UTC m=+0.108628646 container start 77c9f890363acfdf472e9345e10a2920ea96fe98762916c37854bd2dcd9f1a5c (image=quay.io/ceph/ceph:v19, name=silly_shirley, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Oct  8 05:45:22 np0005475493 podman[85318]: 2025-10-08 09:45:22.622593521 +0000 UTC m=+0.113790280 container attach 77c9f890363acfdf472e9345e10a2920ea96fe98762916c37854bd2dcd9f1a5c (image=quay.io/ceph/ceph:v19, name=silly_shirley, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  8 05:45:22 np0005475493 podman[85318]: 2025-10-08 09:45:22.531264573 +0000 UTC m=+0.022461322 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  8 05:45:22 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v75: 7 pgs: 1 unknown, 6 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Oct  8 05:45:22 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"} v 0)
Oct  8 05:45:22 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1601367079' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]: dispatch
Oct  8 05:45:23 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e22 do_prune osdmap full prune enabled
Oct  8 05:45:23 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Oct  8 05:45:23 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1601367079' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Oct  8 05:45:23 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e23 e23: 3 total, 2 up, 3 in
Oct  8 05:45:23 np0005475493 silly_shirley[85334]: enabled application 'rbd' on pool 'volumes'
Oct  8 05:45:23 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e23: 3 total, 2 up, 3 in
Oct  8 05:45:23 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Oct  8 05:45:23 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct  8 05:45:23 np0005475493 ceph-mgr[73869]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct  8 05:45:23 np0005475493 ceph-mgr[73869]: [progress INFO root] update: starting ev 0ec7ed32-6b33-4f8f-9254-a63145d84250 (PG autoscaler increasing pool 2 PGs from 1 to 32)
Oct  8 05:45:23 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"} v 0)
Oct  8 05:45:23 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]: dispatch
Oct  8 05:45:23 np0005475493 systemd[1]: libpod-77c9f890363acfdf472e9345e10a2920ea96fe98762916c37854bd2dcd9f1a5c.scope: Deactivated successfully.
Oct  8 05:45:23 np0005475493 podman[85318]: 2025-10-08 09:45:23.182594828 +0000 UTC m=+0.673791567 container died 77c9f890363acfdf472e9345e10a2920ea96fe98762916c37854bd2dcd9f1a5c (image=quay.io/ceph/ceph:v19, name=silly_shirley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct  8 05:45:23 np0005475493 ceph-mon[73572]: Health check update: 6 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct  8 05:45:23 np0005475493 ceph-mon[73572]: from='client.? 192.168.122.100:0/3377256593' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Oct  8 05:45:23 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]: dispatch
Oct  8 05:45:23 np0005475493 ceph-mon[73572]: from='client.? 192.168.122.100:0/1601367079' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]: dispatch
Oct  8 05:45:23 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Oct  8 05:45:23 np0005475493 ceph-mon[73572]: from='client.? 192.168.122.100:0/1601367079' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Oct  8 05:45:23 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]: dispatch
Oct  8 05:45:23 np0005475493 systemd[1]: var-lib-containers-storage-overlay-a9fcab9e8508930760cd4e04bb79930f01a7fb7d5b61d20a8688ce85c292e4ca-merged.mount: Deactivated successfully.
Oct  8 05:45:23 np0005475493 podman[85318]: 2025-10-08 09:45:23.222695691 +0000 UTC m=+0.713892440 container remove 77c9f890363acfdf472e9345e10a2920ea96fe98762916c37854bd2dcd9f1a5c (image=quay.io/ceph/ceph:v19, name=silly_shirley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct  8 05:45:23 np0005475493 systemd[1]: libpod-conmon-77c9f890363acfdf472e9345e10a2920ea96fe98762916c37854bd2dcd9f1a5c.scope: Deactivated successfully.
Oct  8 05:45:23 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e23 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  8 05:45:23 np0005475493 python3[85395]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 787292cc-8154-50c4-9e00-e9be3e817149 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable backups rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  8 05:45:23 np0005475493 podman[85396]: 2025-10-08 09:45:23.571424185 +0000 UTC m=+0.055440891 container create a69f749ee1e24f25db88f0e02e5195db0fd57eefd8df786544d9db483af35fae (image=quay.io/ceph/ceph:v19, name=angry_poincare, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  8 05:45:23 np0005475493 systemd[1]: Started libpod-conmon-a69f749ee1e24f25db88f0e02e5195db0fd57eefd8df786544d9db483af35fae.scope.
Oct  8 05:45:23 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:45:23 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9278d57ff5d76926e8baab6f379af1aee34be2c3fe3af7006cc8853ee315687/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 05:45:23 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9278d57ff5d76926e8baab6f379af1aee34be2c3fe3af7006cc8853ee315687/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 05:45:23 np0005475493 podman[85396]: 2025-10-08 09:45:23.546201148 +0000 UTC m=+0.030217934 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  8 05:45:23 np0005475493 podman[85396]: 2025-10-08 09:45:23.646364964 +0000 UTC m=+0.130381710 container init a69f749ee1e24f25db88f0e02e5195db0fd57eefd8df786544d9db483af35fae (image=quay.io/ceph/ceph:v19, name=angry_poincare, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  8 05:45:23 np0005475493 podman[85396]: 2025-10-08 09:45:23.656254043 +0000 UTC m=+0.140270759 container start a69f749ee1e24f25db88f0e02e5195db0fd57eefd8df786544d9db483af35fae (image=quay.io/ceph/ceph:v19, name=angry_poincare, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  8 05:45:23 np0005475493 podman[85396]: 2025-10-08 09:45:23.660650356 +0000 UTC m=+0.144667112 container attach a69f749ee1e24f25db88f0e02e5195db0fd57eefd8df786544d9db483af35fae (image=quay.io/ceph/ceph:v19, name=angry_poincare, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  8 05:45:23 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "osd.2"} v 0)
Oct  8 05:45:23 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
Oct  8 05:45:23 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  8 05:45:23 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  8 05:45:23 np0005475493 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Deploying daemon osd.2 on compute-2
Oct  8 05:45:23 np0005475493 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Deploying daemon osd.2 on compute-2
Oct  8 05:45:24 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"} v 0)
Oct  8 05:45:24 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/841109346' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]: dispatch
Oct  8 05:45:24 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e23 do_prune osdmap full prune enabled
Oct  8 05:45:24 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Oct  8 05:45:24 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/841109346' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Oct  8 05:45:24 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e24 e24: 3 total, 2 up, 3 in
Oct  8 05:45:24 np0005475493 angry_poincare[85411]: enabled application 'rbd' on pool 'backups'
Oct  8 05:45:24 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e24: 3 total, 2 up, 3 in
Oct  8 05:45:24 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Oct  8 05:45:24 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct  8 05:45:24 np0005475493 ceph-mgr[73869]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct  8 05:45:24 np0005475493 ceph-mgr[73869]: [progress INFO root] update: starting ev 2227baea-d11a-4cde-b678-995960ba9c5f (PG autoscaler increasing pool 3 PGs from 1 to 32)
Oct  8 05:45:24 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"} v 0)
Oct  8 05:45:24 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Oct  8 05:45:24 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
Oct  8 05:45:24 np0005475493 ceph-mon[73572]: from='client.? 192.168.122.100:0/841109346' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]: dispatch
Oct  8 05:45:24 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Oct  8 05:45:24 np0005475493 ceph-mon[73572]: from='client.? 192.168.122.100:0/841109346' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Oct  8 05:45:24 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Oct  8 05:45:24 np0005475493 systemd[1]: libpod-a69f749ee1e24f25db88f0e02e5195db0fd57eefd8df786544d9db483af35fae.scope: Deactivated successfully.
Oct  8 05:45:24 np0005475493 conmon[85411]: conmon a69f749ee1e24f25db88 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a69f749ee1e24f25db88f0e02e5195db0fd57eefd8df786544d9db483af35fae.scope/container/memory.events
Oct  8 05:45:24 np0005475493 podman[85396]: 2025-10-08 09:45:24.223050062 +0000 UTC m=+0.707066768 container died a69f749ee1e24f25db88f0e02e5195db0fd57eefd8df786544d9db483af35fae (image=quay.io/ceph/ceph:v19, name=angry_poincare, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  8 05:45:24 np0005475493 systemd[1]: var-lib-containers-storage-overlay-c9278d57ff5d76926e8baab6f379af1aee34be2c3fe3af7006cc8853ee315687-merged.mount: Deactivated successfully.
Oct  8 05:45:24 np0005475493 podman[85396]: 2025-10-08 09:45:24.274474505 +0000 UTC m=+0.758491231 container remove a69f749ee1e24f25db88f0e02e5195db0fd57eefd8df786544d9db483af35fae (image=quay.io/ceph/ceph:v19, name=angry_poincare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  8 05:45:24 np0005475493 systemd[1]: libpod-conmon-a69f749ee1e24f25db88f0e02e5195db0fd57eefd8df786544d9db483af35fae.scope: Deactivated successfully.
Oct  8 05:45:24 np0005475493 python3[85473]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 787292cc-8154-50c4-9e00-e9be3e817149 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable images rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  8 05:45:24 np0005475493 podman[85474]: 2025-10-08 09:45:24.64083132 +0000 UTC m=+0.054354775 container create 0679de31226ac9e167841135cb206f8d383fb43893d6304920b9081069047b84 (image=quay.io/ceph/ceph:v19, name=lucid_hellman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  8 05:45:24 np0005475493 systemd[1]: Started libpod-conmon-0679de31226ac9e167841135cb206f8d383fb43893d6304920b9081069047b84.scope.
Oct  8 05:45:24 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:45:24 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9cabda36346f1107ec927bf1612ff44e61279f47c52c081cbfd5192517acff2/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 05:45:24 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9cabda36346f1107ec927bf1612ff44e61279f47c52c081cbfd5192517acff2/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 05:45:24 np0005475493 podman[85474]: 2025-10-08 09:45:24.705971292 +0000 UTC m=+0.119494827 container init 0679de31226ac9e167841135cb206f8d383fb43893d6304920b9081069047b84 (image=quay.io/ceph/ceph:v19, name=lucid_hellman, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  8 05:45:24 np0005475493 podman[85474]: 2025-10-08 09:45:24.614174174 +0000 UTC m=+0.027697719 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  8 05:45:24 np0005475493 podman[85474]: 2025-10-08 09:45:24.712841947 +0000 UTC m=+0.126365432 container start 0679de31226ac9e167841135cb206f8d383fb43893d6304920b9081069047b84 (image=quay.io/ceph/ceph:v19, name=lucid_hellman, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  8 05:45:24 np0005475493 podman[85474]: 2025-10-08 09:45:24.71580041 +0000 UTC m=+0.129323875 container attach 0679de31226ac9e167841135cb206f8d383fb43893d6304920b9081069047b84 (image=quay.io/ceph/ceph:v19, name=lucid_hellman, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  8 05:45:24 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v78: 7 pgs: 1 unknown, 6 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Oct  8 05:45:24 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"} v 0)
Oct  8 05:45:24 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct  8 05:45:24 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"} v 0)
Oct  8 05:45:24 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct  8 05:45:25 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": "images", "app": "rbd"} v 0)
Oct  8 05:45:25 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2076445319' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]: dispatch
Oct  8 05:45:25 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e24 do_prune osdmap full prune enabled
Oct  8 05:45:25 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Oct  8 05:45:25 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Oct  8 05:45:25 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Oct  8 05:45:25 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2076445319' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Oct  8 05:45:25 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e25 e25: 3 total, 2 up, 3 in
Oct  8 05:45:25 np0005475493 lucid_hellman[85489]: enabled application 'rbd' on pool 'images'
Oct  8 05:45:25 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e25: 3 total, 2 up, 3 in
Oct  8 05:45:25 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Oct  8 05:45:25 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct  8 05:45:25 np0005475493 ceph-mgr[73869]: [progress INFO root] update: starting ev 05a1d3c6-ed35-41d2-9081-50c37b873654 (PG autoscaler increasing pool 4 PGs from 1 to 32)
Oct  8 05:45:25 np0005475493 ceph-mgr[73869]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct  8 05:45:25 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"} v 0)
Oct  8 05:45:25 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]: dispatch
Oct  8 05:45:25 np0005475493 ceph-mon[73572]: Deploying daemon osd.2 on compute-2
Oct  8 05:45:25 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct  8 05:45:25 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct  8 05:45:25 np0005475493 ceph-mon[73572]: from='client.? 192.168.122.100:0/2076445319' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]: dispatch
Oct  8 05:45:25 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Oct  8 05:45:25 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Oct  8 05:45:25 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Oct  8 05:45:25 np0005475493 ceph-mon[73572]: from='client.? 192.168.122.100:0/2076445319' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Oct  8 05:45:25 np0005475493 systemd[1]: libpod-0679de31226ac9e167841135cb206f8d383fb43893d6304920b9081069047b84.scope: Deactivated successfully.
Oct  8 05:45:25 np0005475493 podman[85474]: 2025-10-08 09:45:25.226793283 +0000 UTC m=+0.640316808 container died 0679de31226ac9e167841135cb206f8d383fb43893d6304920b9081069047b84 (image=quay.io/ceph/ceph:v19, name=lucid_hellman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Oct  8 05:45:25 np0005475493 systemd[1]: var-lib-containers-storage-overlay-c9cabda36346f1107ec927bf1612ff44e61279f47c52c081cbfd5192517acff2-merged.mount: Deactivated successfully.
Oct  8 05:45:25 np0005475493 podman[85474]: 2025-10-08 09:45:25.270454295 +0000 UTC m=+0.683977750 container remove 0679de31226ac9e167841135cb206f8d383fb43893d6304920b9081069047b84 (image=quay.io/ceph/ceph:v19, name=lucid_hellman, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct  8 05:45:25 np0005475493 systemd[1]: libpod-conmon-0679de31226ac9e167841135cb206f8d383fb43893d6304920b9081069047b84.scope: Deactivated successfully.
Oct  8 05:45:25 np0005475493 python3[85550]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 787292cc-8154-50c4-9e00-e9be3e817149 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.meta cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  8 05:45:25 np0005475493 podman[85551]: 2025-10-08 09:45:25.678196586 +0000 UTC m=+0.040447899 container create f9dba830df8e8c5d1edfa23c8edf504d296ea3b1f3860bd37a5c70289b21d492 (image=quay.io/ceph/ceph:v19, name=vibrant_lamport, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Oct  8 05:45:25 np0005475493 systemd[1]: Started libpod-conmon-f9dba830df8e8c5d1edfa23c8edf504d296ea3b1f3860bd37a5c70289b21d492.scope.
Oct  8 05:45:25 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:45:25 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/962d2b9e85bf9e67d3ef657038b6e595d92cfd7c44ff43c90edfc8534bdb7ab6/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 05:45:25 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/962d2b9e85bf9e67d3ef657038b6e595d92cfd7c44ff43c90edfc8534bdb7ab6/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 05:45:25 np0005475493 podman[85551]: 2025-10-08 09:45:25.733972059 +0000 UTC m=+0.096223372 container init f9dba830df8e8c5d1edfa23c8edf504d296ea3b1f3860bd37a5c70289b21d492 (image=quay.io/ceph/ceph:v19, name=vibrant_lamport, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Oct  8 05:45:25 np0005475493 podman[85551]: 2025-10-08 09:45:25.739471808 +0000 UTC m=+0.101723121 container start f9dba830df8e8c5d1edfa23c8edf504d296ea3b1f3860bd37a5c70289b21d492 (image=quay.io/ceph/ceph:v19, name=vibrant_lamport, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  8 05:45:25 np0005475493 podman[85551]: 2025-10-08 09:45:25.742490213 +0000 UTC m=+0.104741526 container attach f9dba830df8e8c5d1edfa23c8edf504d296ea3b1f3860bd37a5c70289b21d492 (image=quay.io/ceph/ceph:v19, name=vibrant_lamport, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct  8 05:45:25 np0005475493 podman[85551]: 2025-10-08 09:45:25.660988952 +0000 UTC m=+0.023240285 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  8 05:45:26 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"} v 0)
Oct  8 05:45:26 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3506179030' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]: dispatch
Oct  8 05:45:26 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e25 do_prune osdmap full prune enabled
Oct  8 05:45:26 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Oct  8 05:45:26 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3506179030' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Oct  8 05:45:26 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e26 e26: 3 total, 2 up, 3 in
Oct  8 05:45:26 np0005475493 vibrant_lamport[85566]: enabled application 'cephfs' on pool 'cephfs.cephfs.meta'
Oct  8 05:45:26 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e26: 3 total, 2 up, 3 in
Oct  8 05:45:26 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Oct  8 05:45:26 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct  8 05:45:26 np0005475493 ceph-mgr[73869]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct  8 05:45:26 np0005475493 ceph-mgr[73869]: [progress INFO root] update: starting ev d5e796e6-991f-4eb6-8371-6de3595026e8 (PG autoscaler increasing pool 5 PGs from 1 to 32)
Oct  8 05:45:26 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"} v 0)
Oct  8 05:45:26 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"}]: dispatch
Oct  8 05:45:26 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]: dispatch
Oct  8 05:45:26 np0005475493 ceph-mon[73572]: from='client.? 192.168.122.100:0/3506179030' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]: dispatch
Oct  8 05:45:26 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Oct  8 05:45:26 np0005475493 systemd[1]: libpod-f9dba830df8e8c5d1edfa23c8edf504d296ea3b1f3860bd37a5c70289b21d492.scope: Deactivated successfully.
Oct  8 05:45:26 np0005475493 podman[85551]: 2025-10-08 09:45:26.236256692 +0000 UTC m=+0.598508005 container died f9dba830df8e8c5d1edfa23c8edf504d296ea3b1f3860bd37a5c70289b21d492 (image=quay.io/ceph/ceph:v19, name=vibrant_lamport, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  8 05:45:26 np0005475493 systemd[1]: var-lib-containers-storage-overlay-962d2b9e85bf9e67d3ef657038b6e595d92cfd7c44ff43c90edfc8534bdb7ab6-merged.mount: Deactivated successfully.
Oct  8 05:45:26 np0005475493 podman[85551]: 2025-10-08 09:45:26.274281258 +0000 UTC m=+0.636532571 container remove f9dba830df8e8c5d1edfa23c8edf504d296ea3b1f3860bd37a5c70289b21d492 (image=quay.io/ceph/ceph:v19, name=vibrant_lamport, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  8 05:45:26 np0005475493 systemd[1]: libpod-conmon-f9dba830df8e8c5d1edfa23c8edf504d296ea3b1f3860bd37a5c70289b21d492.scope: Deactivated successfully.
Oct  8 05:45:26 np0005475493 python3[85629]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 787292cc-8154-50c4-9e00-e9be3e817149 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.data cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  8 05:45:26 np0005475493 ceph-mgr[73869]: [progress WARNING root] Starting Global Recovery Event,63 pgs not in active + clean state
Oct  8 05:45:26 np0005475493 podman[85630]: 2025-10-08 09:45:26.630314316 +0000 UTC m=+0.044260426 container create c1639a0c2f220a0cf3b22296c2d07390db8852769a968d66af2c21f03fab86d7 (image=quay.io/ceph/ceph:v19, name=confident_meitner, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  8 05:45:26 np0005475493 systemd[1]: Started libpod-conmon-c1639a0c2f220a0cf3b22296c2d07390db8852769a968d66af2c21f03fab86d7.scope.
Oct  8 05:45:26 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:45:26 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4cb256490d7ed2d4db6de2eec988bcd199b9e95b6ff041938f8df0243d7e184d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 05:45:26 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4cb256490d7ed2d4db6de2eec988bcd199b9e95b6ff041938f8df0243d7e184d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 05:45:26 np0005475493 podman[85630]: 2025-10-08 09:45:26.610368339 +0000 UTC m=+0.024314519 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  8 05:45:26 np0005475493 podman[85630]: 2025-10-08 09:45:26.714867883 +0000 UTC m=+0.128813993 container init c1639a0c2f220a0cf3b22296c2d07390db8852769a968d66af2c21f03fab86d7 (image=quay.io/ceph/ceph:v19, name=confident_meitner, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Oct  8 05:45:26 np0005475493 podman[85630]: 2025-10-08 09:45:26.724692731 +0000 UTC m=+0.138638831 container start c1639a0c2f220a0cf3b22296c2d07390db8852769a968d66af2c21f03fab86d7 (image=quay.io/ceph/ceph:v19, name=confident_meitner, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  8 05:45:26 np0005475493 podman[85630]: 2025-10-08 09:45:26.727840331 +0000 UTC m=+0.141786471 container attach c1639a0c2f220a0cf3b22296c2d07390db8852769a968d66af2c21f03fab86d7 (image=quay.io/ceph/ceph:v19, name=confident_meitner, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Oct  8 05:45:26 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v81: 69 pgs: 1 peering, 62 unknown, 6 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Oct  8 05:45:26 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"} v 0)
Oct  8 05:45:26 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct  8 05:45:26 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"} v 0)
Oct  8 05:45:26 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct  8 05:45:27 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"} v 0)
Oct  8 05:45:27 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3314825613' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]: dispatch
Oct  8 05:45:27 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e26 do_prune osdmap full prune enabled
Oct  8 05:45:27 np0005475493 ceph-mon[73572]: log_channel(cluster) log [WRN] : Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct  8 05:45:27 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"}]': finished
Oct  8 05:45:27 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Oct  8 05:45:27 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Oct  8 05:45:27 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3314825613' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Oct  8 05:45:27 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e27 e27: 3 total, 2 up, 3 in
Oct  8 05:45:27 np0005475493 confident_meitner[85646]: enabled application 'cephfs' on pool 'cephfs.cephfs.data'
Oct  8 05:45:27 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e27: 3 total, 2 up, 3 in
Oct  8 05:45:27 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Oct  8 05:45:27 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct  8 05:45:27 np0005475493 ceph-mgr[73869]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct  8 05:45:27 np0005475493 ceph-mgr[73869]: [progress INFO root] update: starting ev 203c02a5-7e6a-438b-b565-7702405c80f6 (PG autoscaler increasing pool 6 PGs from 1 to 32)
Oct  8 05:45:27 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"} v 0)
Oct  8 05:45:27 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]: dispatch
Oct  8 05:45:27 np0005475493 systemd[1]: libpod-c1639a0c2f220a0cf3b22296c2d07390db8852769a968d66af2c21f03fab86d7.scope: Deactivated successfully.
Oct  8 05:45:27 np0005475493 conmon[85646]: conmon c1639a0c2f220a0cf3b2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c1639a0c2f220a0cf3b22296c2d07390db8852769a968d66af2c21f03fab86d7.scope/container/memory.events
Oct  8 05:45:27 np0005475493 ceph-mon[73572]: from='client.? 192.168.122.100:0/3506179030' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Oct  8 05:45:27 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"}]: dispatch
Oct  8 05:45:27 np0005475493 podman[85630]: 2025-10-08 09:45:27.384224696 +0000 UTC m=+0.798170816 container died c1639a0c2f220a0cf3b22296c2d07390db8852769a968d66af2c21f03fab86d7 (image=quay.io/ceph/ceph:v19, name=confident_meitner, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  8 05:45:27 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct  8 05:45:27 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct  8 05:45:27 np0005475493 ceph-mon[73572]: from='client.? 192.168.122.100:0/3314825613' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]: dispatch
Oct  8 05:45:27 np0005475493 systemd[1]: var-lib-containers-storage-overlay-4cb256490d7ed2d4db6de2eec988bcd199b9e95b6ff041938f8df0243d7e184d-merged.mount: Deactivated successfully.
Oct  8 05:45:27 np0005475493 podman[85630]: 2025-10-08 09:45:27.433412816 +0000 UTC m=+0.847358916 container remove c1639a0c2f220a0cf3b22296c2d07390db8852769a968d66af2c21f03fab86d7 (image=quay.io/ceph/ceph:v19, name=confident_meitner, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  8 05:45:27 np0005475493 systemd[1]: libpod-conmon-c1639a0c2f220a0cf3b22296c2d07390db8852769a968d66af2c21f03fab86d7.scope: Deactivated successfully.
Oct  8 05:45:28 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e27 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  8 05:45:28 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e27 do_prune osdmap full prune enabled
Oct  8 05:45:28 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Oct  8 05:45:28 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e28 e28: 3 total, 2 up, 3 in
Oct  8 05:45:28 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e28: 3 total, 2 up, 3 in
Oct  8 05:45:28 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Oct  8 05:45:28 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct  8 05:45:28 np0005475493 ceph-mgr[73869]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct  8 05:45:28 np0005475493 ceph-mgr[73869]: [progress INFO root] update: starting ev 086b694d-8d81-43a9-9ec1-a27dc45770c2 (PG autoscaler increasing pool 7 PGs from 1 to 32)
Oct  8 05:45:28 np0005475493 ceph-mgr[73869]: [progress INFO root] complete: finished ev 0ec7ed32-6b33-4f8f-9254-a63145d84250 (PG autoscaler increasing pool 2 PGs from 1 to 32)
Oct  8 05:45:28 np0005475493 ceph-mgr[73869]: [progress INFO root] Completed event 0ec7ed32-6b33-4f8f-9254-a63145d84250 (PG autoscaler increasing pool 2 PGs from 1 to 32) in 5 seconds
Oct  8 05:45:28 np0005475493 ceph-mgr[73869]: [progress INFO root] complete: finished ev 2227baea-d11a-4cde-b678-995960ba9c5f (PG autoscaler increasing pool 3 PGs from 1 to 32)
Oct  8 05:45:28 np0005475493 ceph-mgr[73869]: [progress INFO root] Completed event 2227baea-d11a-4cde-b678-995960ba9c5f (PG autoscaler increasing pool 3 PGs from 1 to 32) in 4 seconds
Oct  8 05:45:28 np0005475493 ceph-mgr[73869]: [progress INFO root] complete: finished ev 05a1d3c6-ed35-41d2-9081-50c37b873654 (PG autoscaler increasing pool 4 PGs from 1 to 32)
Oct  8 05:45:28 np0005475493 ceph-mgr[73869]: [progress INFO root] Completed event 05a1d3c6-ed35-41d2-9081-50c37b873654 (PG autoscaler increasing pool 4 PGs from 1 to 32) in 3 seconds
Oct  8 05:45:28 np0005475493 ceph-mgr[73869]: [progress INFO root] complete: finished ev d5e796e6-991f-4eb6-8371-6de3595026e8 (PG autoscaler increasing pool 5 PGs from 1 to 32)
Oct  8 05:45:28 np0005475493 ceph-mgr[73869]: [progress INFO root] Completed event d5e796e6-991f-4eb6-8371-6de3595026e8 (PG autoscaler increasing pool 5 PGs from 1 to 32) in 2 seconds
Oct  8 05:45:28 np0005475493 ceph-mgr[73869]: [progress INFO root] complete: finished ev 203c02a5-7e6a-438b-b565-7702405c80f6 (PG autoscaler increasing pool 6 PGs from 1 to 32)
Oct  8 05:45:28 np0005475493 ceph-mgr[73869]: [progress INFO root] Completed event 203c02a5-7e6a-438b-b565-7702405c80f6 (PG autoscaler increasing pool 6 PGs from 1 to 32) in 1 seconds
Oct  8 05:45:28 np0005475493 ceph-mgr[73869]: [progress INFO root] complete: finished ev 086b694d-8d81-43a9-9ec1-a27dc45770c2 (PG autoscaler increasing pool 7 PGs from 1 to 32)
Oct  8 05:45:28 np0005475493 ceph-mgr[73869]: [progress INFO root] Completed event 086b694d-8d81-43a9-9ec1-a27dc45770c2 (PG autoscaler increasing pool 7 PGs from 1 to 32) in 0 seconds
Oct  8 05:45:28 np0005475493 ceph-mon[73572]: Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct  8 05:45:28 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"}]': finished
Oct  8 05:45:28 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Oct  8 05:45:28 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Oct  8 05:45:28 np0005475493 ceph-mon[73572]: from='client.? 192.168.122.100:0/3314825613' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Oct  8 05:45:28 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]: dispatch
Oct  8 05:45:28 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Oct  8 05:45:28 np0005475493 python3[85757]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_rgw.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  8 05:45:28 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 27 pg[5.0( empty local-lis/les=19/20 n=0 ec=19/19 lis/c=19/19 les/c/f=20/20/0 sis=27 pruub=15.088713646s) [1] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 active pruub 71.723999023s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:45:28 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 25 pg[3.0( empty local-lis/les=17/18 n=0 ec=17/17 lis/c=17/17 les/c/f=18/18/0 sis=25 pruub=13.063826561s) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 active pruub 69.699134827s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:45:28 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 27 pg[4.0( empty local-lis/les=18/19 n=0 ec=18/18 lis/c=18/18 les/c/f=19/19/0 sis=27 pruub=14.060723305s) [1] r=0 lpr=27 pi=[18,27)/1 crt=0'0 mlcod 0'0 active pruub 70.696052551s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:45:28 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 28 pg[3.0( empty local-lis/les=17/18 n=0 ec=17/17 lis/c=17/17 les/c/f=18/18/0 sis=25 pruub=13.063826561s) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 unknown pruub 69.699134827s@ mbc={}] state<Start>: transitioning to Primary
Oct  8 05:45:28 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 28 pg[4.0( empty local-lis/les=18/19 n=0 ec=18/18 lis/c=18/18 les/c/f=19/19/0 sis=27 pruub=14.060723305s) [1] r=0 lpr=27 pi=[18,27)/1 crt=0'0 mlcod 0'0 unknown pruub 70.696052551s@ mbc={}] state<Start>: transitioning to Primary
Oct  8 05:45:28 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 28 pg[3.11( empty local-lis/les=17/18 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:45:28 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 28 pg[3.12( empty local-lis/les=17/18 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:45:28 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 28 pg[3.13( empty local-lis/les=17/18 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:45:28 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 28 pg[5.0( empty local-lis/les=19/20 n=0 ec=19/19 lis/c=19/19 les/c/f=20/20/0 sis=27 pruub=15.088713646s) [1] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 unknown pruub 71.723999023s@ mbc={}] state<Start>: transitioning to Primary
Oct  8 05:45:28 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 28 pg[3.15( empty local-lis/les=17/18 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:45:28 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 28 pg[3.16( empty local-lis/les=17/18 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:45:28 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 28 pg[3.14( empty local-lis/les=17/18 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:45:28 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 28 pg[3.17( empty local-lis/les=17/18 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:45:28 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 28 pg[3.18( empty local-lis/les=17/18 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:45:28 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 28 pg[3.d( empty local-lis/les=17/18 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:45:28 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 28 pg[3.e( empty local-lis/les=17/18 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:45:28 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 28 pg[3.1f( empty local-lis/les=17/18 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:45:28 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 28 pg[3.f( empty local-lis/les=17/18 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:45:28 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 28 pg[3.10( empty local-lis/les=17/18 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:45:28 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 28 pg[3.1b( empty local-lis/les=17/18 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:45:28 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 28 pg[3.1c( empty local-lis/les=17/18 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:45:28 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 28 pg[3.1d( empty local-lis/les=17/18 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:45:28 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 28 pg[3.1e( empty local-lis/les=17/18 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:45:28 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 28 pg[3.3( empty local-lis/les=17/18 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:45:28 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 28 pg[3.4( empty local-lis/les=17/18 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:45:28 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 28 pg[3.1( empty local-lis/les=17/18 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:45:28 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 28 pg[3.2( empty local-lis/les=17/18 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:45:28 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 28 pg[3.5( empty local-lis/les=17/18 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:45:28 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 28 pg[3.6( empty local-lis/les=17/18 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:45:28 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 28 pg[3.7( empty local-lis/les=17/18 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:45:28 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 28 pg[3.8( empty local-lis/les=17/18 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:45:28 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 28 pg[3.9( empty local-lis/les=17/18 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:45:28 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 28 pg[3.b( empty local-lis/les=17/18 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:45:28 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 28 pg[3.c( empty local-lis/les=17/18 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:45:28 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 28 pg[3.19( empty local-lis/les=17/18 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:45:28 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 28 pg[3.1a( empty local-lis/les=17/18 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:45:28 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 28 pg[4.10( empty local-lis/les=18/19 n=0 ec=27/18 lis/c=18/18 les/c/f=19/19/0 sis=27) [1] r=0 lpr=27 pi=[18,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:45:28 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 28 pg[4.11( empty local-lis/les=18/19 n=0 ec=27/18 lis/c=18/18 les/c/f=19/19/0 sis=27) [1] r=0 lpr=27 pi=[18,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:45:28 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 28 pg[4.1a( empty local-lis/les=18/19 n=0 ec=27/18 lis/c=18/18 les/c/f=19/19/0 sis=27) [1] r=0 lpr=27 pi=[18,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:45:28 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 28 pg[4.1b( empty local-lis/les=18/19 n=0 ec=27/18 lis/c=18/18 les/c/f=19/19/0 sis=27) [1] r=0 lpr=27 pi=[18,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:45:28 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 28 pg[4.12( empty local-lis/les=18/19 n=0 ec=27/18 lis/c=18/18 les/c/f=19/19/0 sis=27) [1] r=0 lpr=27 pi=[18,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:45:28 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 28 pg[4.13( empty local-lis/les=18/19 n=0 ec=27/18 lis/c=18/18 les/c/f=19/19/0 sis=27) [1] r=0 lpr=27 pi=[18,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:45:28 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 28 pg[4.1e( empty local-lis/les=18/19 n=0 ec=27/18 lis/c=18/18 les/c/f=19/19/0 sis=27) [1] r=0 lpr=27 pi=[18,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:45:28 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 28 pg[4.1f( empty local-lis/les=18/19 n=0 ec=27/18 lis/c=18/18 les/c/f=19/19/0 sis=27) [1] r=0 lpr=27 pi=[18,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:45:28 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 28 pg[4.14( empty local-lis/les=18/19 n=0 ec=27/18 lis/c=18/18 les/c/f=19/19/0 sis=27) [1] r=0 lpr=27 pi=[18,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:45:28 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 28 pg[4.15( empty local-lis/les=18/19 n=0 ec=27/18 lis/c=18/18 les/c/f=19/19/0 sis=27) [1] r=0 lpr=27 pi=[18,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:45:28 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 28 pg[4.c( empty local-lis/les=18/19 n=0 ec=27/18 lis/c=18/18 les/c/f=19/19/0 sis=27) [1] r=0 lpr=27 pi=[18,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:45:28 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 28 pg[4.d( empty local-lis/les=18/19 n=0 ec=27/18 lis/c=18/18 les/c/f=19/19/0 sis=27) [1] r=0 lpr=27 pi=[18,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:45:28 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 28 pg[4.e( empty local-lis/les=18/19 n=0 ec=27/18 lis/c=18/18 les/c/f=19/19/0 sis=27) [1] r=0 lpr=27 pi=[18,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:45:28 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 28 pg[3.a( empty local-lis/les=17/18 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:45:28 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 28 pg[4.f( empty local-lis/les=18/19 n=0 ec=27/18 lis/c=18/18 les/c/f=19/19/0 sis=27) [1] r=0 lpr=27 pi=[18,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:45:28 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 28 pg[4.16( empty local-lis/les=18/19 n=0 ec=27/18 lis/c=18/18 les/c/f=19/19/0 sis=27) [1] r=0 lpr=27 pi=[18,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:45:28 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 28 pg[4.17( empty local-lis/les=18/19 n=0 ec=27/18 lis/c=18/18 les/c/f=19/19/0 sis=27) [1] r=0 lpr=27 pi=[18,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:45:28 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 28 pg[4.1( empty local-lis/les=18/19 n=0 ec=27/18 lis/c=18/18 les/c/f=19/19/0 sis=27) [1] r=0 lpr=27 pi=[18,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:45:28 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 28 pg[4.2( empty local-lis/les=18/19 n=0 ec=27/18 lis/c=18/18 les/c/f=19/19/0 sis=27) [1] r=0 lpr=27 pi=[18,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:45:28 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 28 pg[4.3( empty local-lis/les=18/19 n=0 ec=27/18 lis/c=18/18 les/c/f=19/19/0 sis=27) [1] r=0 lpr=27 pi=[18,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:45:28 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 28 pg[4.4( empty local-lis/les=18/19 n=0 ec=27/18 lis/c=18/18 les/c/f=19/19/0 sis=27) [1] r=0 lpr=27 pi=[18,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:45:28 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 28 pg[4.5( empty local-lis/les=18/19 n=0 ec=27/18 lis/c=18/18 les/c/f=19/19/0 sis=27) [1] r=0 lpr=27 pi=[18,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:45:28 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 28 pg[4.6( empty local-lis/les=18/19 n=0 ec=27/18 lis/c=18/18 les/c/f=19/19/0 sis=27) [1] r=0 lpr=27 pi=[18,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:45:28 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 28 pg[4.7( empty local-lis/les=18/19 n=0 ec=27/18 lis/c=18/18 les/c/f=19/19/0 sis=27) [1] r=0 lpr=27 pi=[18,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:45:28 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 28 pg[4.8( empty local-lis/les=18/19 n=0 ec=27/18 lis/c=18/18 les/c/f=19/19/0 sis=27) [1] r=0 lpr=27 pi=[18,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:45:28 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 28 pg[4.18( empty local-lis/les=18/19 n=0 ec=27/18 lis/c=18/18 les/c/f=19/19/0 sis=27) [1] r=0 lpr=27 pi=[18,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:45:28 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 28 pg[4.19( empty local-lis/les=18/19 n=0 ec=27/18 lis/c=18/18 les/c/f=19/19/0 sis=27) [1] r=0 lpr=27 pi=[18,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:45:28 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 28 pg[4.9( empty local-lis/les=18/19 n=0 ec=27/18 lis/c=18/18 les/c/f=19/19/0 sis=27) [1] r=0 lpr=27 pi=[18,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:45:28 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 28 pg[4.a( empty local-lis/les=18/19 n=0 ec=27/18 lis/c=18/18 les/c/f=19/19/0 sis=27) [1] r=0 lpr=27 pi=[18,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:45:28 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 28 pg[4.b( empty local-lis/les=18/19 n=0 ec=27/18 lis/c=18/18 les/c/f=19/19/0 sis=27) [1] r=0 lpr=27 pi=[18,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:45:28 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 28 pg[4.1c( empty local-lis/les=18/19 n=0 ec=27/18 lis/c=18/18 les/c/f=19/19/0 sis=27) [1] r=0 lpr=27 pi=[18,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:45:28 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 28 pg[4.1d( empty local-lis/les=18/19 n=0 ec=27/18 lis/c=18/18 les/c/f=19/19/0 sis=27) [1] r=0 lpr=27 pi=[18,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:45:28 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 28 pg[5.4( empty local-lis/les=19/20 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [1] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:45:28 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 28 pg[5.5( empty local-lis/les=19/20 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [1] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:45:28 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 28 pg[5.6( empty local-lis/les=19/20 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [1] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:45:28 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 28 pg[5.7( empty local-lis/les=19/20 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [1] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:45:28 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 28 pg[5.12( empty local-lis/les=19/20 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [1] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:45:28 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 28 pg[5.13( empty local-lis/les=19/20 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [1] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:45:28 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 28 pg[5.16( empty local-lis/les=19/20 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [1] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:45:28 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 28 pg[5.17( empty local-lis/les=19/20 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [1] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:45:28 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 28 pg[5.1a( empty local-lis/les=19/20 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [1] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:45:28 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 28 pg[5.2( empty local-lis/les=19/20 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [1] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:45:28 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 28 pg[5.3( empty local-lis/les=19/20 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [1] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:45:28 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 28 pg[5.a( empty local-lis/les=19/20 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [1] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:45:28 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 28 pg[5.b( empty local-lis/les=19/20 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [1] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:45:28 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 28 pg[5.c( empty local-lis/les=19/20 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [1] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:45:28 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 28 pg[5.d( empty local-lis/les=19/20 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [1] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:45:28 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 28 pg[5.1( empty local-lis/les=19/20 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [1] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:45:28 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 28 pg[5.e( empty local-lis/les=19/20 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [1] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:45:28 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 28 pg[5.f( empty local-lis/les=19/20 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [1] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:45:28 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 28 pg[5.10( empty local-lis/les=19/20 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [1] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:45:28 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 28 pg[5.11( empty local-lis/les=19/20 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [1] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:45:28 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 28 pg[5.15( empty local-lis/les=19/20 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [1] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:45:28 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 28 pg[5.1b( empty local-lis/les=19/20 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [1] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:45:28 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 28 pg[5.1c( empty local-lis/les=19/20 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [1] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:45:28 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 28 pg[5.8( empty local-lis/les=19/20 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [1] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:45:28 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 28 pg[5.9( empty local-lis/les=19/20 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [1] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:45:28 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 28 pg[5.18( empty local-lis/les=19/20 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [1] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:45:28 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 28 pg[5.19( empty local-lis/les=19/20 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [1] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:45:28 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 28 pg[5.14( empty local-lis/les=19/20 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [1] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:45:28 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 28 pg[5.1d( empty local-lis/les=19/20 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [1] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:45:28 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 28 pg[5.1e( empty local-lis/les=19/20 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [1] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:45:28 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 28 pg[5.1f( empty local-lis/les=19/20 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [1] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:45:28 np0005475493 python3[85828]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759916728.161193-33698-228603819106303/source dest=/tmp/ceph_rgw.yml mode=0644 force=True follow=False _original_basename=ceph_rgw.yml.j2 checksum=ad866aa1f51f395809dd7ac5cb7a56d43c167b49 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:45:28 np0005475493 systemd[74898]: Starting Mark boot as successful...
Oct  8 05:45:28 np0005475493 systemd[74898]: Finished Mark boot as successful.
Oct  8 05:45:28 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v84: 131 pgs: 1 peering, 124 unknown, 6 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Oct  8 05:45:28 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"} v 0)
Oct  8 05:45:28 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct  8 05:45:28 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"} v 0)
Oct  8 05:45:28 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct  8 05:45:29 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct  8 05:45:29 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:45:29 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct  8 05:45:29 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:45:29 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e28 do_prune osdmap full prune enabled
Oct  8 05:45:29 np0005475493 ceph-mon[73572]: log_channel(cluster) log [INF] : Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Oct  8 05:45:29 np0005475493 ceph-mon[73572]: log_channel(cluster) log [INF] : Cluster is now healthy
Oct  8 05:45:29 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct  8 05:45:29 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct  8 05:45:29 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:45:29 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:45:29 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Oct  8 05:45:29 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]': finished
Oct  8 05:45:29 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e29 e29: 3 total, 2 up, 3 in
Oct  8 05:45:29 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e29: 3 total, 2 up, 3 in
Oct  8 05:45:29 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Oct  8 05:45:29 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct  8 05:45:29 np0005475493 ceph-mgr[73869]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct  8 05:45:29 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 29 pg[6.0( empty local-lis/les=20/21 n=0 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=29 pruub=15.471620560s) [1] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 active pruub 72.974006653s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:45:29 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 29 pg[6.0( empty local-lis/les=20/21 n=0 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=29 pruub=15.471620560s) [1] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown pruub 72.974006653s@ mbc={}] state<Start>: transitioning to Primary
Oct  8 05:45:29 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 29 pg[4.19( empty local-lis/les=27/29 n=0 ec=27/18 lis/c=18/18 les/c/f=19/19/0 sis=27) [1] r=0 lpr=27 pi=[18,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:45:29 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 29 pg[5.18( empty local-lis/les=27/29 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [1] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:45:29 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 29 pg[3.1f( empty local-lis/les=25/29 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:45:29 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 29 pg[3.1e( empty local-lis/les=25/29 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:45:29 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 29 pg[5.19( empty local-lis/les=27/29 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [1] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:45:29 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 29 pg[3.1d( empty local-lis/les=25/29 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:45:29 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 29 pg[3.1c( empty local-lis/les=25/29 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:45:29 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 29 pg[5.1b( empty local-lis/les=27/29 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [1] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:45:29 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 29 pg[4.1b( empty local-lis/les=27/29 n=0 ec=27/18 lis/c=18/18 les/c/f=19/19/0 sis=27) [1] r=0 lpr=27 pi=[18,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:45:29 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 29 pg[5.1a( empty local-lis/les=27/29 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [1] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:45:29 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 29 pg[4.1a( empty local-lis/les=27/29 n=0 ec=27/18 lis/c=18/18 les/c/f=19/19/0 sis=27) [1] r=0 lpr=27 pi=[18,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:45:29 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 29 pg[4.1c( empty local-lis/les=27/29 n=0 ec=27/18 lis/c=18/18 les/c/f=19/19/0 sis=27) [1] r=0 lpr=27 pi=[18,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:45:29 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 29 pg[5.1d( empty local-lis/les=27/29 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [1] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:45:29 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 29 pg[4.1d( empty local-lis/les=27/29 n=0 ec=27/18 lis/c=18/18 les/c/f=19/19/0 sis=27) [1] r=0 lpr=27 pi=[18,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:45:29 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 29 pg[3.1a( empty local-lis/les=25/29 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:45:29 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 29 pg[4.e( empty local-lis/les=27/29 n=0 ec=27/18 lis/c=18/18 les/c/f=19/19/0 sis=27) [1] r=0 lpr=27 pi=[18,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:45:29 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 29 pg[5.1c( empty local-lis/les=27/29 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [1] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:45:29 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 29 pg[3.9( empty local-lis/les=25/29 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:45:29 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 29 pg[5.f( empty local-lis/les=27/29 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [1] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:45:29 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 29 pg[4.f( empty local-lis/les=27/29 n=0 ec=27/18 lis/c=18/18 les/c/f=19/19/0 sis=27) [1] r=0 lpr=27 pi=[18,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:45:29 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 29 pg[3.8( empty local-lis/les=25/29 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:45:29 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 29 pg[4.18( empty local-lis/les=27/29 n=0 ec=27/18 lis/c=18/18 les/c/f=19/19/0 sis=27) [1] r=0 lpr=27 pi=[18,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:45:29 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 29 pg[5.2( empty local-lis/les=27/29 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [1] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:45:29 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 29 pg[5.e( empty local-lis/les=27/29 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [1] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:45:29 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 29 pg[4.3( empty local-lis/les=27/29 n=0 ec=27/18 lis/c=18/18 les/c/f=19/19/0 sis=27) [1] r=0 lpr=27 pi=[18,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:45:29 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 29 pg[3.1b( empty local-lis/les=25/29 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:45:29 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 29 pg[4.4( empty local-lis/les=27/29 n=0 ec=27/18 lis/c=18/18 les/c/f=19/19/0 sis=27) [1] r=0 lpr=27 pi=[18,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:45:29 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 29 pg[3.4( empty local-lis/les=25/29 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:45:29 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 29 pg[5.5( empty local-lis/les=27/29 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [1] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:45:29 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 29 pg[4.5( empty local-lis/les=27/29 n=0 ec=27/18 lis/c=18/18 les/c/f=19/19/0 sis=27) [1] r=0 lpr=27 pi=[18,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:45:29 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 29 pg[3.2( empty local-lis/les=25/29 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:45:29 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 29 pg[4.6( empty local-lis/les=27/29 n=0 ec=27/18 lis/c=18/18 les/c/f=19/19/0 sis=27) [1] r=0 lpr=27 pi=[18,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:45:29 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 29 pg[5.3( empty local-lis/les=27/29 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [1] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:45:29 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 29 pg[3.5( empty local-lis/les=25/29 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:45:29 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 29 pg[4.2( empty local-lis/les=27/29 n=0 ec=27/18 lis/c=18/18 les/c/f=19/19/0 sis=27) [1] r=0 lpr=27 pi=[18,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:45:29 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 29 pg[3.1( empty local-lis/les=25/29 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:45:29 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 29 pg[3.3( empty local-lis/les=25/29 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:45:29 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 29 pg[3.7( empty local-lis/les=25/29 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:45:29 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 29 pg[3.6( empty local-lis/les=25/29 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:45:29 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 29 pg[4.0( empty local-lis/les=27/29 n=0 ec=18/18 lis/c=18/18 les/c/f=19/19/0 sis=27) [1] r=0 lpr=27 pi=[18,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:45:29 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 29 pg[5.7( empty local-lis/les=27/29 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [1] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:45:29 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 29 pg[5.1( empty local-lis/les=27/29 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [1] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:45:29 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 29 pg[4.7( empty local-lis/les=27/29 n=0 ec=27/18 lis/c=18/18 les/c/f=19/19/0 sis=27) [1] r=0 lpr=27 pi=[18,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:45:29 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 29 pg[4.1( empty local-lis/les=27/29 n=0 ec=27/18 lis/c=18/18 les/c/f=19/19/0 sis=27) [1] r=0 lpr=27 pi=[18,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:45:29 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 29 pg[5.0( empty local-lis/les=27/29 n=0 ec=19/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [1] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:45:29 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 29 pg[5.6( empty local-lis/les=27/29 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [1] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:45:29 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 29 pg[4.d( empty local-lis/les=27/29 n=0 ec=27/18 lis/c=18/18 les/c/f=19/19/0 sis=27) [1] r=0 lpr=27 pi=[18,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:45:29 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 29 pg[5.4( empty local-lis/les=27/29 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [1] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:45:29 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 29 pg[5.c( empty local-lis/les=27/29 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [1] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:45:29 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 29 pg[5.d( empty local-lis/les=27/29 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [1] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:45:29 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 29 pg[3.b( empty local-lis/les=25/29 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:45:29 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 29 pg[3.0( empty local-lis/les=25/29 n=0 ec=17/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:45:29 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 29 pg[4.b( empty local-lis/les=27/29 n=0 ec=27/18 lis/c=18/18 les/c/f=19/19/0 sis=27) [1] r=0 lpr=27 pi=[18,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:45:29 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 29 pg[5.a( empty local-lis/les=27/29 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [1] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:45:29 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 29 pg[4.a( empty local-lis/les=27/29 n=0 ec=27/18 lis/c=18/18 les/c/f=19/19/0 sis=27) [1] r=0 lpr=27 pi=[18,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:45:29 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 29 pg[3.d( empty local-lis/les=25/29 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:45:29 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 29 pg[4.9( empty local-lis/les=27/29 n=0 ec=27/18 lis/c=18/18 les/c/f=19/19/0 sis=27) [1] r=0 lpr=27 pi=[18,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:45:29 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 29 pg[5.b( empty local-lis/les=27/29 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [1] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:45:29 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 29 pg[3.e( empty local-lis/les=25/29 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:45:29 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 29 pg[3.c( empty local-lis/les=25/29 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:45:29 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 29 pg[4.8( empty local-lis/les=27/29 n=0 ec=27/18 lis/c=18/18 les/c/f=19/19/0 sis=27) [1] r=0 lpr=27 pi=[18,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:45:29 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 29 pg[3.f( empty local-lis/les=25/29 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:45:29 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 29 pg[5.9( empty local-lis/les=27/29 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [1] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:45:29 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 29 pg[4.17( empty local-lis/les=27/29 n=0 ec=27/18 lis/c=18/18 les/c/f=19/19/0 sis=27) [1] r=0 lpr=27 pi=[18,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:45:29 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 29 pg[3.a( empty local-lis/les=25/29 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:45:29 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 29 pg[5.16( empty local-lis/les=27/29 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [1] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:45:29 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 29 pg[4.16( empty local-lis/les=27/29 n=0 ec=27/18 lis/c=18/18 les/c/f=19/19/0 sis=27) [1] r=0 lpr=27 pi=[18,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:45:29 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 29 pg[3.11( empty local-lis/les=25/29 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:45:29 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 29 pg[3.10( empty local-lis/les=25/29 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:45:29 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 29 pg[5.17( empty local-lis/les=27/29 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [1] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:45:29 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 29 pg[3.12( empty local-lis/les=25/29 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:45:29 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 29 pg[4.15( empty local-lis/les=27/29 n=0 ec=27/18 lis/c=18/18 les/c/f=19/19/0 sis=27) [1] r=0 lpr=27 pi=[18,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:45:29 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 29 pg[5.14( empty local-lis/les=27/29 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [1] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:45:29 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 29 pg[4.c( empty local-lis/les=27/29 n=0 ec=27/18 lis/c=18/18 les/c/f=19/19/0 sis=27) [1] r=0 lpr=27 pi=[18,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:45:29 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 29 pg[4.14( empty local-lis/les=27/29 n=0 ec=27/18 lis/c=18/18 les/c/f=19/19/0 sis=27) [1] r=0 lpr=27 pi=[18,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:45:29 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 29 pg[4.13( empty local-lis/les=27/29 n=0 ec=27/18 lis/c=18/18 les/c/f=19/19/0 sis=27) [1] r=0 lpr=27 pi=[18,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:45:29 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 29 pg[3.13( empty local-lis/les=25/29 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:45:29 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 29 pg[5.12( empty local-lis/les=27/29 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [1] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:45:29 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 29 pg[3.14( empty local-lis/les=25/29 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:45:29 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 29 pg[4.12( empty local-lis/les=27/29 n=0 ec=27/18 lis/c=18/18 les/c/f=19/19/0 sis=27) [1] r=0 lpr=27 pi=[18,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:45:29 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 29 pg[5.13( empty local-lis/les=27/29 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [1] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:45:29 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 29 pg[4.11( empty local-lis/les=27/29 n=0 ec=27/18 lis/c=18/18 les/c/f=19/19/0 sis=27) [1] r=0 lpr=27 pi=[18,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:45:29 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 29 pg[5.15( empty local-lis/les=27/29 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [1] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:45:29 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 29 pg[3.16( empty local-lis/les=25/29 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:45:29 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 29 pg[5.10( empty local-lis/les=27/29 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [1] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:45:29 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 29 pg[4.10( empty local-lis/les=27/29 n=0 ec=27/18 lis/c=18/18 les/c/f=19/19/0 sis=27) [1] r=0 lpr=27 pi=[18,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:45:29 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 29 pg[5.8( empty local-lis/les=27/29 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [1] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:45:29 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 29 pg[3.17( empty local-lis/les=25/29 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:45:29 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 29 pg[5.11( empty local-lis/les=27/29 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [1] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:45:29 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 29 pg[3.18( empty local-lis/les=25/29 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:45:29 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 29 pg[4.1f( empty local-lis/les=27/29 n=0 ec=27/18 lis/c=18/18 les/c/f=19/19/0 sis=27) [1] r=0 lpr=27 pi=[18,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:45:29 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 29 pg[3.19( empty local-lis/les=25/29 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:45:29 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 29 pg[5.1e( empty local-lis/les=27/29 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [1] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:45:29 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 29 pg[4.1e( empty local-lis/les=27/29 n=0 ec=27/18 lis/c=18/18 les/c/f=19/19/0 sis=27) [1] r=0 lpr=27 pi=[18,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:45:29 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 29 pg[5.1f( empty local-lis/les=27/29 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [1] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:45:29 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 29 pg[3.15( empty local-lis/les=25/29 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:45:29 np0005475493 python3[85931]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  8 05:45:29 np0005475493 python3[86006]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759916729.1539378-33712-206561102441581/source dest=/home/ceph-admin/assimilate_ceph.conf owner=167 group=167 mode=0644 follow=False _original_basename=ceph_rgw.conf.j2 checksum=39cc0911497a7006f64158006f884d8a68db01c1 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:45:29 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 4.19 scrub starts
Oct  8 05:45:29 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 4.19 scrub ok
Oct  8 05:45:30 np0005475493 python3[86056]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 787292cc-8154-50c4-9e00-e9be3e817149 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config assimilate-conf -i /home/assimilate_ceph.conf#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  8 05:45:30 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e29 do_prune osdmap full prune enabled
Oct  8 05:45:30 np0005475493 ceph-mon[73572]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Oct  8 05:45:30 np0005475493 ceph-mon[73572]: Cluster is now healthy
Oct  8 05:45:30 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Oct  8 05:45:30 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]': finished
Oct  8 05:45:30 np0005475493 podman[86057]: 2025-10-08 09:45:30.468943878 +0000 UTC m=+0.107578564 container create 6eeba7e503e5cf5e724d6fedb3016a017c828f73b132c4baa5acdc2c869b0fa7 (image=quay.io/ceph/ceph:v19, name=bold_yonath, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  8 05:45:30 np0005475493 podman[86057]: 2025-10-08 09:45:30.386684726 +0000 UTC m=+0.025319422 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  8 05:45:30 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e30 e30: 3 total, 2 up, 3 in
Oct  8 05:45:30 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e30: 3 total, 2 up, 3 in
Oct  8 05:45:30 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Oct  8 05:45:30 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct  8 05:45:30 np0005475493 ceph-mgr[73869]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct  8 05:45:30 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 30 pg[6.1a( empty local-lis/les=20/21 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [1] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:45:30 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 30 pg[6.1b( empty local-lis/les=20/21 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [1] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:45:30 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 30 pg[6.18( empty local-lis/les=20/21 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [1] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:45:30 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 30 pg[6.1e( empty local-lis/les=20/21 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [1] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:45:30 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 30 pg[6.19( empty local-lis/les=20/21 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [1] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:45:30 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 30 pg[6.1f( empty local-lis/les=20/21 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [1] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:45:30 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 30 pg[6.c( empty local-lis/les=20/21 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [1] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:45:30 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 30 pg[6.d( empty local-lis/les=20/21 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [1] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:45:30 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 30 pg[6.1( empty local-lis/les=20/21 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [1] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:45:30 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 30 pg[6.6( empty local-lis/les=20/21 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [1] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:45:30 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 30 pg[6.7( empty local-lis/les=20/21 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [1] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:45:30 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 30 pg[6.4( empty local-lis/les=20/21 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [1] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:45:30 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 30 pg[6.3( empty local-lis/les=20/21 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [1] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:45:30 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 30 pg[6.2( empty local-lis/les=20/21 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [1] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:45:30 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 30 pg[6.5( empty local-lis/les=20/21 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [1] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:45:30 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 30 pg[6.e( empty local-lis/les=20/21 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [1] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:45:30 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 30 pg[6.f( empty local-lis/les=20/21 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [1] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:45:30 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 30 pg[6.9( empty local-lis/les=20/21 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [1] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:45:30 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 30 pg[6.8( empty local-lis/les=20/21 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [1] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:45:30 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 30 pg[6.b( empty local-lis/les=20/21 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [1] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:45:30 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 30 pg[6.a( empty local-lis/les=20/21 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [1] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:45:30 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 30 pg[6.15( empty local-lis/les=20/21 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [1] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:45:30 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 30 pg[6.14( empty local-lis/les=20/21 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [1] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:45:30 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 30 pg[6.17( empty local-lis/les=20/21 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [1] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:45:30 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 30 pg[6.16( empty local-lis/les=20/21 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [1] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:45:30 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 30 pg[6.11( empty local-lis/les=20/21 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [1] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:45:30 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 30 pg[6.10( empty local-lis/les=20/21 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [1] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:45:30 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 30 pg[6.13( empty local-lis/les=20/21 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [1] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:45:30 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 30 pg[6.12( empty local-lis/les=20/21 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [1] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:45:30 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 30 pg[6.1d( empty local-lis/les=20/21 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [1] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:45:30 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 30 pg[6.1c( empty local-lis/les=20/21 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [1] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:45:30 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 30 pg[6.18( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [1] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:45:30 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 30 pg[6.1b( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [1] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:45:30 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 30 pg[6.19( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [1] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:45:30 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 30 pg[6.1e( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [1] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:45:30 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 30 pg[6.1f( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [1] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:45:30 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 30 pg[6.d( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [1] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:45:30 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 30 pg[6.1a( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [1] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:45:30 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 30 pg[6.c( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [1] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:45:30 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 30 pg[6.1( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [1] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:45:30 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 30 pg[6.6( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [1] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:45:30 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 30 pg[6.7( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [1] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:45:30 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 30 pg[6.4( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [1] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:45:30 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 30 pg[6.3( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [1] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:45:30 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 30 pg[6.0( empty local-lis/les=29/30 n=0 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [1] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:45:30 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 30 pg[6.e( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [1] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:45:30 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 30 pg[6.5( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [1] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:45:30 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 30 pg[6.2( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [1] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:45:30 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 30 pg[6.9( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [1] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:45:30 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 30 pg[6.8( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [1] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:45:30 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 30 pg[6.b( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [1] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:45:30 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 30 pg[6.a( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [1] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:45:30 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 30 pg[6.f( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [1] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:45:30 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 30 pg[6.14( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [1] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:45:30 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 30 pg[6.15( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [1] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:45:30 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 30 pg[6.17( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [1] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:45:30 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 30 pg[6.10( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [1] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:45:30 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 30 pg[6.11( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [1] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:45:30 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 30 pg[6.16( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [1] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:45:30 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 30 pg[6.13( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [1] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:45:30 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 30 pg[6.1d( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [1] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:45:30 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 30 pg[6.12( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [1] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:45:30 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 30 pg[6.1c( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [1] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:45:30 np0005475493 systemd[1]: Started libpod-conmon-6eeba7e503e5cf5e724d6fedb3016a017c828f73b132c4baa5acdc2c869b0fa7.scope.
Oct  8 05:45:30 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:45:30 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/786e00bb1f81d45bec0f865176f11a69c5c4a917bc95cc37be6bb32e46bb904f/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 05:45:30 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/786e00bb1f81d45bec0f865176f11a69c5c4a917bc95cc37be6bb32e46bb904f/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct  8 05:45:30 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/786e00bb1f81d45bec0f865176f11a69c5c4a917bc95cc37be6bb32e46bb904f/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 05:45:30 np0005475493 podman[86057]: 2025-10-08 09:45:30.554212934 +0000 UTC m=+0.192847650 container init 6eeba7e503e5cf5e724d6fedb3016a017c828f73b132c4baa5acdc2c869b0fa7 (image=quay.io/ceph/ceph:v19, name=bold_yonath, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  8 05:45:30 np0005475493 podman[86057]: 2025-10-08 09:45:30.562579611 +0000 UTC m=+0.201214307 container start 6eeba7e503e5cf5e724d6fedb3016a017c828f73b132c4baa5acdc2c869b0fa7 (image=quay.io/ceph/ceph:v19, name=bold_yonath, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  8 05:45:30 np0005475493 podman[86057]: 2025-10-08 09:45:30.565831446 +0000 UTC m=+0.204466152 container attach 6eeba7e503e5cf5e724d6fedb3016a017c828f73b132c4baa5acdc2c869b0fa7 (image=quay.io/ceph/ceph:v19, name=bold_yonath, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Oct  8 05:45:30 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config assimilate-conf"} v 0)
Oct  8 05:45:30 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2962484888' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Oct  8 05:45:30 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2962484888' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Oct  8 05:45:30 np0005475493 bold_yonath[86072]: 
Oct  8 05:45:30 np0005475493 bold_yonath[86072]: [global]
Oct  8 05:45:30 np0005475493 bold_yonath[86072]: 	fsid = 787292cc-8154-50c4-9e00-e9be3e817149
Oct  8 05:45:30 np0005475493 bold_yonath[86072]: 	mon_host = 192.168.122.100
Oct  8 05:45:30 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v87: 193 pgs: 124 unknown, 69 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Oct  8 05:45:30 np0005475493 systemd[1]: libpod-6eeba7e503e5cf5e724d6fedb3016a017c828f73b132c4baa5acdc2c869b0fa7.scope: Deactivated successfully.
Oct  8 05:45:30 np0005475493 podman[86057]: 2025-10-08 09:45:30.970415137 +0000 UTC m=+0.609049863 container died 6eeba7e503e5cf5e724d6fedb3016a017c828f73b132c4baa5acdc2c869b0fa7 (image=quay.io/ceph/ceph:v19, name=bold_yonath, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct  8 05:45:30 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 5.18 scrub starts
Oct  8 05:45:30 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 5.18 scrub ok
Oct  8 05:45:31 np0005475493 systemd[1]: var-lib-containers-storage-overlay-786e00bb1f81d45bec0f865176f11a69c5c4a917bc95cc37be6bb32e46bb904f-merged.mount: Deactivated successfully.
Oct  8 05:45:31 np0005475493 podman[86057]: 2025-10-08 09:45:31.054658691 +0000 UTC m=+0.693293367 container remove 6eeba7e503e5cf5e724d6fedb3016a017c828f73b132c4baa5acdc2c869b0fa7 (image=quay.io/ceph/ceph:v19, name=bold_yonath, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0)
Oct  8 05:45:31 np0005475493 systemd[1]: libpod-conmon-6eeba7e503e5cf5e724d6fedb3016a017c828f73b132c4baa5acdc2c869b0fa7.scope: Deactivated successfully.
Oct  8 05:45:31 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct  8 05:45:31 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:45:31 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct  8 05:45:31 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:45:31 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]} v 0)
Oct  8 05:45:31 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Oct  8 05:45:31 np0005475493 python3[86137]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 787292cc-8154-50c4-9e00-e9be3e817149 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config-key set ssl_option no_sslv2:sslv3:no_tlsv1:no_tlsv1_1#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  8 05:45:31 np0005475493 podman[86163]: 2025-10-08 09:45:31.484058231 +0000 UTC m=+0.039845264 container create 1bc84e706ed50e20bb08412e4fa01fe25148e2fd6b2e203811b5c5b7d7749ba8 (image=quay.io/ceph/ceph:v19, name=pensive_wozniak, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default)
Oct  8 05:45:31 np0005475493 ceph-mon[73572]: from='client.? 192.168.122.100:0/2962484888' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Oct  8 05:45:31 np0005475493 ceph-mon[73572]: from='client.? 192.168.122.100:0/2962484888' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Oct  8 05:45:31 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:45:31 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:45:31 np0005475493 ceph-mon[73572]: from='osd.2 [v2:192.168.122.102:6800/2890316650,v1:192.168.122.102:6801/2890316650]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Oct  8 05:45:31 np0005475493 ceph-mon[73572]: from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Oct  8 05:45:31 np0005475493 systemd[1]: Started libpod-conmon-1bc84e706ed50e20bb08412e4fa01fe25148e2fd6b2e203811b5c5b7d7749ba8.scope.
Oct  8 05:45:31 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:45:31 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/314f9020b44ee769459974a96c34bfb6f89f3181d0dbf7cf44026f2db47ea1e9/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct  8 05:45:31 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/314f9020b44ee769459974a96c34bfb6f89f3181d0dbf7cf44026f2db47ea1e9/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 05:45:31 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/314f9020b44ee769459974a96c34bfb6f89f3181d0dbf7cf44026f2db47ea1e9/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 05:45:31 np0005475493 podman[86163]: 2025-10-08 09:45:31.54405861 +0000 UTC m=+0.099845663 container init 1bc84e706ed50e20bb08412e4fa01fe25148e2fd6b2e203811b5c5b7d7749ba8 (image=quay.io/ceph/ceph:v19, name=pensive_wozniak, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct  8 05:45:31 np0005475493 podman[86163]: 2025-10-08 09:45:31.549163701 +0000 UTC m=+0.104950734 container start 1bc84e706ed50e20bb08412e4fa01fe25148e2fd6b2e203811b5c5b7d7749ba8 (image=quay.io/ceph/ceph:v19, name=pensive_wozniak, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  8 05:45:31 np0005475493 podman[86163]: 2025-10-08 09:45:31.55227063 +0000 UTC m=+0.108057673 container attach 1bc84e706ed50e20bb08412e4fa01fe25148e2fd6b2e203811b5c5b7d7749ba8 (image=quay.io/ceph/ceph:v19, name=pensive_wozniak, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  8 05:45:31 np0005475493 podman[86163]: 2025-10-08 09:45:31.469210445 +0000 UTC m=+0.024997498 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  8 05:45:31 np0005475493 ceph-mgr[73869]: [progress INFO root] Writing back 11 completed events
Oct  8 05:45:31 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct  8 05:45:31 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:45:32 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 3.1f scrub starts
Oct  8 05:45:32 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=ssl_option}] v 0)
Oct  8 05:45:32 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 3.1f scrub ok
Oct  8 05:45:32 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2319921068' entity='client.admin' 
Oct  8 05:45:32 np0005475493 pensive_wozniak[86178]: set ssl_option
Oct  8 05:45:32 np0005475493 systemd[1]: libpod-1bc84e706ed50e20bb08412e4fa01fe25148e2fd6b2e203811b5c5b7d7749ba8.scope: Deactivated successfully.
Oct  8 05:45:32 np0005475493 podman[86163]: 2025-10-08 09:45:32.051253086 +0000 UTC m=+0.607040139 container died 1bc84e706ed50e20bb08412e4fa01fe25148e2fd6b2e203811b5c5b7d7749ba8 (image=quay.io/ceph/ceph:v19, name=pensive_wozniak, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  8 05:45:32 np0005475493 systemd[1]: var-lib-containers-storage-overlay-314f9020b44ee769459974a96c34bfb6f89f3181d0dbf7cf44026f2db47ea1e9-merged.mount: Deactivated successfully.
Oct  8 05:45:32 np0005475493 podman[86163]: 2025-10-08 09:45:32.098509366 +0000 UTC m=+0.654296409 container remove 1bc84e706ed50e20bb08412e4fa01fe25148e2fd6b2e203811b5c5b7d7749ba8 (image=quay.io/ceph/ceph:v19, name=pensive_wozniak, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True)
Oct  8 05:45:32 np0005475493 systemd[1]: libpod-conmon-1bc84e706ed50e20bb08412e4fa01fe25148e2fd6b2e203811b5c5b7d7749ba8.scope: Deactivated successfully.
Oct  8 05:45:32 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e30 do_prune osdmap full prune enabled
Oct  8 05:45:32 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Oct  8 05:45:32 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e31 e31: 3 total, 2 up, 3 in
Oct  8 05:45:32 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e31: 3 total, 2 up, 3 in
Oct  8 05:45:32 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Oct  8 05:45:32 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct  8 05:45:32 np0005475493 ceph-mgr[73869]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct  8 05:45:32 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]} v 0)
Oct  8 05:45:32 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]}]: dispatch
Oct  8 05:45:32 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e31 create-or-move crush item name 'osd.2' initial_weight 0.0195 at location {host=compute-2,root=default}
Oct  8 05:45:32 np0005475493 python3[86322]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 787292cc-8154-50c4-9e00-e9be3e817149 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  8 05:45:32 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:45:32 np0005475493 ceph-mon[73572]: from='client.? 192.168.122.100:0/2319921068' entity='client.admin' 
Oct  8 05:45:32 np0005475493 ceph-mon[73572]: from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Oct  8 05:45:32 np0005475493 ceph-mon[73572]: from='osd.2 [v2:192.168.122.102:6800/2890316650,v1:192.168.122.102:6801/2890316650]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]}]: dispatch
Oct  8 05:45:32 np0005475493 ceph-mon[73572]: from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]}]: dispatch
Oct  8 05:45:32 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct  8 05:45:32 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:45:32 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct  8 05:45:32 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:45:32 np0005475493 podman[86323]: 2025-10-08 09:45:32.571215932 +0000 UTC m=+0.063339199 container create 3a71b939d02f6e3384222f16d1bd39cb82cfa21def95589c2ae465e977a94ff2 (image=quay.io/ceph/ceph:v19, name=recursing_knuth, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  8 05:45:32 np0005475493 systemd[1]: Started libpod-conmon-3a71b939d02f6e3384222f16d1bd39cb82cfa21def95589c2ae465e977a94ff2.scope.
Oct  8 05:45:32 np0005475493 podman[86323]: 2025-10-08 09:45:32.5480194 +0000 UTC m=+0.040142677 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  8 05:45:32 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:45:32 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/470b765adf329324f93b33a71d3c832e375d3c3f6be104e7473d0fd439b61792/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 05:45:32 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/470b765adf329324f93b33a71d3c832e375d3c3f6be104e7473d0fd439b61792/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 05:45:32 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/470b765adf329324f93b33a71d3c832e375d3c3f6be104e7473d0fd439b61792/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct  8 05:45:32 np0005475493 podman[86323]: 2025-10-08 09:45:32.672839807 +0000 UTC m=+0.164963134 container init 3a71b939d02f6e3384222f16d1bd39cb82cfa21def95589c2ae465e977a94ff2 (image=quay.io/ceph/ceph:v19, name=recursing_knuth, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct  8 05:45:32 np0005475493 podman[86323]: 2025-10-08 09:45:32.680827749 +0000 UTC m=+0.172951016 container start 3a71b939d02f6e3384222f16d1bd39cb82cfa21def95589c2ae465e977a94ff2 (image=quay.io/ceph/ceph:v19, name=recursing_knuth, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  8 05:45:32 np0005475493 podman[86323]: 2025-10-08 09:45:32.686332347 +0000 UTC m=+0.178455664 container attach 3a71b939d02f6e3384222f16d1bd39cb82cfa21def95589c2ae465e977a94ff2 (image=quay.io/ceph/ceph:v19, name=recursing_knuth, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default)
Oct  8 05:45:32 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct  8 05:45:32 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:45:32 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct  8 05:45:32 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:45:32 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v89: 193 pgs: 193 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Oct  8 05:45:32 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"} v 0)
Oct  8 05:45:32 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct  8 05:45:32 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"} v 0)
Oct  8 05:45:32 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct  8 05:45:32 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"} v 0)
Oct  8 05:45:32 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct  8 05:45:32 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"} v 0)
Oct  8 05:45:32 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct  8 05:45:32 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"} v 0)
Oct  8 05:45:32 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct  8 05:45:32 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"} v 0)
Oct  8 05:45:32 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct  8 05:45:33 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 3.1e scrub starts
Oct  8 05:45:33 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 3.1e scrub ok
Oct  8 05:45:33 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.14298 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Oct  8 05:45:33 np0005475493 ceph-mgr[73869]: [cephadm INFO root] Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Oct  8 05:45:33 np0005475493 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Oct  8 05:45:33 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Oct  8 05:45:33 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:45:33 np0005475493 ceph-mgr[73869]: [cephadm INFO root] Saving service ingress.rgw.default spec with placement count:2
Oct  8 05:45:33 np0005475493 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Saving service ingress.rgw.default spec with placement count:2
Oct  8 05:45:33 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0)
Oct  8 05:45:33 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:45:33 np0005475493 recursing_knuth[86338]: Scheduled rgw.rgw update...
Oct  8 05:45:33 np0005475493 recursing_knuth[86338]: Scheduled ingress.rgw.default update...
Oct  8 05:45:33 np0005475493 systemd[1]: libpod-3a71b939d02f6e3384222f16d1bd39cb82cfa21def95589c2ae465e977a94ff2.scope: Deactivated successfully.
Oct  8 05:45:33 np0005475493 podman[86323]: 2025-10-08 09:45:33.121682973 +0000 UTC m=+0.613806230 container died 3a71b939d02f6e3384222f16d1bd39cb82cfa21def95589c2ae465e977a94ff2 (image=quay.io/ceph/ceph:v19, name=recursing_knuth, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  8 05:45:33 np0005475493 systemd[1]: var-lib-containers-storage-overlay-470b765adf329324f93b33a71d3c832e375d3c3f6be104e7473d0fd439b61792-merged.mount: Deactivated successfully.
Oct  8 05:45:33 np0005475493 podman[86323]: 2025-10-08 09:45:33.17342763 +0000 UTC m=+0.665550887 container remove 3a71b939d02f6e3384222f16d1bd39cb82cfa21def95589c2ae465e977a94ff2 (image=quay.io/ceph/ceph:v19, name=recursing_knuth, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Oct  8 05:45:33 np0005475493 systemd[1]: libpod-conmon-3a71b939d02f6e3384222f16d1bd39cb82cfa21def95589c2ae465e977a94ff2.scope: Deactivated successfully.
Oct  8 05:45:33 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e31 do_prune osdmap full prune enabled
Oct  8 05:45:33 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]}]': finished
Oct  8 05:45:33 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Oct  8 05:45:33 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Oct  8 05:45:33 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Oct  8 05:45:33 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Oct  8 05:45:33 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Oct  8 05:45:33 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Oct  8 05:45:33 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e32 e32: 3 total, 2 up, 3 in
Oct  8 05:45:33 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e32: 3 total, 2 up, 3 in
Oct  8 05:45:33 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Oct  8 05:45:33 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct  8 05:45:33 np0005475493 ceph-mgr[73869]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct  8 05:45:33 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[4.18( empty local-lis/les=27/29 n=0 ec=27/18 lis/c=27/27 les/c/f=29/29/0 sis=32 pruub=12.121037483s) [0] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active pruub 73.512550354s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:45:33 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[6.1a( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=32 pruub=13.178676605s) [0] r=-1 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 active pruub 74.570198059s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:45:33 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[4.18( empty local-lis/les=27/29 n=0 ec=27/18 lis/c=27/27 les/c/f=29/29/0 sis=32 pruub=12.120996475s) [0] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.512550354s@ mbc={}] state<Start>: transitioning to Stray
Oct  8 05:45:33 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[6.1a( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=32 pruub=13.178634644s) [0] r=-1 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 74.570198059s@ mbc={}] state<Start>: transitioning to Stray
Oct  8 05:45:33 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[4.19( empty local-lis/les=27/29 n=0 ec=27/18 lis/c=27/27 les/c/f=29/29/0 sis=32 pruub=12.114104271s) [] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active pruub 73.505683899s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:45:33 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[4.19( empty local-lis/les=27/29 n=0 ec=27/18 lis/c=27/27 les/c/f=29/29/0 sis=32 pruub=12.114104271s) [] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.505683899s@ mbc={}] state<Start>: transitioning to Stray
Oct  8 05:45:33 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[5.18( empty local-lis/les=27/29 n=0 ec=27/19 lis/c=27/27 les/c/f=29/29/0 sis=32 pruub=12.114066124s) [0] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active pruub 73.505737305s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:45:33 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[5.18( empty local-lis/les=27/29 n=0 ec=27/19 lis/c=27/27 les/c/f=29/29/0 sis=32 pruub=12.114026070s) [0] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.505737305s@ mbc={}] state<Start>: transitioning to Stray
Oct  8 05:45:33 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[6.1b( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=32 pruub=13.174949646s) [] r=-1 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 active pruub 74.566734314s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:45:33 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[6.1b( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=32 pruub=13.174949646s) [] r=-1 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 74.566734314s@ mbc={}] state<Start>: transitioning to Stray
Oct  8 05:45:33 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[3.1d( empty local-lis/les=25/29 n=0 ec=25/17 lis/c=25/25 les/c/f=29/29/0 sis=32 pruub=12.119994164s) [] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 active pruub 73.511878967s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:45:33 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[3.1d( empty local-lis/les=25/29 n=0 ec=25/17 lis/c=25/25 les/c/f=29/29/0 sis=32 pruub=12.119994164s) [] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.511878967s@ mbc={}] state<Start>: transitioning to Stray
Oct  8 05:45:33 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[5.1b( empty local-lis/les=27/29 n=0 ec=27/19 lis/c=27/27 les/c/f=29/29/0 sis=32 pruub=12.119948387s) [0] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active pruub 73.511909485s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:45:33 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[4.1a( empty local-lis/les=27/29 n=0 ec=27/18 lis/c=27/27 les/c/f=29/29/0 sis=32 pruub=12.119895935s) [0] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active pruub 73.511878967s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:45:33 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[5.1b( empty local-lis/les=27/29 n=0 ec=27/19 lis/c=27/27 les/c/f=29/29/0 sis=32 pruub=12.119919777s) [0] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.511909485s@ mbc={}] state<Start>: transitioning to Stray
Oct  8 05:45:33 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[4.1a( empty local-lis/les=27/29 n=0 ec=27/18 lis/c=27/27 les/c/f=29/29/0 sis=32 pruub=12.119863510s) [0] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.511878967s@ mbc={}] state<Start>: transitioning to Stray
Oct  8 05:45:33 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[3.1c( empty local-lis/les=25/29 n=0 ec=25/17 lis/c=25/25 les/c/f=29/29/0 sis=32 pruub=12.119695663s) [0] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 active pruub 73.511886597s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:45:33 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[4.1b( empty local-lis/les=27/29 n=0 ec=27/18 lis/c=27/27 les/c/f=29/29/0 sis=32 pruub=12.119695663s) [0] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active pruub 73.511962891s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:45:33 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[3.1c( empty local-lis/les=25/29 n=0 ec=25/17 lis/c=25/25 les/c/f=29/29/0 sis=32 pruub=12.119617462s) [0] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.511886597s@ mbc={}] state<Start>: transitioning to Stray
Oct  8 05:45:33 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[4.1b( empty local-lis/les=27/29 n=0 ec=27/18 lis/c=27/27 les/c/f=29/29/0 sis=32 pruub=12.119663239s) [0] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.511962891s@ mbc={}] state<Start>: transitioning to Stray
Oct  8 05:45:33 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[5.1a( empty local-lis/les=27/29 n=0 ec=27/19 lis/c=27/27 les/c/f=29/29/0 sis=32 pruub=12.119583130s) [] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active pruub 73.512023926s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:45:33 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[5.1a( empty local-lis/les=27/29 n=0 ec=27/19 lis/c=27/27 les/c/f=29/29/0 sis=32 pruub=12.119583130s) [] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.512023926s@ mbc={}] state<Start>: transitioning to Stray
Oct  8 05:45:33 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[6.19( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=32 pruub=13.177409172s) [0] r=-1 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 active pruub 74.570030212s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:45:33 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[3.1b( empty local-lis/les=25/29 n=0 ec=25/17 lis/c=25/25 les/c/f=29/29/0 sis=32 pruub=12.119441032s) [] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 active pruub 73.512107849s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:45:33 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[6.19( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=32 pruub=13.177370071s) [0] r=-1 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 74.570030212s@ mbc={}] state<Start>: transitioning to Stray
Oct  8 05:45:33 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[6.1e( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=32 pruub=13.177398682s) [] r=-1 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 active pruub 74.570114136s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:45:33 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[3.1b( empty local-lis/les=25/29 n=0 ec=25/17 lis/c=25/25 les/c/f=29/29/0 sis=32 pruub=12.119441032s) [] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.512107849s@ mbc={}] state<Start>: transitioning to Stray
Oct  8 05:45:33 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[6.1e( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=32 pruub=13.177398682s) [] r=-1 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 74.570114136s@ mbc={}] state<Start>: transitioning to Stray
Oct  8 05:45:33 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[5.1c( empty local-lis/les=27/29 n=0 ec=27/19 lis/c=27/27 les/c/f=29/29/0 sis=32 pruub=12.119413376s) [0] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active pruub 73.512397766s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:45:33 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[3.1a( empty local-lis/les=25/29 n=0 ec=25/17 lis/c=25/25 les/c/f=29/29/0 sis=32 pruub=12.119321823s) [] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 active pruub 73.512313843s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:45:33 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[4.1c( empty local-lis/les=27/29 n=0 ec=27/18 lis/c=27/27 les/c/f=29/29/0 sis=32 pruub=12.119057655s) [] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active pruub 73.512062073s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:45:33 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[5.1c( empty local-lis/les=27/29 n=0 ec=27/19 lis/c=27/27 les/c/f=29/29/0 sis=32 pruub=12.119380951s) [0] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.512397766s@ mbc={}] state<Start>: transitioning to Stray
Oct  8 05:45:33 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[3.1a( empty local-lis/les=25/29 n=0 ec=25/17 lis/c=25/25 les/c/f=29/29/0 sis=32 pruub=12.119321823s) [] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.512313843s@ mbc={}] state<Start>: transitioning to Stray
Oct  8 05:45:33 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[4.1c( empty local-lis/les=27/29 n=0 ec=27/18 lis/c=27/27 les/c/f=29/29/0 sis=32 pruub=12.119057655s) [] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.512062073s@ mbc={}] state<Start>: transitioning to Stray
Oct  8 05:45:33 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[4.1d( empty local-lis/les=27/29 n=0 ec=27/18 lis/c=27/27 les/c/f=29/29/0 sis=32 pruub=12.119115829s) [] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active pruub 73.512184143s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:45:33 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[4.1d( empty local-lis/les=27/29 n=0 ec=27/18 lis/c=27/27 les/c/f=29/29/0 sis=32 pruub=12.119115829s) [] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.512184143s@ mbc={}] state<Start>: transitioning to Stray
Oct  8 05:45:33 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[4.e( empty local-lis/les=27/29 n=0 ec=27/18 lis/c=27/27 les/c/f=29/29/0 sis=32 pruub=12.119060516s) [0] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active pruub 73.512351990s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:45:33 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[5.f( empty local-lis/les=27/29 n=0 ec=27/19 lis/c=27/27 les/c/f=29/29/0 sis=32 pruub=12.119127274s) [0] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active pruub 73.512466431s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:45:33 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[4.e( empty local-lis/les=27/29 n=0 ec=27/18 lis/c=27/27 les/c/f=29/29/0 sis=32 pruub=12.119025230s) [0] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.512351990s@ mbc={}] state<Start>: transitioning to Stray
Oct  8 05:45:33 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[5.f( empty local-lis/les=27/29 n=0 ec=27/19 lis/c=27/27 les/c/f=29/29/0 sis=32 pruub=12.119091988s) [0] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.512466431s@ mbc={}] state<Start>: transitioning to Stray
Oct  8 05:45:33 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[3.8( empty local-lis/les=25/29 n=0 ec=25/17 lis/c=25/25 les/c/f=29/29/0 sis=32 pruub=12.119064331s) [] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 active pruub 73.512535095s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:45:33 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[3.8( empty local-lis/les=25/29 n=0 ec=25/17 lis/c=25/25 les/c/f=29/29/0 sis=32 pruub=12.119064331s) [] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.512535095s@ mbc={}] state<Start>: transitioning to Stray
Oct  8 05:45:33 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[5.e( empty local-lis/les=27/29 n=0 ec=27/19 lis/c=27/27 les/c/f=29/29/0 sis=32 pruub=12.119032860s) [] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active pruub 73.512588501s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:45:33 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[5.e( empty local-lis/les=27/29 n=0 ec=27/19 lis/c=27/27 les/c/f=29/29/0 sis=32 pruub=12.119032860s) [] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.512588501s@ mbc={}] state<Start>: transitioning to Stray
Oct  8 05:45:33 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[3.9( empty local-lis/les=25/29 n=0 ec=25/17 lis/c=25/25 les/c/f=29/29/0 sis=32 pruub=12.118725777s) [] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 active pruub 73.512420654s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:45:33 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[4.3( empty local-lis/les=27/29 n=0 ec=27/18 lis/c=27/27 les/c/f=29/29/0 sis=32 pruub=12.118890762s) [] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active pruub 73.512626648s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:45:33 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[3.9( empty local-lis/les=25/29 n=0 ec=25/17 lis/c=25/25 les/c/f=29/29/0 sis=32 pruub=12.118725777s) [] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.512420654s@ mbc={}] state<Start>: transitioning to Stray
Oct  8 05:45:33 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[6.d( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=32 pruub=13.176372528s) [0] r=-1 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 active pruub 74.570159912s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:45:33 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[4.3( empty local-lis/les=27/29 n=0 ec=27/18 lis/c=27/27 les/c/f=29/29/0 sis=32 pruub=12.118890762s) [] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.512626648s@ mbc={}] state<Start>: transitioning to Stray
Oct  8 05:45:33 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[6.d( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=32 pruub=13.176337242s) [0] r=-1 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 74.570159912s@ mbc={}] state<Start>: transitioning to Stray
Oct  8 05:45:33 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[6.1( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=32 pruub=13.176275253s) [] r=-1 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 active pruub 74.570251465s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:45:33 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[5.2( empty local-lis/les=27/29 n=0 ec=27/19 lis/c=27/27 les/c/f=29/29/0 sis=32 pruub=12.118567467s) [0] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active pruub 73.512573242s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:45:33 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[6.1( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=32 pruub=13.176275253s) [] r=-1 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 74.570251465s@ mbc={}] state<Start>: transitioning to Stray
Oct  8 05:45:33 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[3.3( empty local-lis/les=25/29 n=0 ec=25/17 lis/c=25/25 les/c/f=29/29/0 sis=32 pruub=12.119441986s) [0] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 active pruub 73.513511658s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:45:33 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[5.2( empty local-lis/les=27/29 n=0 ec=27/19 lis/c=27/27 les/c/f=29/29/0 sis=32 pruub=12.118532181s) [0] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.512573242s@ mbc={}] state<Start>: transitioning to Stray
Oct  8 05:45:33 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[3.3( empty local-lis/les=25/29 n=0 ec=25/17 lis/c=25/25 les/c/f=29/29/0 sis=32 pruub=12.119409561s) [0] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.513511658s@ mbc={}] state<Start>: transitioning to Stray
Oct  8 05:45:33 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[4.5( empty local-lis/les=27/29 n=0 ec=27/18 lis/c=27/27 les/c/f=29/29/0 sis=32 pruub=12.118524551s) [0] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active pruub 73.512710571s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:45:33 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[4.5( empty local-lis/les=27/29 n=0 ec=27/18 lis/c=27/27 les/c/f=29/29/0 sis=32 pruub=12.118454933s) [0] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.512710571s@ mbc={}] state<Start>: transitioning to Stray
Oct  8 05:45:33 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[5.4( empty local-lis/les=27/29 n=0 ec=27/19 lis/c=27/27 les/c/f=29/29/0 sis=32 pruub=12.118489265s) [] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active pruub 73.512802124s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:45:33 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[5.4( empty local-lis/les=27/29 n=0 ec=27/19 lis/c=27/27 les/c/f=29/29/0 sis=32 pruub=12.118489265s) [] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.512802124s@ mbc={}] state<Start>: transitioning to Stray
Oct  8 05:45:33 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[6.7( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=32 pruub=13.175937653s) [0] r=-1 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 active pruub 74.570274353s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:45:33 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[4.6( empty local-lis/les=27/29 n=0 ec=27/18 lis/c=27/27 les/c/f=29/29/0 sis=32 pruub=12.118301392s) [] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active pruub 73.512779236s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:45:33 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[5.7( empty local-lis/les=27/29 n=0 ec=27/19 lis/c=27/27 les/c/f=29/29/0 sis=32 pruub=12.118440628s) [0] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active pruub 73.512924194s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:45:33 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[4.6( empty local-lis/les=27/29 n=0 ec=27/18 lis/c=27/27 les/c/f=29/29/0 sis=32 pruub=12.118301392s) [] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.512779236s@ mbc={}] state<Start>: transitioning to Stray
Oct  8 05:45:33 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[5.7( empty local-lis/les=27/29 n=0 ec=27/19 lis/c=27/27 les/c/f=29/29/0 sis=32 pruub=12.118406296s) [0] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.512924194s@ mbc={}] state<Start>: transitioning to Stray
Oct  8 05:45:33 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[4.2( empty local-lis/les=27/29 n=0 ec=27/18 lis/c=27/27 les/c/f=29/29/0 sis=32 pruub=12.118491173s) [] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active pruub 73.513366699s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:45:33 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[4.1( empty local-lis/les=27/29 n=0 ec=27/18 lis/c=27/27 les/c/f=29/29/0 sis=32 pruub=12.118454933s) [] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active pruub 73.513374329s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:45:33 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[4.2( empty local-lis/les=27/29 n=0 ec=27/18 lis/c=27/27 les/c/f=29/29/0 sis=32 pruub=12.118491173s) [] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.513366699s@ mbc={}] state<Start>: transitioning to Stray
Oct  8 05:45:33 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[6.7( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=32 pruub=13.175724030s) [0] r=-1 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 74.570274353s@ mbc={}] state<Start>: transitioning to Stray
Oct  8 05:45:33 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[4.1( empty local-lis/les=27/29 n=0 ec=27/18 lis/c=27/27 les/c/f=29/29/0 sis=32 pruub=12.118454933s) [] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.513374329s@ mbc={}] state<Start>: transitioning to Stray
Oct  8 05:45:33 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[5.0( empty local-lis/les=27/29 n=0 ec=19/19 lis/c=27/27 les/c/f=29/29/0 sis=32 pruub=12.118487358s) [] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active pruub 73.513656616s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:45:33 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[6.3( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=32 pruub=13.175113678s) [0] r=-1 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 active pruub 74.570327759s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:45:33 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[5.0( empty local-lis/les=27/29 n=0 ec=19/19 lis/c=27/27 les/c/f=29/29/0 sis=32 pruub=12.118487358s) [] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.513656616s@ mbc={}] state<Start>: transitioning to Stray
Oct  8 05:45:33 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[6.3( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=32 pruub=13.175046921s) [0] r=-1 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 74.570327759s@ mbc={}] state<Start>: transitioning to Stray
Oct  8 05:45:33 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[6.2( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=32 pruub=13.175057411s) [0] r=-1 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 active pruub 74.570373535s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:45:33 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[5.1( empty local-lis/les=27/29 n=0 ec=27/19 lis/c=27/27 les/c/f=29/29/0 sis=32 pruub=12.118297577s) [0] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active pruub 73.513610840s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:45:33 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[5.1( empty local-lis/les=27/29 n=0 ec=27/19 lis/c=27/27 les/c/f=29/29/0 sis=32 pruub=12.118257523s) [0] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.513610840s@ mbc={}] state<Start>: transitioning to Stray
Oct  8 05:45:33 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[6.2( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=32 pruub=13.174987793s) [0] r=-1 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 74.570373535s@ mbc={}] state<Start>: transitioning to Stray
Oct  8 05:45:33 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[3.0( empty local-lis/les=25/29 n=0 ec=17/17 lis/c=25/25 les/c/f=29/29/0 sis=32 pruub=12.118144989s) [] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 active pruub 73.513679504s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:45:33 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[6.5( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=32 pruub=13.174762726s) [0] r=-1 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 active pruub 74.570335388s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:45:33 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[3.0( empty local-lis/les=25/29 n=0 ec=17/17 lis/c=25/25 les/c/f=29/29/0 sis=32 pruub=12.118144989s) [] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.513679504s@ mbc={}] state<Start>: transitioning to Stray
Oct  8 05:45:33 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[3.5( empty local-lis/les=25/29 n=0 ec=25/17 lis/c=25/25 les/c/f=29/29/0 sis=32 pruub=12.117730141s) [0] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 active pruub 73.513336182s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:45:33 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[6.5( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=32 pruub=13.174729347s) [0] r=-1 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 74.570335388s@ mbc={}] state<Start>: transitioning to Stray
Oct  8 05:45:33 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[3.5( empty local-lis/les=25/29 n=0 ec=25/17 lis/c=25/25 les/c/f=29/29/0 sis=32 pruub=12.117713928s) [0] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.513336182s@ mbc={}] state<Start>: transitioning to Stray
Oct  8 05:45:33 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[4.d( empty local-lis/les=27/29 n=0 ec=27/18 lis/c=27/27 les/c/f=29/29/0 sis=32 pruub=12.117918015s) [0] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active pruub 73.513687134s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:45:33 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[3.a( empty local-lis/les=25/29 n=0 ec=25/17 lis/c=25/25 les/c/f=29/29/0 sis=32 pruub=12.118183136s) [0] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 active pruub 73.514030457s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:45:33 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[4.d( empty local-lis/les=27/29 n=0 ec=27/18 lis/c=27/27 les/c/f=29/29/0 sis=32 pruub=12.117879868s) [0] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.513687134s@ mbc={}] state<Start>: transitioning to Stray
Oct  8 05:45:33 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[3.a( empty local-lis/les=25/29 n=0 ec=25/17 lis/c=25/25 les/c/f=29/29/0 sis=32 pruub=12.118150711s) [0] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.514030457s@ mbc={}] state<Start>: transitioning to Stray
Oct  8 05:45:33 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[5.d( empty local-lis/les=27/29 n=0 ec=27/19 lis/c=27/27 les/c/f=29/29/0 sis=32 pruub=12.117716789s) [] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active pruub 73.513748169s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:45:33 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[5.d( empty local-lis/les=27/29 n=0 ec=27/19 lis/c=27/27 les/c/f=29/29/0 sis=32 pruub=12.117716789s) [] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.513748169s@ mbc={}] state<Start>: transitioning to Stray
Oct  8 05:45:33 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[6.e( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=32 pruub=13.174188614s) [0] r=-1 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 active pruub 74.570335388s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:45:33 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[6.e( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=32 pruub=13.174156189s) [0] r=-1 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 74.570335388s@ mbc={}] state<Start>: transitioning to Stray
Oct  8 05:45:33 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[3.c( empty local-lis/les=25/29 n=0 ec=25/17 lis/c=25/25 les/c/f=29/29/0 sis=32 pruub=12.117549896s) [0] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 active pruub 73.513961792s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:45:33 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[3.c( empty local-lis/les=25/29 n=0 ec=25/17 lis/c=25/25 les/c/f=29/29/0 sis=32 pruub=12.117510796s) [0] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.513961792s@ mbc={}] state<Start>: transitioning to Stray
Oct  8 05:45:33 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[4.c( empty local-lis/les=27/29 n=0 ec=27/18 lis/c=27/27 les/c/f=29/29/0 sis=32 pruub=12.117620468s) [0] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active pruub 73.514175415s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:45:33 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[4.a( empty local-lis/les=27/29 n=0 ec=27/18 lis/c=27/27 les/c/f=29/29/0 sis=32 pruub=12.117230415s) [0] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active pruub 73.513832092s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:45:33 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[4.c( empty local-lis/les=27/29 n=0 ec=27/18 lis/c=27/27 les/c/f=29/29/0 sis=32 pruub=12.117584229s) [0] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.514175415s@ mbc={}] state<Start>: transitioning to Stray
Oct  8 05:45:33 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[4.a( empty local-lis/les=27/29 n=0 ec=27/18 lis/c=27/27 les/c/f=29/29/0 sis=32 pruub=12.117197037s) [0] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.513832092s@ mbc={}] state<Start>: transitioning to Stray
Oct  8 05:45:33 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[3.d( empty local-lis/les=25/29 n=0 ec=25/17 lis/c=25/25 les/c/f=29/29/0 sis=32 pruub=12.117197037s) [0] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 active pruub 73.513862610s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:45:33 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[5.b( empty local-lis/les=27/29 n=0 ec=27/19 lis/c=27/27 les/c/f=29/29/0 sis=32 pruub=12.117210388s) [] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active pruub 73.513908386s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:45:33 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[5.b( empty local-lis/les=27/29 n=0 ec=27/19 lis/c=27/27 les/c/f=29/29/0 sis=32 pruub=12.117210388s) [] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.513908386s@ mbc={}] state<Start>: transitioning to Stray
Oct  8 05:45:33 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[6.8( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=32 pruub=13.173627853s) [0] r=-1 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 active pruub 74.570388794s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:45:33 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[6.8( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=32 pruub=13.173590660s) [0] r=-1 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 74.570388794s@ mbc={}] state<Start>: transitioning to Stray
Oct  8 05:45:33 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[4.9( empty local-lis/les=27/29 n=0 ec=27/18 lis/c=27/27 les/c/f=29/29/0 sis=32 pruub=12.117014885s) [] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active pruub 73.513893127s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:45:33 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[4.9( empty local-lis/les=27/29 n=0 ec=27/18 lis/c=27/27 les/c/f=29/29/0 sis=32 pruub=12.117014885s) [] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.513893127s@ mbc={}] state<Start>: transitioning to Stray
Oct  8 05:45:33 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[3.d( empty local-lis/les=25/29 n=0 ec=25/17 lis/c=25/25 les/c/f=29/29/0 sis=32 pruub=12.117106438s) [0] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.513862610s@ mbc={}] state<Start>: transitioning to Stray
Oct  8 05:45:33 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[3.e( empty local-lis/les=25/29 n=0 ec=25/17 lis/c=25/25 les/c/f=29/29/0 sis=32 pruub=12.116837502s) [] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 active pruub 73.513923645s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:45:33 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[5.8( empty local-lis/les=27/29 n=0 ec=27/19 lis/c=27/27 les/c/f=29/29/0 sis=32 pruub=12.116839409s) [] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active pruub 73.513961792s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:45:33 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[3.e( empty local-lis/les=25/29 n=0 ec=25/17 lis/c=25/25 les/c/f=29/29/0 sis=32 pruub=12.116837502s) [] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.513923645s@ mbc={}] state<Start>: transitioning to Stray
Oct  8 05:45:33 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[5.8( empty local-lis/les=27/29 n=0 ec=27/19 lis/c=27/27 les/c/f=29/29/0 sis=32 pruub=12.116839409s) [] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.513961792s@ mbc={}] state<Start>: transitioning to Stray
Oct  8 05:45:33 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[4.8( empty local-lis/les=27/29 n=0 ec=27/18 lis/c=27/27 les/c/f=29/29/0 sis=32 pruub=12.116585732s) [] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active pruub 73.513954163s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:45:33 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[3.f( empty local-lis/les=25/29 n=0 ec=25/17 lis/c=25/25 les/c/f=29/29/0 sis=32 pruub=12.116562843s) [0] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 active pruub 73.513999939s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:45:33 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[5.9( empty local-lis/les=27/29 n=0 ec=27/19 lis/c=27/27 les/c/f=29/29/0 sis=32 pruub=12.116542816s) [0] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active pruub 73.513999939s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:45:33 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[3.f( empty local-lis/les=25/29 n=0 ec=25/17 lis/c=25/25 les/c/f=29/29/0 sis=32 pruub=12.116525650s) [0] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.513999939s@ mbc={}] state<Start>: transitioning to Stray
Oct  8 05:45:33 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[5.9( empty local-lis/les=27/29 n=0 ec=27/19 lis/c=27/27 les/c/f=29/29/0 sis=32 pruub=12.116506577s) [0] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.513999939s@ mbc={}] state<Start>: transitioning to Stray
Oct  8 05:45:33 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[4.8( empty local-lis/les=27/29 n=0 ec=27/18 lis/c=27/27 les/c/f=29/29/0 sis=32 pruub=12.116585732s) [] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.513954163s@ mbc={}] state<Start>: transitioning to Stray
Oct  8 05:45:33 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[3.10( empty local-lis/les=25/29 n=0 ec=25/17 lis/c=25/25 les/c/f=29/29/0 sis=32 pruub=12.116289139s) [0] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 active pruub 73.514091492s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:45:33 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[5.16( empty local-lis/les=27/29 n=0 ec=27/19 lis/c=27/27 les/c/f=29/29/0 sis=32 pruub=12.116175652s) [0] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active pruub 73.514053345s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:45:33 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[3.10( empty local-lis/les=25/29 n=0 ec=25/17 lis/c=25/25 les/c/f=29/29/0 sis=32 pruub=12.116255760s) [0] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.514091492s@ mbc={}] state<Start>: transitioning to Stray
Oct  8 05:45:33 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[5.16( empty local-lis/les=27/29 n=0 ec=27/19 lis/c=27/27 les/c/f=29/29/0 sis=32 pruub=12.116141319s) [0] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.514053345s@ mbc={}] state<Start>: transitioning to Stray
Oct  8 05:45:33 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[6.15( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=32 pruub=13.172485352s) [0] r=-1 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 active pruub 74.570465088s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:45:33 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[6.a( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=32 pruub=13.172746658s) [0] r=-1 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 active pruub 74.570419312s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:45:33 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[3.11( empty local-lis/les=25/29 n=0 ec=25/17 lis/c=25/25 les/c/f=29/29/0 sis=32 pruub=12.116067886s) [] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 active pruub 73.514076233s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:45:33 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[6.15( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=32 pruub=13.172447205s) [0] r=-1 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 74.570465088s@ mbc={}] state<Start>: transitioning to Stray
Oct  8 05:45:33 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[6.a( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=32 pruub=13.171899796s) [0] r=-1 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 74.570419312s@ mbc={}] state<Start>: transitioning to Stray
Oct  8 05:45:33 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[3.11( empty local-lis/les=25/29 n=0 ec=25/17 lis/c=25/25 les/c/f=29/29/0 sis=32 pruub=12.116067886s) [] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.514076233s@ mbc={}] state<Start>: transitioning to Stray
Oct  8 05:45:33 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[4.15( empty local-lis/les=27/29 n=0 ec=27/18 lis/c=27/27 les/c/f=29/29/0 sis=32 pruub=12.115336418s) [] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active pruub 73.514129639s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:45:33 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[4.15( empty local-lis/les=27/29 n=0 ec=27/18 lis/c=27/27 les/c/f=29/29/0 sis=32 pruub=12.115336418s) [] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.514129639s@ mbc={}] state<Start>: transitioning to Stray
Oct  8 05:45:33 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[6.17( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=32 pruub=13.171558380s) [] r=-1 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 active pruub 74.570465088s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:45:33 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[4.14( empty local-lis/les=27/29 n=0 ec=27/18 lis/c=27/27 les/c/f=29/29/0 sis=32 pruub=12.115266800s) [] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active pruub 73.514190674s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:45:33 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[5.15( empty local-lis/les=27/29 n=0 ec=27/19 lis/c=27/27 les/c/f=29/29/0 sis=32 pruub=12.115489006s) [0] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active pruub 73.514427185s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:45:33 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[4.14( empty local-lis/les=27/29 n=0 ec=27/18 lis/c=27/27 les/c/f=29/29/0 sis=32 pruub=12.115266800s) [] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.514190674s@ mbc={}] state<Start>: transitioning to Stray
Oct  8 05:45:33 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[5.15( empty local-lis/les=27/29 n=0 ec=27/19 lis/c=27/27 les/c/f=29/29/0 sis=32 pruub=12.115456581s) [0] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.514427185s@ mbc={}] state<Start>: transitioning to Stray
Oct  8 05:45:33 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[3.13( empty local-lis/les=25/29 n=0 ec=25/17 lis/c=25/25 les/c/f=29/29/0 sis=32 pruub=12.115301132s) [0] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 active pruub 73.514312744s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:45:33 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[6.17( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=32 pruub=13.171558380s) [] r=-1 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 74.570465088s@ mbc={}] state<Start>: transitioning to Stray
Oct  8 05:45:33 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[4.13( empty local-lis/les=27/29 n=0 ec=27/18 lis/c=27/27 les/c/f=29/29/0 sis=32 pruub=12.115226746s) [0] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active pruub 73.514312744s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:45:33 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[3.13( empty local-lis/les=25/29 n=0 ec=25/17 lis/c=25/25 les/c/f=29/29/0 sis=32 pruub=12.115263939s) [0] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.514312744s@ mbc={}] state<Start>: transitioning to Stray
Oct  8 05:45:33 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[4.13( empty local-lis/les=27/29 n=0 ec=27/18 lis/c=27/27 les/c/f=29/29/0 sis=32 pruub=12.115194321s) [0] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.514312744s@ mbc={}] state<Start>: transitioning to Stray
Oct  8 05:45:33 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[3.14( empty local-lis/les=25/29 n=0 ec=25/17 lis/c=25/25 les/c/f=29/29/0 sis=32 pruub=12.115078926s) [0] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 active pruub 73.514358521s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:45:33 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[3.14( empty local-lis/les=25/29 n=0 ec=25/17 lis/c=25/25 les/c/f=29/29/0 sis=32 pruub=12.115044594s) [0] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.514358521s@ mbc={}] state<Start>: transitioning to Stray
Oct  8 05:45:33 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[3.15( empty local-lis/les=25/29 n=0 ec=25/17 lis/c=25/25 les/c/f=29/29/0 sis=32 pruub=12.122462273s) [] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 active pruub 73.521919250s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:45:33 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[3.15( empty local-lis/les=25/29 n=0 ec=25/17 lis/c=25/25 les/c/f=29/29/0 sis=32 pruub=12.122462273s) [] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.521919250s@ mbc={}] state<Start>: transitioning to Stray
Oct  8 05:45:33 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[5.13( empty local-lis/les=27/29 n=0 ec=27/19 lis/c=27/27 les/c/f=29/29/0 sis=32 pruub=12.114801407s) [] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active pruub 73.514404297s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:45:33 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[5.13( empty local-lis/les=27/29 n=0 ec=27/19 lis/c=27/27 les/c/f=29/29/0 sis=32 pruub=12.114801407s) [] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.514404297s@ mbc={}] state<Start>: transitioning to Stray
Oct  8 05:45:33 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[3.16( empty local-lis/les=25/29 n=0 ec=25/17 lis/c=25/25 les/c/f=29/29/0 sis=32 pruub=12.114745140s) [0] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 active pruub 73.514450073s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:45:33 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[3.16( empty local-lis/les=25/29 n=0 ec=25/17 lis/c=25/25 les/c/f=29/29/0 sis=32 pruub=12.114709854s) [0] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.514450073s@ mbc={}] state<Start>: transitioning to Stray
Oct  8 05:45:33 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[5.10( empty local-lis/les=27/29 n=0 ec=27/19 lis/c=27/27 les/c/f=29/29/0 sis=32 pruub=12.114684105s) [0] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active pruub 73.514457703s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:45:33 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[5.10( empty local-lis/les=27/29 n=0 ec=27/19 lis/c=27/27 les/c/f=29/29/0 sis=32 pruub=12.114649773s) [0] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.514457703s@ mbc={}] state<Start>: transitioning to Stray
Oct  8 05:45:33 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[5.11( empty local-lis/les=27/29 n=0 ec=27/19 lis/c=27/27 les/c/f=29/29/0 sis=32 pruub=12.118941307s) [0] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active pruub 73.518859863s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:45:33 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[6.12( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=32 pruub=13.170584679s) [] r=-1 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 active pruub 74.570556641s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:45:33 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[5.11( empty local-lis/les=27/29 n=0 ec=27/19 lis/c=27/27 les/c/f=29/29/0 sis=32 pruub=12.118906975s) [0] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.518859863s@ mbc={}] state<Start>: transitioning to Stray
Oct  8 05:45:33 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[6.12( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=32 pruub=13.170584679s) [] r=-1 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 74.570556641s@ mbc={}] state<Start>: transitioning to Stray
Oct  8 05:45:33 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[4.1f( empty local-lis/les=27/29 n=0 ec=27/18 lis/c=27/27 les/c/f=29/29/0 sis=32 pruub=12.118884087s) [] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active pruub 73.518920898s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:45:33 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[4.1f( empty local-lis/les=27/29 n=0 ec=27/18 lis/c=27/27 les/c/f=29/29/0 sis=32 pruub=12.118884087s) [] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.518920898s@ mbc={}] state<Start>: transitioning to Stray
Oct  8 05:45:33 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[6.1c( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=32 pruub=13.170555115s) [] r=-1 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 active pruub 74.570907593s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:45:33 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[5.1f( empty local-lis/les=27/29 n=0 ec=27/19 lis/c=27/27 les/c/f=29/29/0 sis=32 pruub=12.118961334s) [0] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active pruub 73.519348145s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:45:33 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[6.1c( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=32 pruub=13.170555115s) [] r=-1 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 74.570907593s@ mbc={}] state<Start>: transitioning to Stray
Oct  8 05:45:33 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[5.1f( empty local-lis/les=27/29 n=0 ec=27/19 lis/c=27/27 les/c/f=29/29/0 sis=32 pruub=12.118926048s) [0] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.519348145s@ mbc={}] state<Start>: transitioning to Stray
Oct  8 05:45:33 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[5.12( empty local-lis/les=27/29 n=0 ec=27/19 lis/c=27/27 les/c/f=29/29/0 sis=32 pruub=12.113756180s) [] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active pruub 73.514350891s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:45:33 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[5.12( empty local-lis/les=27/29 n=0 ec=27/19 lis/c=27/27 les/c/f=29/29/0 sis=32 pruub=12.113756180s) [] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.514350891s@ mbc={}] state<Start>: transitioning to Stray
Oct  8 05:45:33 np0005475493 ceph-mgr[73869]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/2890316650; not ready for session (expect reconnect)
Oct  8 05:45:33 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Oct  8 05:45:33 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct  8 05:45:33 np0005475493 ceph-mgr[73869]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct  8 05:45:33 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e32 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  8 05:45:33 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[2.19( empty local-lis/les=0/0 n=0 ec=25/15 lis/c=25/25 les/c/f=26/26/0 sis=32) [1] r=0 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:45:33 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[7.13( empty local-lis/les=0/0 n=0 ec=29/21 lis/c=29/29 les/c/f=30/30/0 sis=32) [1] r=0 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:45:33 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[7.10( empty local-lis/les=0/0 n=0 ec=29/21 lis/c=29/29 les/c/f=30/30/0 sis=32) [1] r=0 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:45:33 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[7.b( empty local-lis/les=0/0 n=0 ec=29/21 lis/c=29/29 les/c/f=30/30/0 sis=32) [1] r=0 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:45:33 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[7.8( empty local-lis/les=0/0 n=0 ec=29/21 lis/c=29/29 les/c/f=30/30/0 sis=32) [1] r=0 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:45:33 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[2.e( empty local-lis/les=0/0 n=0 ec=25/15 lis/c=25/25 les/c/f=26/26/0 sis=32) [1] r=0 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:45:33 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[7.9( empty local-lis/les=0/0 n=0 ec=29/21 lis/c=29/29 les/c/f=30/30/0 sis=32) [1] r=0 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:45:33 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[7.6( empty local-lis/les=0/0 n=0 ec=29/21 lis/c=29/29 les/c/f=30/30/0 sis=32) [1] r=0 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:45:33 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[7.e( empty local-lis/les=0/0 n=0 ec=29/21 lis/c=29/29 les/c/f=30/30/0 sis=32) [1] r=0 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:45:33 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[2.1( empty local-lis/les=0/0 n=0 ec=25/15 lis/c=25/25 les/c/f=26/26/0 sis=32) [1] r=0 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:45:33 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[2.4( empty local-lis/les=0/0 n=0 ec=25/15 lis/c=25/25 les/c/f=26/26/0 sis=32) [1] r=0 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:45:33 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[7.4( empty local-lis/les=0/0 n=0 ec=29/21 lis/c=29/29 les/c/f=30/30/0 sis=32) [1] r=0 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:45:33 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[2.6( empty local-lis/les=0/0 n=0 ec=25/15 lis/c=25/25 les/c/f=26/26/0 sis=32) [1] r=0 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:45:33 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[7.3( empty local-lis/les=0/0 n=0 ec=29/21 lis/c=29/29 les/c/f=30/30/0 sis=32) [1] r=0 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:45:33 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[7.2( empty local-lis/les=0/0 n=0 ec=29/21 lis/c=29/29 les/c/f=30/30/0 sis=32) [1] r=0 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:45:33 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[7.1e( empty local-lis/les=0/0 n=0 ec=29/21 lis/c=29/29 les/c/f=30/30/0 sis=32) [1] r=0 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:45:33 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[7.f( empty local-lis/les=0/0 n=0 ec=29/21 lis/c=29/29 les/c/f=30/30/0 sis=32) [1] r=0 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:45:33 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[2.9( empty local-lis/les=0/0 n=0 ec=25/15 lis/c=25/25 les/c/f=26/26/0 sis=32) [1] r=0 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:45:33 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[7.1b( empty local-lis/les=0/0 n=0 ec=29/21 lis/c=29/29 les/c/f=30/30/0 sis=32) [1] r=0 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:45:33 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[2.1e( empty local-lis/les=0/0 n=0 ec=25/15 lis/c=25/25 les/c/f=26/26/0 sis=32) [1] r=0 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:45:33 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[2.1f( empty local-lis/les=0/0 n=0 ec=25/15 lis/c=25/25 les/c/f=26/26/0 sis=32) [1] r=0 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:45:33 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 32 pg[7.18( empty local-lis/les=0/0 n=0 ec=29/21 lis/c=29/29 les/c/f=30/30/0 sis=32) [1] r=0 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:45:33 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:45:33 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:45:33 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:45:33 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:45:33 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct  8 05:45:33 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct  8 05:45:33 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct  8 05:45:33 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct  8 05:45:33 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct  8 05:45:33 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct  8 05:45:33 np0005475493 ceph-mon[73572]: Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Oct  8 05:45:33 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:45:33 np0005475493 ceph-mon[73572]: Saving service ingress.rgw.default spec with placement count:2
Oct  8 05:45:33 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:45:33 np0005475493 ceph-mon[73572]: from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]}]': finished
Oct  8 05:45:33 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Oct  8 05:45:33 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Oct  8 05:45:33 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Oct  8 05:45:33 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Oct  8 05:45:33 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Oct  8 05:45:33 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Oct  8 05:45:33 np0005475493 python3[86450]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_dashboard.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  8 05:45:33 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 5.19 scrub starts
Oct  8 05:45:34 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 5.19 scrub ok
Oct  8 05:45:34 np0005475493 python3[86521]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759916733.399294-33731-3548316112324/source dest=/tmp/ceph_dashboard.yml mode=0644 force=True follow=False _original_basename=ceph_monitoring_stack.yml.j2 checksum=2701faaa92cae31b5bbad92984c27e2af7a44b84 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:45:34 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e32 do_prune osdmap full prune enabled
Oct  8 05:45:34 np0005475493 ceph-mgr[73869]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/2890316650; not ready for session (expect reconnect)
Oct  8 05:45:34 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Oct  8 05:45:34 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct  8 05:45:34 np0005475493 ceph-mgr[73869]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct  8 05:45:34 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e33 e33: 3 total, 2 up, 3 in
Oct  8 05:45:34 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e33: 3 total, 2 up, 3 in
Oct  8 05:45:34 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Oct  8 05:45:34 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct  8 05:45:34 np0005475493 ceph-mgr[73869]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct  8 05:45:34 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 33 pg[2.1e( empty local-lis/les=32/33 n=0 ec=25/15 lis/c=25/25 les/c/f=26/26/0 sis=32) [1] r=0 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:45:34 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 33 pg[2.1f( empty local-lis/les=32/33 n=0 ec=25/15 lis/c=25/25 les/c/f=26/26/0 sis=32) [1] r=0 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:45:34 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 33 pg[7.18( empty local-lis/les=32/33 n=0 ec=29/21 lis/c=29/29 les/c/f=30/30/0 sis=32) [1] r=0 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:45:34 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 33 pg[7.1b( empty local-lis/les=32/33 n=0 ec=29/21 lis/c=29/29 les/c/f=30/30/0 sis=32) [1] r=0 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:45:34 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 33 pg[7.1e( empty local-lis/les=32/33 n=0 ec=29/21 lis/c=29/29 les/c/f=30/30/0 sis=32) [1] r=0 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:45:34 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 33 pg[2.9( empty local-lis/les=32/33 n=0 ec=25/15 lis/c=25/25 les/c/f=26/26/0 sis=32) [1] r=0 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:45:34 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 33 pg[7.3( empty local-lis/les=32/33 n=0 ec=29/21 lis/c=29/29 les/c/f=30/30/0 sis=32) [1] r=0 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:45:34 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 33 pg[7.2( empty local-lis/les=32/33 n=0 ec=29/21 lis/c=29/29 les/c/f=30/30/0 sis=32) [1] r=0 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:45:34 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 33 pg[2.6( empty local-lis/les=32/33 n=0 ec=25/15 lis/c=25/25 les/c/f=26/26/0 sis=32) [1] r=0 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:45:34 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 33 pg[7.4( empty local-lis/les=32/33 n=0 ec=29/21 lis/c=29/29 les/c/f=30/30/0 sis=32) [1] r=0 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:45:34 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 33 pg[7.6( empty local-lis/les=32/33 n=0 ec=29/21 lis/c=29/29 les/c/f=30/30/0 sis=32) [1] r=0 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:45:34 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 33 pg[7.e( empty local-lis/les=32/33 n=0 ec=29/21 lis/c=29/29 les/c/f=30/30/0 sis=32) [1] r=0 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:45:34 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 33 pg[2.4( empty local-lis/les=32/33 n=0 ec=25/15 lis/c=25/25 les/c/f=26/26/0 sis=32) [1] r=0 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:45:34 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 33 pg[7.f( empty local-lis/les=32/33 n=0 ec=29/21 lis/c=29/29 les/c/f=30/30/0 sis=32) [1] r=0 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:45:34 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 33 pg[7.8( empty local-lis/les=32/33 n=0 ec=29/21 lis/c=29/29 les/c/f=30/30/0 sis=32) [1] r=0 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:45:34 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 33 pg[2.1( empty local-lis/les=32/33 n=0 ec=25/15 lis/c=25/25 les/c/f=26/26/0 sis=32) [1] r=0 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:45:34 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 33 pg[7.9( empty local-lis/les=32/33 n=0 ec=29/21 lis/c=29/29 les/c/f=30/30/0 sis=32) [1] r=0 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:45:34 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 33 pg[7.b( empty local-lis/les=32/33 n=0 ec=29/21 lis/c=29/29 les/c/f=30/30/0 sis=32) [1] r=0 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:45:34 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 33 pg[2.e( empty local-lis/les=32/33 n=0 ec=25/15 lis/c=25/25 les/c/f=26/26/0 sis=32) [1] r=0 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:45:34 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 33 pg[7.10( empty local-lis/les=32/33 n=0 ec=29/21 lis/c=29/29 les/c/f=30/30/0 sis=32) [1] r=0 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:45:34 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 33 pg[7.13( empty local-lis/les=32/33 n=0 ec=29/21 lis/c=29/29 les/c/f=30/30/0 sis=32) [1] r=0 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:45:34 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 33 pg[2.19( empty local-lis/les=32/33 n=0 ec=25/15 lis/c=25/25 les/c/f=26/26/0 sis=32) [1] r=0 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:45:34 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct  8 05:45:34 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:45:34 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct  8 05:45:34 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:45:34 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} v 0)
Oct  8 05:45:34 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Oct  8 05:45:34 np0005475493 ceph-mgr[73869]: [cephadm INFO root] Adjusting osd_memory_target on compute-2 to 127.8M
Oct  8 05:45:34 np0005475493 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-2 to 127.8M
Oct  8 05:45:34 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Oct  8 05:45:34 np0005475493 ceph-mgr[73869]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-2 to 134062899: error parsing value: Value '134062899' is below minimum 939524096
Oct  8 05:45:34 np0005475493 ceph-mgr[73869]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-2 to 134062899: error parsing value: Value '134062899' is below minimum 939524096
Oct  8 05:45:34 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  8 05:45:34 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  8 05:45:34 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct  8 05:45:34 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  8 05:45:34 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v92: 193 pgs: 193 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Oct  8 05:45:34 np0005475493 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Oct  8 05:45:34 np0005475493 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Oct  8 05:45:34 np0005475493 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.conf
Oct  8 05:45:34 np0005475493 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.conf
Oct  8 05:45:34 np0005475493 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.conf
Oct  8 05:45:34 np0005475493 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.conf
Oct  8 05:45:34 np0005475493 python3[86571]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_dashboard.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 787292cc-8154-50c4-9e00-e9be3e817149 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  8 05:45:34 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 6.18 scrub starts
Oct  8 05:45:35 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 6.18 scrub ok
Oct  8 05:45:35 np0005475493 podman[86595]: 2025-10-08 09:45:35.052655372 +0000 UTC m=+0.044312158 container create f2155d8e19883df6e3d9067cba65d6038908c2bf031c9882301fb2086efefe0a (image=quay.io/ceph/ceph:v19, name=laughing_pasteur, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct  8 05:45:35 np0005475493 systemd[1]: Started libpod-conmon-f2155d8e19883df6e3d9067cba65d6038908c2bf031c9882301fb2086efefe0a.scope.
Oct  8 05:45:35 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:45:35 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6e68b1dc7aa381cccbc7483b1bcbf96fefbeee83db759e580c5ac1eb212adde/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 05:45:35 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6e68b1dc7aa381cccbc7483b1bcbf96fefbeee83db759e580c5ac1eb212adde/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 05:45:35 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6e68b1dc7aa381cccbc7483b1bcbf96fefbeee83db759e580c5ac1eb212adde/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct  8 05:45:35 np0005475493 podman[86595]: 2025-10-08 09:45:35.123725581 +0000 UTC m=+0.115382397 container init f2155d8e19883df6e3d9067cba65d6038908c2bf031c9882301fb2086efefe0a (image=quay.io/ceph/ceph:v19, name=laughing_pasteur, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Oct  8 05:45:35 np0005475493 podman[86595]: 2025-10-08 09:45:35.032642223 +0000 UTC m=+0.024299029 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  8 05:45:35 np0005475493 podman[86595]: 2025-10-08 09:45:35.129310552 +0000 UTC m=+0.120967338 container start f2155d8e19883df6e3d9067cba65d6038908c2bf031c9882301fb2086efefe0a (image=quay.io/ceph/ceph:v19, name=laughing_pasteur, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  8 05:45:35 np0005475493 podman[86595]: 2025-10-08 09:45:35.134598041 +0000 UTC m=+0.126254827 container attach f2155d8e19883df6e3d9067cba65d6038908c2bf031c9882301fb2086efefe0a (image=quay.io/ceph/ceph:v19, name=laughing_pasteur, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  8 05:45:35 np0005475493 ceph-mgr[73869]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/2890316650; not ready for session (expect reconnect)
Oct  8 05:45:35 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Oct  8 05:45:35 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct  8 05:45:35 np0005475493 ceph-mgr[73869]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct  8 05:45:35 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.14304 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Oct  8 05:45:35 np0005475493 ceph-mgr[73869]: [cephadm INFO root] Saving service node-exporter spec with placement *
Oct  8 05:45:35 np0005475493 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Saving service node-exporter spec with placement *
Oct  8 05:45:35 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.node-exporter}] v 0)
Oct  8 05:45:35 np0005475493 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.conf
Oct  8 05:45:35 np0005475493 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.conf
Oct  8 05:45:35 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:45:35 np0005475493 ceph-mgr[73869]: [cephadm INFO root] Saving service grafana spec with placement compute-0;count:1
Oct  8 05:45:35 np0005475493 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Saving service grafana spec with placement compute-0;count:1
Oct  8 05:45:35 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.grafana}] v 0)
Oct  8 05:45:35 np0005475493 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.conf
Oct  8 05:45:35 np0005475493 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.conf
Oct  8 05:45:35 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:45:35 np0005475493 ceph-mgr[73869]: [cephadm INFO root] Saving service prometheus spec with placement compute-0;count:1
Oct  8 05:45:35 np0005475493 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Saving service prometheus spec with placement compute-0;count:1
Oct  8 05:45:35 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.prometheus}] v 0)
Oct  8 05:45:35 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:45:35 np0005475493 ceph-mgr[73869]: [cephadm INFO root] Saving service alertmanager spec with placement compute-0;count:1
Oct  8 05:45:35 np0005475493 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Saving service alertmanager spec with placement compute-0;count:1
Oct  8 05:45:35 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.alertmanager}] v 0)
Oct  8 05:45:35 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:45:35 np0005475493 laughing_pasteur[86660]: Scheduled node-exporter update...
Oct  8 05:45:35 np0005475493 laughing_pasteur[86660]: Scheduled grafana update...
Oct  8 05:45:35 np0005475493 laughing_pasteur[86660]: Scheduled prometheus update...
Oct  8 05:45:35 np0005475493 laughing_pasteur[86660]: Scheduled alertmanager update...
Oct  8 05:45:35 np0005475493 systemd[1]: libpod-f2155d8e19883df6e3d9067cba65d6038908c2bf031c9882301fb2086efefe0a.scope: Deactivated successfully.
Oct  8 05:45:35 np0005475493 podman[86595]: 2025-10-08 09:45:35.626439802 +0000 UTC m=+0.618096608 container died f2155d8e19883df6e3d9067cba65d6038908c2bf031c9882301fb2086efefe0a (image=quay.io/ceph/ceph:v19, name=laughing_pasteur, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Oct  8 05:45:35 np0005475493 systemd[1]: var-lib-containers-storage-overlay-c6e68b1dc7aa381cccbc7483b1bcbf96fefbeee83db759e580c5ac1eb212adde-merged.mount: Deactivated successfully.
Oct  8 05:45:35 np0005475493 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.conf
Oct  8 05:45:35 np0005475493 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.conf
Oct  8 05:45:35 np0005475493 podman[86595]: 2025-10-08 09:45:35.676642284 +0000 UTC m=+0.668299070 container remove f2155d8e19883df6e3d9067cba65d6038908c2bf031c9882301fb2086efefe0a (image=quay.io/ceph/ceph:v19, name=laughing_pasteur, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2)
Oct  8 05:45:35 np0005475493 systemd[1]: libpod-conmon-f2155d8e19883df6e3d9067cba65d6038908c2bf031c9882301fb2086efefe0a.scope: Deactivated successfully.
Oct  8 05:45:35 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:45:35 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:45:35 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Oct  8 05:45:35 np0005475493 ceph-mon[73572]: Adjusting osd_memory_target on compute-2 to 127.8M
Oct  8 05:45:35 np0005475493 ceph-mon[73572]: Unable to set osd_memory_target on compute-2 to 134062899: error parsing value: Value '134062899' is below minimum 939524096
Oct  8 05:45:35 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  8 05:45:35 np0005475493 ceph-mon[73572]: Updating compute-0:/etc/ceph/ceph.conf
Oct  8 05:45:35 np0005475493 ceph-mon[73572]: Updating compute-1:/etc/ceph/ceph.conf
Oct  8 05:45:35 np0005475493 ceph-mon[73572]: Updating compute-2:/etc/ceph/ceph.conf
Oct  8 05:45:35 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:45:35 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:45:35 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:45:35 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:45:36 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 5.1d scrub starts
Oct  8 05:45:36 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 5.1d scrub ok
Oct  8 05:45:36 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  8 05:45:36 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct  8 05:45:36 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:45:36 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  8 05:45:36 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:45:36 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct  8 05:45:36 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:45:36 np0005475493 python3[87095]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 787292cc-8154-50c4-9e00-e9be3e817149 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/dashboard/server_port 8443 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  8 05:45:36 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:45:36 np0005475493 ceph-mgr[73869]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/2890316650; not ready for session (expect reconnect)
Oct  8 05:45:36 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Oct  8 05:45:36 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct  8 05:45:36 np0005475493 ceph-mgr[73869]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct  8 05:45:36 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct  8 05:45:36 np0005475493 podman[87096]: 2025-10-08 09:45:36.358366219 +0000 UTC m=+0.099547190 container create 8f14f7dc38cb0f06e9a171e6fc1c267bdcc705e935a8834e96871f94534619f1 (image=quay.io/ceph/ceph:v19, name=affectionate_yalow, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  8 05:45:36 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:45:36 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct  8 05:45:36 np0005475493 podman[87096]: 2025-10-08 09:45:36.278311319 +0000 UTC m=+0.019492310 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  8 05:45:36 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:45:36 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct  8 05:45:36 np0005475493 systemd[1]: Started libpod-conmon-8f14f7dc38cb0f06e9a171e6fc1c267bdcc705e935a8834e96871f94534619f1.scope.
Oct  8 05:45:36 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:45:36 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct  8 05:45:36 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  8 05:45:36 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct  8 05:45:36 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  8 05:45:36 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  8 05:45:36 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  8 05:45:36 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:45:36 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91e13f0d0c7a0ecc60e3eadbee5a8a9b92c571bfba4bd6b57e81f73340cef0c3/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 05:45:36 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91e13f0d0c7a0ecc60e3eadbee5a8a9b92c571bfba4bd6b57e81f73340cef0c3/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 05:45:36 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91e13f0d0c7a0ecc60e3eadbee5a8a9b92c571bfba4bd6b57e81f73340cef0c3/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct  8 05:45:36 np0005475493 podman[87096]: 2025-10-08 09:45:36.44036388 +0000 UTC m=+0.181544871 container init 8f14f7dc38cb0f06e9a171e6fc1c267bdcc705e935a8834e96871f94534619f1 (image=quay.io/ceph/ceph:v19, name=affectionate_yalow, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Oct  8 05:45:36 np0005475493 podman[87096]: 2025-10-08 09:45:36.445102826 +0000 UTC m=+0.186283817 container start 8f14f7dc38cb0f06e9a171e6fc1c267bdcc705e935a8834e96871f94534619f1 (image=quay.io/ceph/ceph:v19, name=affectionate_yalow, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct  8 05:45:36 np0005475493 podman[87096]: 2025-10-08 09:45:36.450780542 +0000 UTC m=+0.191961523 container attach 8f14f7dc38cb0f06e9a171e6fc1c267bdcc705e935a8834e96871f94534619f1 (image=quay.io/ceph/ceph:v19, name=affectionate_yalow, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  8 05:45:36 np0005475493 ceph-mgr[73869]: [progress INFO root] Completed event f1f7cd03-9f1a-4216-9173-a4ef5b56243c (Global Recovery Event) in 10 seconds
Oct  8 05:45:36 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/server_port}] v 0)
Oct  8 05:45:36 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3966809353' entity='client.admin' 
Oct  8 05:45:36 np0005475493 systemd[1]: libpod-8f14f7dc38cb0f06e9a171e6fc1c267bdcc705e935a8834e96871f94534619f1.scope: Deactivated successfully.
Oct  8 05:45:36 np0005475493 podman[87096]: 2025-10-08 09:45:36.831657599 +0000 UTC m=+0.572838620 container died 8f14f7dc38cb0f06e9a171e6fc1c267bdcc705e935a8834e96871f94534619f1 (image=quay.io/ceph/ceph:v19, name=affectionate_yalow, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  8 05:45:36 np0005475493 systemd[1]: var-lib-containers-storage-overlay-91e13f0d0c7a0ecc60e3eadbee5a8a9b92c571bfba4bd6b57e81f73340cef0c3-merged.mount: Deactivated successfully.
Oct  8 05:45:36 np0005475493 podman[87096]: 2025-10-08 09:45:36.910125314 +0000 UTC m=+0.651306285 container remove 8f14f7dc38cb0f06e9a171e6fc1c267bdcc705e935a8834e96871f94534619f1 (image=quay.io/ceph/ceph:v19, name=affectionate_yalow, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  8 05:45:36 np0005475493 systemd[1]: libpod-conmon-8f14f7dc38cb0f06e9a171e6fc1c267bdcc705e935a8834e96871f94534619f1.scope: Deactivated successfully.
Oct  8 05:45:36 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v93: 193 pgs: 193 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Oct  8 05:45:36 np0005475493 ceph-mon[73572]: Saving service node-exporter spec with placement *
Oct  8 05:45:36 np0005475493 ceph-mon[73572]: Updating compute-1:/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.conf
Oct  8 05:45:36 np0005475493 ceph-mon[73572]: Saving service grafana spec with placement compute-0;count:1
Oct  8 05:45:36 np0005475493 ceph-mon[73572]: Updating compute-0:/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.conf
Oct  8 05:45:36 np0005475493 ceph-mon[73572]: Saving service prometheus spec with placement compute-0;count:1
Oct  8 05:45:36 np0005475493 ceph-mon[73572]: Saving service alertmanager spec with placement compute-0;count:1
Oct  8 05:45:36 np0005475493 ceph-mon[73572]: Updating compute-2:/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.conf
Oct  8 05:45:36 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:45:36 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:45:36 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:45:36 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:45:36 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:45:36 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:45:36 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:45:36 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  8 05:45:36 np0005475493 ceph-mon[73572]: from='client.? 192.168.122.100:0/3966809353' entity='client.admin' 
Oct  8 05:45:36 np0005475493 podman[87236]: 2025-10-08 09:45:36.990679044 +0000 UTC m=+0.048827445 container create 16015a05b9190f05dc9e49782f22b1651ce473bfa24c767b223d33290cbffa64 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_ishizaka, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  8 05:45:37 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 6.1f scrub starts
Oct  8 05:45:37 np0005475493 systemd[1]: Started libpod-conmon-16015a05b9190f05dc9e49782f22b1651ce473bfa24c767b223d33290cbffa64.scope.
Oct  8 05:45:37 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 6.1f scrub ok
Oct  8 05:45:37 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:45:37 np0005475493 podman[87236]: 2025-10-08 09:45:37.054176829 +0000 UTC m=+0.112325230 container init 16015a05b9190f05dc9e49782f22b1651ce473bfa24c767b223d33290cbffa64 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_ishizaka, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  8 05:45:37 np0005475493 podman[87236]: 2025-10-08 09:45:37.061900849 +0000 UTC m=+0.120049260 container start 16015a05b9190f05dc9e49782f22b1651ce473bfa24c767b223d33290cbffa64 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_ishizaka, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Oct  8 05:45:37 np0005475493 silly_ishizaka[87252]: 167 167
Oct  8 05:45:37 np0005475493 systemd[1]: libpod-16015a05b9190f05dc9e49782f22b1651ce473bfa24c767b223d33290cbffa64.scope: Deactivated successfully.
Oct  8 05:45:37 np0005475493 podman[87236]: 2025-10-08 09:45:37.067303633 +0000 UTC m=+0.125452074 container attach 16015a05b9190f05dc9e49782f22b1651ce473bfa24c767b223d33290cbffa64 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_ishizaka, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True)
Oct  8 05:45:37 np0005475493 podman[87236]: 2025-10-08 09:45:37.067581985 +0000 UTC m=+0.125730396 container died 16015a05b9190f05dc9e49782f22b1651ce473bfa24c767b223d33290cbffa64 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_ishizaka, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  8 05:45:37 np0005475493 podman[87236]: 2025-10-08 09:45:36.97322528 +0000 UTC m=+0.031373701 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 05:45:37 np0005475493 systemd[1]: var-lib-containers-storage-overlay-8b7c36b75f158b1a4faa85ac697b6014b9c43ca447a6975eb6a80c12e5df69c8-merged.mount: Deactivated successfully.
Oct  8 05:45:37 np0005475493 podman[87236]: 2025-10-08 09:45:37.117685773 +0000 UTC m=+0.175834184 container remove 16015a05b9190f05dc9e49782f22b1651ce473bfa24c767b223d33290cbffa64 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_ishizaka, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  8 05:45:37 np0005475493 systemd[1]: libpod-conmon-16015a05b9190f05dc9e49782f22b1651ce473bfa24c767b223d33290cbffa64.scope: Deactivated successfully.
Oct  8 05:45:37 np0005475493 podman[87303]: 2025-10-08 09:45:37.288664564 +0000 UTC m=+0.045899005 container create 711a3e5c608c1c898034f4ee8f55f42c27ea17f75abdb735b420c7389feffaa5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_easley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  8 05:45:37 np0005475493 python3[87297]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 787292cc-8154-50c4-9e00-e9be3e817149 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/dashboard/ssl_server_port 8443 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  8 05:45:37 np0005475493 ceph-mgr[73869]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/2890316650; not ready for session (expect reconnect)
Oct  8 05:45:37 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Oct  8 05:45:37 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct  8 05:45:37 np0005475493 ceph-mgr[73869]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct  8 05:45:37 np0005475493 systemd[1]: Started libpod-conmon-711a3e5c608c1c898034f4ee8f55f42c27ea17f75abdb735b420c7389feffaa5.scope.
Oct  8 05:45:37 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:45:37 np0005475493 podman[87317]: 2025-10-08 09:45:37.360595637 +0000 UTC m=+0.053269820 container create 461530d8334eae3f2771cdb0c926da225bc4dc8606d45fb1ed858397328e35e4 (image=quay.io/ceph/ceph:v19, name=lucid_galois, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  8 05:45:37 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a553839c831bcba7d82d94566b1abeadce4c4234df9bae55e7eaacfb1a260d44/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  8 05:45:37 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a553839c831bcba7d82d94566b1abeadce4c4234df9bae55e7eaacfb1a260d44/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 05:45:37 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a553839c831bcba7d82d94566b1abeadce4c4234df9bae55e7eaacfb1a260d44/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 05:45:37 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a553839c831bcba7d82d94566b1abeadce4c4234df9bae55e7eaacfb1a260d44/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  8 05:45:37 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a553839c831bcba7d82d94566b1abeadce4c4234df9bae55e7eaacfb1a260d44/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  8 05:45:37 np0005475493 podman[87303]: 2025-10-08 09:45:37.265210271 +0000 UTC m=+0.022444722 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 05:45:37 np0005475493 podman[87303]: 2025-10-08 09:45:37.376706825 +0000 UTC m=+0.133941266 container init 711a3e5c608c1c898034f4ee8f55f42c27ea17f75abdb735b420c7389feffaa5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_easley, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  8 05:45:37 np0005475493 podman[87303]: 2025-10-08 09:45:37.383261737 +0000 UTC m=+0.140496168 container start 711a3e5c608c1c898034f4ee8f55f42c27ea17f75abdb735b420c7389feffaa5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_easley, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct  8 05:45:37 np0005475493 podman[87303]: 2025-10-08 09:45:37.387365108 +0000 UTC m=+0.144599539 container attach 711a3e5c608c1c898034f4ee8f55f42c27ea17f75abdb735b420c7389feffaa5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_easley, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Oct  8 05:45:37 np0005475493 systemd[1]: Started libpod-conmon-461530d8334eae3f2771cdb0c926da225bc4dc8606d45fb1ed858397328e35e4.scope.
Oct  8 05:45:37 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e33 do_prune osdmap full prune enabled
Oct  8 05:45:37 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:45:37 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/555a56cfe474a6ad3cf6eb32850f876051874dc60dd80aa552a7176beb41ec95/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct  8 05:45:37 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/555a56cfe474a6ad3cf6eb32850f876051874dc60dd80aa552a7176beb41ec95/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 05:45:37 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/555a56cfe474a6ad3cf6eb32850f876051874dc60dd80aa552a7176beb41ec95/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 05:45:37 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e34 e34: 3 total, 3 up, 3 in
Oct  8 05:45:37 np0005475493 ceph-mon[73572]: log_channel(cluster) log [INF] : osd.2 [v2:192.168.122.102:6800/2890316650,v1:192.168.122.102:6801/2890316650] boot
Oct  8 05:45:37 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e34: 3 total, 3 up, 3 in
Oct  8 05:45:37 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Oct  8 05:45:37 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct  8 05:45:37 np0005475493 podman[87317]: 2025-10-08 09:45:37.429164282 +0000 UTC m=+0.121838495 container init 461530d8334eae3f2771cdb0c926da225bc4dc8606d45fb1ed858397328e35e4 (image=quay.io/ceph/ceph:v19, name=lucid_galois, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct  8 05:45:37 np0005475493 podman[87317]: 2025-10-08 09:45:37.338105664 +0000 UTC m=+0.030779867 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  8 05:45:37 np0005475493 podman[87317]: 2025-10-08 09:45:37.435098517 +0000 UTC m=+0.127772700 container start 461530d8334eae3f2771cdb0c926da225bc4dc8606d45fb1ed858397328e35e4 (image=quay.io/ceph/ceph:v19, name=lucid_galois, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  8 05:45:37 np0005475493 podman[87317]: 2025-10-08 09:45:37.440323975 +0000 UTC m=+0.132998158 container attach 461530d8334eae3f2771cdb0c926da225bc4dc8606d45fb1ed858397328e35e4 (image=quay.io/ceph/ceph:v19, name=lucid_galois, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct  8 05:45:37 np0005475493 elated_easley[87333]: --> passed data devices: 0 physical, 1 LVM
Oct  8 05:45:37 np0005475493 elated_easley[87333]: --> All data devices are unavailable
Oct  8 05:45:37 np0005475493 systemd[1]: libpod-711a3e5c608c1c898034f4ee8f55f42c27ea17f75abdb735b420c7389feffaa5.scope: Deactivated successfully.
Oct  8 05:45:37 np0005475493 podman[87303]: 2025-10-08 09:45:37.719231522 +0000 UTC m=+0.476465973 container died 711a3e5c608c1c898034f4ee8f55f42c27ea17f75abdb735b420c7389feffaa5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_easley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  8 05:45:37 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/ssl_server_port}] v 0)
Oct  8 05:45:37 np0005475493 systemd[1]: var-lib-containers-storage-overlay-a553839c831bcba7d82d94566b1abeadce4c4234df9bae55e7eaacfb1a260d44-merged.mount: Deactivated successfully.
Oct  8 05:45:37 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1514290386' entity='client.admin' 
Oct  8 05:45:37 np0005475493 podman[87303]: 2025-10-08 09:45:37.788779877 +0000 UTC m=+0.546014308 container remove 711a3e5c608c1c898034f4ee8f55f42c27ea17f75abdb735b420c7389feffaa5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_easley, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  8 05:45:37 np0005475493 systemd[1]: libpod-conmon-711a3e5c608c1c898034f4ee8f55f42c27ea17f75abdb735b420c7389feffaa5.scope: Deactivated successfully.
Oct  8 05:45:37 np0005475493 systemd[1]: libpod-461530d8334eae3f2771cdb0c926da225bc4dc8606d45fb1ed858397328e35e4.scope: Deactivated successfully.
Oct  8 05:45:37 np0005475493 podman[87317]: 2025-10-08 09:45:37.804335902 +0000 UTC m=+0.497010115 container died 461530d8334eae3f2771cdb0c926da225bc4dc8606d45fb1ed858397328e35e4 (image=quay.io/ceph/ceph:v19, name=lucid_galois, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  8 05:45:37 np0005475493 systemd[1]: var-lib-containers-storage-overlay-555a56cfe474a6ad3cf6eb32850f876051874dc60dd80aa552a7176beb41ec95-merged.mount: Deactivated successfully.
Oct  8 05:45:37 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 34 pg[4.19( empty local-lis/les=27/29 n=0 ec=27/18 lis/c=27/27 les/c/f=29/29/0 sis=34 pruub=7.585172653s) [2] r=-1 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.505683899s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:45:37 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 34 pg[4.19( empty local-lis/les=27/29 n=0 ec=27/18 lis/c=27/27 les/c/f=29/29/0 sis=34 pruub=7.585124493s) [2] r=-1 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.505683899s@ mbc={}] state<Start>: transitioning to Stray
Oct  8 05:45:37 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 34 pg[3.1d( empty local-lis/les=25/29 n=0 ec=25/17 lis/c=25/25 les/c/f=29/29/0 sis=34 pruub=7.591294765s) [2] r=-1 lpr=34 pi=[25,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.511878967s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:45:37 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 34 pg[6.1b( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=34 pruub=8.646142960s) [2] r=-1 lpr=34 pi=[29,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 74.566734314s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:45:37 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 34 pg[3.1d( empty local-lis/les=25/29 n=0 ec=25/17 lis/c=25/25 les/c/f=29/29/0 sis=34 pruub=7.591255665s) [2] r=-1 lpr=34 pi=[25,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.511878967s@ mbc={}] state<Start>: transitioning to Stray
Oct  8 05:45:37 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 34 pg[6.1b( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=34 pruub=8.646098137s) [2] r=-1 lpr=34 pi=[29,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 74.566734314s@ mbc={}] state<Start>: transitioning to Stray
Oct  8 05:45:37 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 34 pg[5.1a( empty local-lis/les=27/29 n=0 ec=27/19 lis/c=27/27 les/c/f=29/29/0 sis=34 pruub=7.591252327s) [2] r=-1 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.512023926s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:45:37 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 34 pg[5.1a( empty local-lis/les=27/29 n=0 ec=27/19 lis/c=27/27 les/c/f=29/29/0 sis=34 pruub=7.591234684s) [2] r=-1 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.512023926s@ mbc={}] state<Start>: transitioning to Stray
Oct  8 05:45:37 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 34 pg[3.1b( empty local-lis/les=25/29 n=0 ec=25/17 lis/c=25/25 les/c/f=29/29/0 sis=34 pruub=7.591255665s) [2] r=-1 lpr=34 pi=[25,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.512107849s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:45:37 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 34 pg[3.1b( empty local-lis/les=25/29 n=0 ec=25/17 lis/c=25/25 les/c/f=29/29/0 sis=34 pruub=7.591235161s) [2] r=-1 lpr=34 pi=[25,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.512107849s@ mbc={}] state<Start>: transitioning to Stray
Oct  8 05:45:37 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 34 pg[4.1c( empty local-lis/les=27/29 n=0 ec=27/18 lis/c=27/27 les/c/f=29/29/0 sis=34 pruub=7.590954781s) [2] r=-1 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.512062073s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:45:37 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 34 pg[4.1c( empty local-lis/les=27/29 n=0 ec=27/18 lis/c=27/27 les/c/f=29/29/0 sis=34 pruub=7.590933800s) [2] r=-1 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.512062073s@ mbc={}] state<Start>: transitioning to Stray
Oct  8 05:45:37 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 34 pg[6.1e( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=34 pruub=8.648948669s) [2] r=-1 lpr=34 pi=[29,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 74.570114136s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:45:37 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 34 pg[6.1e( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=34 pruub=8.648931503s) [2] r=-1 lpr=34 pi=[29,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 74.570114136s@ mbc={}] state<Start>: transitioning to Stray
Oct  8 05:45:37 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 34 pg[4.1d( empty local-lis/les=27/29 n=0 ec=27/18 lis/c=27/27 les/c/f=29/29/0 sis=34 pruub=7.590936184s) [2] r=-1 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.512184143s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:45:37 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 34 pg[3.1a( empty local-lis/les=25/29 n=0 ec=25/17 lis/c=25/25 les/c/f=29/29/0 sis=34 pruub=7.591032028s) [2] r=-1 lpr=34 pi=[25,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.512313843s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:45:37 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 34 pg[4.1d( empty local-lis/les=27/29 n=0 ec=27/18 lis/c=27/27 les/c/f=29/29/0 sis=34 pruub=7.590922356s) [2] r=-1 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.512184143s@ mbc={}] state<Start>: transitioning to Stray
Oct  8 05:45:37 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 34 pg[3.1a( empty local-lis/les=25/29 n=0 ec=25/17 lis/c=25/25 les/c/f=29/29/0 sis=34 pruub=7.591004372s) [2] r=-1 lpr=34 pi=[25,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.512313843s@ mbc={}] state<Start>: transitioning to Stray
Oct  8 05:45:37 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 34 pg[3.9( empty local-lis/les=25/29 n=0 ec=25/17 lis/c=25/25 les/c/f=29/29/0 sis=34 pruub=7.591074467s) [2] r=-1 lpr=34 pi=[25,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.512420654s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:45:37 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 34 pg[3.9( empty local-lis/les=25/29 n=0 ec=25/17 lis/c=25/25 les/c/f=29/29/0 sis=34 pruub=7.591054916s) [2] r=-1 lpr=34 pi=[25,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.512420654s@ mbc={}] state<Start>: transitioning to Stray
Oct  8 05:45:37 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 34 pg[5.e( empty local-lis/les=27/29 n=0 ec=27/19 lis/c=27/27 les/c/f=29/29/0 sis=34 pruub=7.591136456s) [2] r=-1 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.512588501s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:45:37 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 34 pg[3.8( empty local-lis/les=25/29 n=0 ec=25/17 lis/c=25/25 les/c/f=29/29/0 sis=34 pruub=7.591069221s) [2] r=-1 lpr=34 pi=[25,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.512535095s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:45:37 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 34 pg[5.e( empty local-lis/les=27/29 n=0 ec=27/19 lis/c=27/27 les/c/f=29/29/0 sis=34 pruub=7.591107368s) [2] r=-1 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.512588501s@ mbc={}] state<Start>: transitioning to Stray
Oct  8 05:45:37 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 34 pg[3.8( empty local-lis/les=25/29 n=0 ec=25/17 lis/c=25/25 les/c/f=29/29/0 sis=34 pruub=7.591048717s) [2] r=-1 lpr=34 pi=[25,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.512535095s@ mbc={}] state<Start>: transitioning to Stray
Oct  8 05:45:37 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 34 pg[6.1( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=34 pruub=8.648645401s) [2] r=-1 lpr=34 pi=[29,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 74.570251465s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:45:37 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 34 pg[6.1( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=34 pruub=8.648628235s) [2] r=-1 lpr=34 pi=[29,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 74.570251465s@ mbc={}] state<Start>: transitioning to Stray
Oct  8 05:45:37 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 34 pg[5.4( empty local-lis/les=27/29 n=0 ec=27/19 lis/c=27/27 les/c/f=29/29/0 sis=34 pruub=7.591042519s) [2] r=-1 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.512802124s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:45:37 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 34 pg[5.4( empty local-lis/les=27/29 n=0 ec=27/19 lis/c=27/27 les/c/f=29/29/0 sis=34 pruub=7.591028214s) [2] r=-1 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.512802124s@ mbc={}] state<Start>: transitioning to Stray
Oct  8 05:45:37 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 34 pg[4.6( empty local-lis/les=27/29 n=0 ec=27/18 lis/c=27/27 les/c/f=29/29/0 sis=34 pruub=7.590711117s) [2] r=-1 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.512779236s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:45:37 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 34 pg[4.6( empty local-lis/les=27/29 n=0 ec=27/18 lis/c=27/27 les/c/f=29/29/0 sis=34 pruub=7.590661526s) [2] r=-1 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.512779236s@ mbc={}] state<Start>: transitioning to Stray
Oct  8 05:45:37 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 34 pg[4.2( empty local-lis/les=27/29 n=0 ec=27/18 lis/c=27/27 les/c/f=29/29/0 sis=34 pruub=7.591164112s) [2] r=-1 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.513366699s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:45:37 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 34 pg[4.1( empty local-lis/les=27/29 n=0 ec=27/18 lis/c=27/27 les/c/f=29/29/0 sis=34 pruub=7.591093540s) [2] r=-1 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.513374329s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:45:37 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 34 pg[4.2( empty local-lis/les=27/29 n=0 ec=27/18 lis/c=27/27 les/c/f=29/29/0 sis=34 pruub=7.591070652s) [2] r=-1 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.513366699s@ mbc={}] state<Start>: transitioning to Stray
Oct  8 05:45:37 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 34 pg[4.1( empty local-lis/les=27/29 n=0 ec=27/18 lis/c=27/27 les/c/f=29/29/0 sis=34 pruub=7.591074944s) [2] r=-1 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.513374329s@ mbc={}] state<Start>: transitioning to Stray
Oct  8 05:45:37 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 34 pg[5.0( empty local-lis/les=27/29 n=0 ec=19/19 lis/c=27/27 les/c/f=29/29/0 sis=34 pruub=7.591279507s) [2] r=-1 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.513656616s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:45:37 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 34 pg[5.0( empty local-lis/les=27/29 n=0 ec=19/19 lis/c=27/27 les/c/f=29/29/0 sis=34 pruub=7.591251850s) [2] r=-1 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.513656616s@ mbc={}] state<Start>: transitioning to Stray
Oct  8 05:45:37 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 34 pg[3.0( empty local-lis/les=25/29 n=0 ec=17/17 lis/c=25/25 les/c/f=29/29/0 sis=34 pruub=7.591155529s) [2] r=-1 lpr=34 pi=[25,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.513679504s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:45:37 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 34 pg[3.0( empty local-lis/les=25/29 n=0 ec=17/17 lis/c=25/25 les/c/f=29/29/0 sis=34 pruub=7.591136932s) [2] r=-1 lpr=34 pi=[25,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.513679504s@ mbc={}] state<Start>: transitioning to Stray
Oct  8 05:45:37 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 34 pg[5.d( empty local-lis/les=27/29 n=0 ec=27/19 lis/c=27/27 les/c/f=29/29/0 sis=34 pruub=7.591093063s) [2] r=-1 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.513748169s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:45:37 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 34 pg[4.3( empty local-lis/les=27/29 n=0 ec=27/18 lis/c=27/27 les/c/f=29/29/0 sis=34 pruub=7.590025902s) [2] r=-1 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.512626648s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:45:37 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 34 pg[4.3( empty local-lis/les=27/29 n=0 ec=27/18 lis/c=27/27 les/c/f=29/29/0 sis=34 pruub=7.589921951s) [2] r=-1 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.512626648s@ mbc={}] state<Start>: transitioning to Stray
Oct  8 05:45:37 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 34 pg[5.b( empty local-lis/les=27/29 n=0 ec=27/19 lis/c=27/27 les/c/f=29/29/0 sis=34 pruub=7.590962410s) [2] r=-1 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.513908386s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:45:37 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 34 pg[5.b( empty local-lis/les=27/29 n=0 ec=27/19 lis/c=27/27 les/c/f=29/29/0 sis=34 pruub=7.590939522s) [2] r=-1 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.513908386s@ mbc={}] state<Start>: transitioning to Stray
Oct  8 05:45:37 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 34 pg[4.9( empty local-lis/les=27/29 n=0 ec=27/18 lis/c=27/27 les/c/f=29/29/0 sis=34 pruub=7.590873241s) [2] r=-1 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.513893127s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:45:37 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 34 pg[4.9( empty local-lis/les=27/29 n=0 ec=27/18 lis/c=27/27 les/c/f=29/29/0 sis=34 pruub=7.590769291s) [2] r=-1 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.513893127s@ mbc={}] state<Start>: transitioning to Stray
Oct  8 05:45:37 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 34 pg[5.8( empty local-lis/les=27/29 n=0 ec=27/19 lis/c=27/27 les/c/f=29/29/0 sis=34 pruub=7.590764523s) [2] r=-1 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.513961792s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:45:37 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 34 pg[5.8( empty local-lis/les=27/29 n=0 ec=27/19 lis/c=27/27 les/c/f=29/29/0 sis=34 pruub=7.590742588s) [2] r=-1 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.513961792s@ mbc={}] state<Start>: transitioning to Stray
Oct  8 05:45:37 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 34 pg[4.8( empty local-lis/les=27/29 n=0 ec=27/18 lis/c=27/27 les/c/f=29/29/0 sis=34 pruub=7.590628147s) [2] r=-1 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.513954163s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:45:37 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 34 pg[3.11( empty local-lis/les=25/29 n=0 ec=25/17 lis/c=25/25 les/c/f=29/29/0 sis=34 pruub=7.590536594s) [2] r=-1 lpr=34 pi=[25,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.514076233s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:45:37 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 34 pg[4.8( empty local-lis/les=27/29 n=0 ec=27/18 lis/c=27/27 les/c/f=29/29/0 sis=34 pruub=7.590607643s) [2] r=-1 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.513954163s@ mbc={}] state<Start>: transitioning to Stray
Oct  8 05:45:37 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 34 pg[5.d( empty local-lis/les=27/29 n=0 ec=27/19 lis/c=27/27 les/c/f=29/29/0 sis=34 pruub=7.590866566s) [2] r=-1 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.513748169s@ mbc={}] state<Start>: transitioning to Stray
Oct  8 05:45:37 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 34 pg[3.11( empty local-lis/les=25/29 n=0 ec=25/17 lis/c=25/25 les/c/f=29/29/0 sis=34 pruub=7.590515614s) [2] r=-1 lpr=34 pi=[25,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.514076233s@ mbc={}] state<Start>: transitioning to Stray
Oct  8 05:45:37 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 34 pg[6.17( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=34 pruub=8.646706581s) [2] r=-1 lpr=34 pi=[29,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 74.570465088s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:45:37 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 34 pg[3.e( empty local-lis/les=25/29 n=0 ec=25/17 lis/c=25/25 les/c/f=29/29/0 sis=34 pruub=7.590148926s) [2] r=-1 lpr=34 pi=[25,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.513923645s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:45:37 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 34 pg[6.17( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=34 pruub=8.646681786s) [2] r=-1 lpr=34 pi=[29,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 74.570465088s@ mbc={}] state<Start>: transitioning to Stray
Oct  8 05:45:37 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 34 pg[3.e( empty local-lis/les=25/29 n=0 ec=25/17 lis/c=25/25 les/c/f=29/29/0 sis=34 pruub=7.590128422s) [2] r=-1 lpr=34 pi=[25,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.513923645s@ mbc={}] state<Start>: transitioning to Stray
Oct  8 05:45:37 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 34 pg[5.12( empty local-lis/les=27/29 n=0 ec=27/19 lis/c=27/27 les/c/f=29/29/0 sis=34 pruub=7.590230942s) [2] r=-1 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.514350891s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:45:37 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 34 pg[5.12( empty local-lis/les=27/29 n=0 ec=27/19 lis/c=27/27 les/c/f=29/29/0 sis=34 pruub=7.590206623s) [2] r=-1 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.514350891s@ mbc={}] state<Start>: transitioning to Stray
Oct  8 05:45:37 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 34 pg[4.15( empty local-lis/les=27/29 n=0 ec=27/18 lis/c=27/27 les/c/f=29/29/0 sis=34 pruub=7.589753628s) [2] r=-1 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.514129639s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:45:37 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 34 pg[4.14( empty local-lis/les=27/29 n=0 ec=27/18 lis/c=27/27 les/c/f=29/29/0 sis=34 pruub=7.589865685s) [2] r=-1 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.514190674s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:45:37 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 34 pg[5.13( empty local-lis/les=27/29 n=0 ec=27/19 lis/c=27/27 les/c/f=29/29/0 sis=34 pruub=7.589458466s) [2] r=-1 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.514404297s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:45:37 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 34 pg[4.15( empty local-lis/les=27/29 n=0 ec=27/18 lis/c=27/27 les/c/f=29/29/0 sis=34 pruub=7.589147091s) [2] r=-1 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.514129639s@ mbc={}] state<Start>: transitioning to Stray
Oct  8 05:45:37 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 34 pg[5.13( empty local-lis/les=27/29 n=0 ec=27/19 lis/c=27/27 les/c/f=29/29/0 sis=34 pruub=7.589407444s) [2] r=-1 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.514404297s@ mbc={}] state<Start>: transitioning to Stray
Oct  8 05:45:37 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 34 pg[3.15( empty local-lis/les=25/29 n=0 ec=25/17 lis/c=25/25 les/c/f=29/29/0 sis=34 pruub=7.596970558s) [2] r=-1 lpr=34 pi=[25,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.521919250s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:45:37 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 34 pg[3.15( empty local-lis/les=25/29 n=0 ec=25/17 lis/c=25/25 les/c/f=29/29/0 sis=34 pruub=7.596794605s) [2] r=-1 lpr=34 pi=[25,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.521919250s@ mbc={}] state<Start>: transitioning to Stray
Oct  8 05:45:37 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 34 pg[4.14( empty local-lis/les=27/29 n=0 ec=27/18 lis/c=27/27 les/c/f=29/29/0 sis=34 pruub=7.589154243s) [2] r=-1 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.514190674s@ mbc={}] state<Start>: transitioning to Stray
Oct  8 05:45:37 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 34 pg[4.1f( empty local-lis/les=27/29 n=0 ec=27/18 lis/c=27/27 les/c/f=29/29/0 sis=34 pruub=7.592108727s) [2] r=-1 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.518920898s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:45:37 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 34 pg[4.1f( empty local-lis/les=27/29 n=0 ec=27/18 lis/c=27/27 les/c/f=29/29/0 sis=34 pruub=7.592087269s) [2] r=-1 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.518920898s@ mbc={}] state<Start>: transitioning to Stray
Oct  8 05:45:37 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 34 pg[6.1c( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=34 pruub=8.644055367s) [2] r=-1 lpr=34 pi=[29,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 74.570907593s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:45:37 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 34 pg[6.1c( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=34 pruub=8.644032478s) [2] r=-1 lpr=34 pi=[29,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 74.570907593s@ mbc={}] state<Start>: transitioning to Stray
Oct  8 05:45:37 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 34 pg[6.12( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=34 pruub=8.643602371s) [2] r=-1 lpr=34 pi=[29,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 74.570556641s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:45:37 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 34 pg[6.12( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=34 pruub=8.643568039s) [2] r=-1 lpr=34 pi=[29,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 74.570556641s@ mbc={}] state<Start>: transitioning to Stray
Oct  8 05:45:37 np0005475493 podman[87317]: 2025-10-08 09:45:37.862700433 +0000 UTC m=+0.555374616 container remove 461530d8334eae3f2771cdb0c926da225bc4dc8606d45fb1ed858397328e35e4 (image=quay.io/ceph/ceph:v19, name=lucid_galois, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  8 05:45:37 np0005475493 systemd[1]: libpod-conmon-461530d8334eae3f2771cdb0c926da225bc4dc8606d45fb1ed858397328e35e4.scope: Deactivated successfully.
Oct  8 05:45:37 np0005475493 ceph-mon[73572]: OSD bench result of 8207.449951 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Oct  8 05:45:37 np0005475493 ceph-mon[73572]: osd.2 [v2:192.168.122.102:6800/2890316650,v1:192.168.122.102:6801/2890316650] boot
Oct  8 05:45:37 np0005475493 ceph-mon[73572]: from='client.? 192.168.122.100:0/1514290386' entity='client.admin' 
Oct  8 05:45:38 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 6.c deep-scrub starts
Oct  8 05:45:38 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 6.c deep-scrub ok
Oct  8 05:45:38 np0005475493 python3[87478]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 787292cc-8154-50c4-9e00-e9be3e817149 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/dashboard/ssl false _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  8 05:45:38 np0005475493 podman[87503]: 2025-10-08 09:45:38.265424506 +0000 UTC m=+0.053215048 container create 0ffb8f705043d0dd55ba3a7df6130b90d89a6a8342a9f5e994500908f0bed1db (image=quay.io/ceph/ceph:v19, name=amazing_swirles, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Oct  8 05:45:38 np0005475493 systemd[1]: Started libpod-conmon-0ffb8f705043d0dd55ba3a7df6130b90d89a6a8342a9f5e994500908f0bed1db.scope.
Oct  8 05:45:38 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:45:38 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/918541905b2588f9fec28fd1fa033415f6f06258d72f80daf7d8c8dbf7c5bad3/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 05:45:38 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/918541905b2588f9fec28fd1fa033415f6f06258d72f80daf7d8c8dbf7c5bad3/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 05:45:38 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/918541905b2588f9fec28fd1fa033415f6f06258d72f80daf7d8c8dbf7c5bad3/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct  8 05:45:38 np0005475493 podman[87503]: 2025-10-08 09:45:38.235668863 +0000 UTC m=+0.023459425 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  8 05:45:38 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e34 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  8 05:45:38 np0005475493 podman[87503]: 2025-10-08 09:45:38.338679175 +0000 UTC m=+0.126469727 container init 0ffb8f705043d0dd55ba3a7df6130b90d89a6a8342a9f5e994500908f0bed1db (image=quay.io/ceph/ceph:v19, name=amazing_swirles, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  8 05:45:38 np0005475493 podman[87503]: 2025-10-08 09:45:38.344696364 +0000 UTC m=+0.132486906 container start 0ffb8f705043d0dd55ba3a7df6130b90d89a6a8342a9f5e994500908f0bed1db (image=quay.io/ceph/ceph:v19, name=amazing_swirles, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct  8 05:45:38 np0005475493 podman[87503]: 2025-10-08 09:45:38.348821845 +0000 UTC m=+0.136612377 container attach 0ffb8f705043d0dd55ba3a7df6130b90d89a6a8342a9f5e994500908f0bed1db (image=quay.io/ceph/ceph:v19, name=amazing_swirles, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  8 05:45:38 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e34 do_prune osdmap full prune enabled
Oct  8 05:45:38 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e35 e35: 3 total, 3 up, 3 in
Oct  8 05:45:38 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e35: 3 total, 3 up, 3 in
Oct  8 05:45:38 np0005475493 podman[87538]: 2025-10-08 09:45:38.443743912 +0000 UTC m=+0.035368618 container create cfce3773f0c7bf23b5beb147d3dfd18bdccd09704f9882c5c7ae796b70395371 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_chebyshev, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  8 05:45:38 np0005475493 systemd[1]: Started libpod-conmon-cfce3773f0c7bf23b5beb147d3dfd18bdccd09704f9882c5c7ae796b70395371.scope.
Oct  8 05:45:38 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:45:38 np0005475493 podman[87538]: 2025-10-08 09:45:38.494847112 +0000 UTC m=+0.086471848 container init cfce3773f0c7bf23b5beb147d3dfd18bdccd09704f9882c5c7ae796b70395371 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_chebyshev, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Oct  8 05:45:38 np0005475493 podman[87538]: 2025-10-08 09:45:38.500324089 +0000 UTC m=+0.091948795 container start cfce3773f0c7bf23b5beb147d3dfd18bdccd09704f9882c5c7ae796b70395371 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_chebyshev, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  8 05:45:38 np0005475493 festive_chebyshev[87573]: 167 167
Oct  8 05:45:38 np0005475493 systemd[1]: libpod-cfce3773f0c7bf23b5beb147d3dfd18bdccd09704f9882c5c7ae796b70395371.scope: Deactivated successfully.
Oct  8 05:45:38 np0005475493 conmon[87573]: conmon cfce3773f0c7bf23b5be <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-cfce3773f0c7bf23b5beb147d3dfd18bdccd09704f9882c5c7ae796b70395371.scope/container/memory.events
Oct  8 05:45:38 np0005475493 podman[87538]: 2025-10-08 09:45:38.508885734 +0000 UTC m=+0.100510470 container attach cfce3773f0c7bf23b5beb147d3dfd18bdccd09704f9882c5c7ae796b70395371 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_chebyshev, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  8 05:45:38 np0005475493 podman[87538]: 2025-10-08 09:45:38.509399286 +0000 UTC m=+0.101024012 container died cfce3773f0c7bf23b5beb147d3dfd18bdccd09704f9882c5c7ae796b70395371 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_chebyshev, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct  8 05:45:38 np0005475493 podman[87538]: 2025-10-08 09:45:38.429286182 +0000 UTC m=+0.020910908 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 05:45:38 np0005475493 systemd[1]: var-lib-containers-storage-overlay-5453c8781f517bf011d9a881940494728f830248255f1034bd31f67b1f4e325c-merged.mount: Deactivated successfully.
Oct  8 05:45:38 np0005475493 podman[87538]: 2025-10-08 09:45:38.569688692 +0000 UTC m=+0.161313408 container remove cfce3773f0c7bf23b5beb147d3dfd18bdccd09704f9882c5c7ae796b70395371 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_chebyshev, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct  8 05:45:38 np0005475493 systemd[1]: libpod-conmon-cfce3773f0c7bf23b5beb147d3dfd18bdccd09704f9882c5c7ae796b70395371.scope: Deactivated successfully.
Oct  8 05:45:38 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/ssl}] v 0)
Oct  8 05:45:38 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2213379190' entity='client.admin' 
Oct  8 05:45:38 np0005475493 podman[87595]: 2025-10-08 09:45:38.762629313 +0000 UTC m=+0.077296019 container create fd6f21b30f82e7c68d858c7b0cc8cf6537d415905353a02b29b8cbc1b257557a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_napier, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  8 05:45:38 np0005475493 systemd[1]: libpod-0ffb8f705043d0dd55ba3a7df6130b90d89a6a8342a9f5e994500908f0bed1db.scope: Deactivated successfully.
Oct  8 05:45:38 np0005475493 podman[87503]: 2025-10-08 09:45:38.774673458 +0000 UTC m=+0.562463990 container died 0ffb8f705043d0dd55ba3a7df6130b90d89a6a8342a9f5e994500908f0bed1db (image=quay.io/ceph/ceph:v19, name=amazing_swirles, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  8 05:45:38 np0005475493 systemd[1]: Started libpod-conmon-fd6f21b30f82e7c68d858c7b0cc8cf6537d415905353a02b29b8cbc1b257557a.scope.
Oct  8 05:45:38 np0005475493 podman[87595]: 2025-10-08 09:45:38.704530128 +0000 UTC m=+0.019196854 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 05:45:38 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:45:38 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66619cffad215eae24821cedc028663bcfc8f86a26df3f3f6d2d52f19b666281/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  8 05:45:38 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66619cffad215eae24821cedc028663bcfc8f86a26df3f3f6d2d52f19b666281/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 05:45:38 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66619cffad215eae24821cedc028663bcfc8f86a26df3f3f6d2d52f19b666281/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 05:45:38 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66619cffad215eae24821cedc028663bcfc8f86a26df3f3f6d2d52f19b666281/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  8 05:45:38 np0005475493 podman[87595]: 2025-10-08 09:45:38.881005403 +0000 UTC m=+0.195672119 container init fd6f21b30f82e7c68d858c7b0cc8cf6537d415905353a02b29b8cbc1b257557a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_napier, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  8 05:45:38 np0005475493 podman[87595]: 2025-10-08 09:45:38.888895095 +0000 UTC m=+0.203561801 container start fd6f21b30f82e7c68d858c7b0cc8cf6537d415905353a02b29b8cbc1b257557a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_napier, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct  8 05:45:38 np0005475493 podman[87595]: 2025-10-08 09:45:38.936079192 +0000 UTC m=+0.250745908 container attach fd6f21b30f82e7c68d858c7b0cc8cf6537d415905353a02b29b8cbc1b257557a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_napier, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  8 05:45:38 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v96: 193 pgs: 57 peering, 136 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Oct  8 05:45:38 np0005475493 systemd[1]: var-lib-containers-storage-overlay-918541905b2588f9fec28fd1fa033415f6f06258d72f80daf7d8c8dbf7c5bad3-merged.mount: Deactivated successfully.
Oct  8 05:45:38 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 4.f scrub starts
Oct  8 05:45:39 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 4.f scrub ok
Oct  8 05:45:39 np0005475493 ceph-mon[73572]: from='client.? 192.168.122.100:0/2213379190' entity='client.admin' 
Oct  8 05:45:39 np0005475493 podman[87503]: 2025-10-08 09:45:39.113663243 +0000 UTC m=+0.901453805 container remove 0ffb8f705043d0dd55ba3a7df6130b90d89a6a8342a9f5e994500908f0bed1db (image=quay.io/ceph/ceph:v19, name=amazing_swirles, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  8 05:45:39 np0005475493 systemd[1]: libpod-conmon-0ffb8f705043d0dd55ba3a7df6130b90d89a6a8342a9f5e994500908f0bed1db.scope: Deactivated successfully.
Oct  8 05:45:39 np0005475493 frosty_napier[87621]: {
Oct  8 05:45:39 np0005475493 frosty_napier[87621]:    "1": [
Oct  8 05:45:39 np0005475493 frosty_napier[87621]:        {
Oct  8 05:45:39 np0005475493 frosty_napier[87621]:            "devices": [
Oct  8 05:45:39 np0005475493 frosty_napier[87621]:                "/dev/loop3"
Oct  8 05:45:39 np0005475493 frosty_napier[87621]:            ],
Oct  8 05:45:39 np0005475493 frosty_napier[87621]:            "lv_name": "ceph_lv0",
Oct  8 05:45:39 np0005475493 frosty_napier[87621]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  8 05:45:39 np0005475493 frosty_napier[87621]:            "lv_size": "21470642176",
Oct  8 05:45:39 np0005475493 frosty_napier[87621]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=787292cc-8154-50c4-9e00-e9be3e817149,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=85fe3e7b-5e0f-4a19-934c-310215b2e933,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct  8 05:45:39 np0005475493 frosty_napier[87621]:            "lv_uuid": "znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj",
Oct  8 05:45:39 np0005475493 frosty_napier[87621]:            "name": "ceph_lv0",
Oct  8 05:45:39 np0005475493 frosty_napier[87621]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  8 05:45:39 np0005475493 frosty_napier[87621]:            "tags": {
Oct  8 05:45:39 np0005475493 frosty_napier[87621]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  8 05:45:39 np0005475493 frosty_napier[87621]:                "ceph.block_uuid": "znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj",
Oct  8 05:45:39 np0005475493 frosty_napier[87621]:                "ceph.cephx_lockbox_secret": "",
Oct  8 05:45:39 np0005475493 frosty_napier[87621]:                "ceph.cluster_fsid": "787292cc-8154-50c4-9e00-e9be3e817149",
Oct  8 05:45:39 np0005475493 frosty_napier[87621]:                "ceph.cluster_name": "ceph",
Oct  8 05:45:39 np0005475493 frosty_napier[87621]:                "ceph.crush_device_class": "",
Oct  8 05:45:39 np0005475493 frosty_napier[87621]:                "ceph.encrypted": "0",
Oct  8 05:45:39 np0005475493 frosty_napier[87621]:                "ceph.osd_fsid": "85fe3e7b-5e0f-4a19-934c-310215b2e933",
Oct  8 05:45:39 np0005475493 frosty_napier[87621]:                "ceph.osd_id": "1",
Oct  8 05:45:39 np0005475493 frosty_napier[87621]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  8 05:45:39 np0005475493 frosty_napier[87621]:                "ceph.type": "block",
Oct  8 05:45:39 np0005475493 frosty_napier[87621]:                "ceph.vdo": "0",
Oct  8 05:45:39 np0005475493 frosty_napier[87621]:                "ceph.with_tpm": "0"
Oct  8 05:45:39 np0005475493 frosty_napier[87621]:            },
Oct  8 05:45:39 np0005475493 frosty_napier[87621]:            "type": "block",
Oct  8 05:45:39 np0005475493 frosty_napier[87621]:            "vg_name": "ceph_vg0"
Oct  8 05:45:39 np0005475493 frosty_napier[87621]:        }
Oct  8 05:45:39 np0005475493 frosty_napier[87621]:    ]
Oct  8 05:45:39 np0005475493 frosty_napier[87621]: }
Oct  8 05:45:39 np0005475493 systemd[1]: libpod-fd6f21b30f82e7c68d858c7b0cc8cf6537d415905353a02b29b8cbc1b257557a.scope: Deactivated successfully.
Oct  8 05:45:39 np0005475493 podman[87595]: 2025-10-08 09:45:39.258390885 +0000 UTC m=+0.573057611 container died fd6f21b30f82e7c68d858c7b0cc8cf6537d415905353a02b29b8cbc1b257557a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_napier, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  8 05:45:39 np0005475493 systemd[1]: var-lib-containers-storage-overlay-66619cffad215eae24821cedc028663bcfc8f86a26df3f3f6d2d52f19b666281-merged.mount: Deactivated successfully.
Oct  8 05:45:39 np0005475493 podman[87595]: 2025-10-08 09:45:39.342349816 +0000 UTC m=+0.657016522 container remove fd6f21b30f82e7c68d858c7b0cc8cf6537d415905353a02b29b8cbc1b257557a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_napier, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct  8 05:45:39 np0005475493 systemd[1]: libpod-conmon-fd6f21b30f82e7c68d858c7b0cc8cf6537d415905353a02b29b8cbc1b257557a.scope: Deactivated successfully.
Oct  8 05:45:39 np0005475493 python3[87723]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a -f 'name=ceph-?(.*)-mgr.*' --format \{\{\.Command\}\} --no-trunc#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  8 05:45:39 np0005475493 podman[87776]: 2025-10-08 09:45:39.901138611 +0000 UTC m=+0.054987967 container create 25d005aba565c0018a0feceadb2ac50b47e1b3ee4611f5dde4e11c9f3b967751 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_keldysh, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2)
Oct  8 05:45:39 np0005475493 systemd[1]: Started libpod-conmon-25d005aba565c0018a0feceadb2ac50b47e1b3ee4611f5dde4e11c9f3b967751.scope.
Oct  8 05:45:39 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:45:39 np0005475493 podman[87776]: 2025-10-08 09:45:39.866393782 +0000 UTC m=+0.020243168 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 05:45:39 np0005475493 podman[87776]: 2025-10-08 09:45:39.997058534 +0000 UTC m=+0.150907930 container init 25d005aba565c0018a0feceadb2ac50b47e1b3ee4611f5dde4e11c9f3b967751 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_keldysh, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct  8 05:45:40 np0005475493 podman[87776]: 2025-10-08 09:45:40.003884742 +0000 UTC m=+0.157734108 container start 25d005aba565c0018a0feceadb2ac50b47e1b3ee4611f5dde4e11c9f3b967751 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_keldysh, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  8 05:45:40 np0005475493 admiring_keldysh[87792]: 167 167
Oct  8 05:45:40 np0005475493 systemd[1]: libpod-25d005aba565c0018a0feceadb2ac50b47e1b3ee4611f5dde4e11c9f3b967751.scope: Deactivated successfully.
Oct  8 05:45:40 np0005475493 podman[87776]: 2025-10-08 09:45:40.026811055 +0000 UTC m=+0.180660431 container attach 25d005aba565c0018a0feceadb2ac50b47e1b3ee4611f5dde4e11c9f3b967751 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_keldysh, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  8 05:45:40 np0005475493 podman[87776]: 2025-10-08 09:45:40.027824507 +0000 UTC m=+0.181673863 container died 25d005aba565c0018a0feceadb2ac50b47e1b3ee4611f5dde4e11c9f3b967751 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_keldysh, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Oct  8 05:45:40 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 3.4 deep-scrub starts
Oct  8 05:45:40 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 3.4 deep-scrub ok
Oct  8 05:45:40 np0005475493 systemd[1]: var-lib-containers-storage-overlay-7f93151b2c2b7c8818db2ec9a0e6a5f7a2c11eca44f7b99ddea79680b2bc6436-merged.mount: Deactivated successfully.
Oct  8 05:45:40 np0005475493 podman[87776]: 2025-10-08 09:45:40.132610373 +0000 UTC m=+0.286459719 container remove 25d005aba565c0018a0feceadb2ac50b47e1b3ee4611f5dde4e11c9f3b967751 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_keldysh, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  8 05:45:40 np0005475493 systemd[1]: libpod-conmon-25d005aba565c0018a0feceadb2ac50b47e1b3ee4611f5dde4e11c9f3b967751.scope: Deactivated successfully.
Oct  8 05:45:40 np0005475493 python3[87835]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 787292cc-8154-50c4-9e00-e9be3e817149 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/dashboard/compute-0.ixicfj/server_addr 192.168.122.100#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  8 05:45:40 np0005475493 podman[87843]: 2025-10-08 09:45:40.290897578 +0000 UTC m=+0.036940610 container create 36c83e3c93384f582896714b60cd9638a1f4af38bbf8aa48be133d29b0cb4579 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_elbakyan, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  8 05:45:40 np0005475493 podman[87844]: 2025-10-08 09:45:40.362783444 +0000 UTC m=+0.108996201 container create d4da444ac1ba4d0bc1bda31b4166ebb3e30236af35c0f712776d258d9537756e (image=quay.io/ceph/ceph:v19, name=happy_elion, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Oct  8 05:45:40 np0005475493 podman[87843]: 2025-10-08 09:45:40.273814782 +0000 UTC m=+0.019857844 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 05:45:40 np0005475493 podman[87844]: 2025-10-08 09:45:40.277590473 +0000 UTC m=+0.023803330 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  8 05:45:40 np0005475493 systemd[1]: Started libpod-conmon-36c83e3c93384f582896714b60cd9638a1f4af38bbf8aa48be133d29b0cb4579.scope.
Oct  8 05:45:40 np0005475493 systemd[1]: Started libpod-conmon-d4da444ac1ba4d0bc1bda31b4166ebb3e30236af35c0f712776d258d9537756e.scope.
Oct  8 05:45:40 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:45:40 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:45:40 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b583f1fd72d0aa9a04bd2f21117db8d870025b6088e30845027bc4c14031ddec/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  8 05:45:40 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0fb4212c556d0d60c9f05cf637f2b3051141f9b2ece24676ab2bbfab8f090342/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 05:45:40 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0fb4212c556d0d60c9f05cf637f2b3051141f9b2ece24676ab2bbfab8f090342/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct  8 05:45:40 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0fb4212c556d0d60c9f05cf637f2b3051141f9b2ece24676ab2bbfab8f090342/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 05:45:40 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b583f1fd72d0aa9a04bd2f21117db8d870025b6088e30845027bc4c14031ddec/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 05:45:40 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b583f1fd72d0aa9a04bd2f21117db8d870025b6088e30845027bc4c14031ddec/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 05:45:40 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b583f1fd72d0aa9a04bd2f21117db8d870025b6088e30845027bc4c14031ddec/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  8 05:45:40 np0005475493 podman[87844]: 2025-10-08 09:45:40.537513703 +0000 UTC m=+0.283726530 container init d4da444ac1ba4d0bc1bda31b4166ebb3e30236af35c0f712776d258d9537756e (image=quay.io/ceph/ceph:v19, name=happy_elion, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True)
Oct  8 05:45:40 np0005475493 podman[87844]: 2025-10-08 09:45:40.549180376 +0000 UTC m=+0.295393133 container start d4da444ac1ba4d0bc1bda31b4166ebb3e30236af35c0f712776d258d9537756e (image=quay.io/ceph/ceph:v19, name=happy_elion, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Oct  8 05:45:40 np0005475493 podman[87843]: 2025-10-08 09:45:40.592361385 +0000 UTC m=+0.338404507 container init 36c83e3c93384f582896714b60cd9638a1f4af38bbf8aa48be133d29b0cb4579 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_elbakyan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid)
Oct  8 05:45:40 np0005475493 podman[87843]: 2025-10-08 09:45:40.602250571 +0000 UTC m=+0.348293653 container start 36c83e3c93384f582896714b60cd9638a1f4af38bbf8aa48be133d29b0cb4579 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_elbakyan, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  8 05:45:40 np0005475493 podman[87843]: 2025-10-08 09:45:40.640208813 +0000 UTC m=+0.386251955 container attach 36c83e3c93384f582896714b60cd9638a1f4af38bbf8aa48be133d29b0cb4579 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_elbakyan, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  8 05:45:40 np0005475493 podman[87844]: 2025-10-08 09:45:40.689942311 +0000 UTC m=+0.436155148 container attach d4da444ac1ba4d0bc1bda31b4166ebb3e30236af35c0f712776d258d9537756e (image=quay.io/ceph/ceph:v19, name=happy_elion, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default)
Oct  8 05:45:40 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/compute-0.ixicfj/server_addr}] v 0)
Oct  8 05:45:40 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v97: 193 pgs: 57 peering, 136 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Oct  8 05:45:41 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/8501056' entity='client.admin' 
Oct  8 05:45:41 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 4.4 scrub starts
Oct  8 05:45:41 np0005475493 systemd[1]: libpod-d4da444ac1ba4d0bc1bda31b4166ebb3e30236af35c0f712776d258d9537756e.scope: Deactivated successfully.
Oct  8 05:45:41 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 4.4 scrub ok
Oct  8 05:45:41 np0005475493 podman[87945]: 2025-10-08 09:45:41.065904398 +0000 UTC m=+0.025379102 container died d4da444ac1ba4d0bc1bda31b4166ebb3e30236af35c0f712776d258d9537756e (image=quay.io/ceph/ceph:v19, name=happy_elion, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  8 05:45:41 np0005475493 lvm[87988]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct  8 05:45:41 np0005475493 lvm[87988]: VG ceph_vg0 finished
Oct  8 05:45:41 np0005475493 ceph-mon[73572]: from='client.? 192.168.122.100:0/8501056' entity='client.admin' 
Oct  8 05:45:41 np0005475493 systemd[1]: var-lib-containers-storage-overlay-0fb4212c556d0d60c9f05cf637f2b3051141f9b2ece24676ab2bbfab8f090342-merged.mount: Deactivated successfully.
Oct  8 05:45:41 np0005475493 podman[87945]: 2025-10-08 09:45:41.352771799 +0000 UTC m=+0.312246503 container remove d4da444ac1ba4d0bc1bda31b4166ebb3e30236af35c0f712776d258d9537756e (image=quay.io/ceph/ceph:v19, name=happy_elion, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Oct  8 05:45:41 np0005475493 lucid_elbakyan[87875]: {}
Oct  8 05:45:41 np0005475493 systemd[1]: libpod-conmon-d4da444ac1ba4d0bc1bda31b4166ebb3e30236af35c0f712776d258d9537756e.scope: Deactivated successfully.
Oct  8 05:45:41 np0005475493 systemd[1]: libpod-36c83e3c93384f582896714b60cd9638a1f4af38bbf8aa48be133d29b0cb4579.scope: Deactivated successfully.
Oct  8 05:45:41 np0005475493 systemd[1]: libpod-36c83e3c93384f582896714b60cd9638a1f4af38bbf8aa48be133d29b0cb4579.scope: Consumed 1.063s CPU time.
Oct  8 05:45:41 np0005475493 podman[87990]: 2025-10-08 09:45:41.41607596 +0000 UTC m=+0.023419598 container died 36c83e3c93384f582896714b60cd9638a1f4af38bbf8aa48be133d29b0cb4579 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_elbakyan, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  8 05:45:41 np0005475493 systemd[1]: var-lib-containers-storage-overlay-b583f1fd72d0aa9a04bd2f21117db8d870025b6088e30845027bc4c14031ddec-merged.mount: Deactivated successfully.
Oct  8 05:45:41 np0005475493 podman[87990]: 2025-10-08 09:45:41.455990195 +0000 UTC m=+0.063333833 container remove 36c83e3c93384f582896714b60cd9638a1f4af38bbf8aa48be133d29b0cb4579 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_elbakyan, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True)
Oct  8 05:45:41 np0005475493 systemd[1]: libpod-conmon-36c83e3c93384f582896714b60cd9638a1f4af38bbf8aa48be133d29b0cb4579.scope: Deactivated successfully.
Oct  8 05:45:41 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  8 05:45:41 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:45:41 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  8 05:45:41 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:45:41 np0005475493 ceph-mgr[73869]: [progress INFO root] update: starting ev 84ae7ebc-c8b9-4226-9ef4-d352c70615bc (Updating rgw.rgw deployment (+3 -> 3))
Oct  8 05:45:41 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.pgshil", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} v 0)
Oct  8 05:45:41 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.pgshil", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Oct  8 05:45:41 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.pgshil", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Oct  8 05:45:41 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=rgw_frontends}] v 0)
Oct  8 05:45:41 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:45:41 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  8 05:45:41 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  8 05:45:41 np0005475493 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Deploying daemon rgw.rgw.compute-2.pgshil on compute-2
Oct  8 05:45:41 np0005475493 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Deploying daemon rgw.rgw.compute-2.pgshil on compute-2
Oct  8 05:45:41 np0005475493 ceph-mgr[73869]: [progress INFO root] Writing back 12 completed events
Oct  8 05:45:41 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct  8 05:45:41 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:45:41 np0005475493 ceph-mgr[73869]: [progress WARNING root] Starting Global Recovery Event,57 pgs not in active + clean state
Oct  8 05:45:42 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 5.5 scrub starts
Oct  8 05:45:42 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 5.5 scrub ok
Oct  8 05:45:42 np0005475493 python3[88029]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 787292cc-8154-50c4-9e00-e9be3e817149 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/dashboard/compute-1.swlvov/server_addr 192.168.122.101#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  8 05:45:42 np0005475493 podman[88030]: 2025-10-08 09:45:42.30789712 +0000 UTC m=+0.039337987 container create b975e5337497efbc70ca4368aef9ea909a256134619cd7a365a10ea70c19e02e (image=quay.io/ceph/ceph:v19, name=bold_williamson, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  8 05:45:42 np0005475493 systemd[1]: Started libpod-conmon-b975e5337497efbc70ca4368aef9ea909a256134619cd7a365a10ea70c19e02e.scope.
Oct  8 05:45:42 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:45:42 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:45:42 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.pgshil", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Oct  8 05:45:42 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.pgshil", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Oct  8 05:45:42 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:45:42 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:45:42 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:45:42 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d032fc449a728361b35c05ad4c53d340ad8911472f4cb93690b9759f8109bd7/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 05:45:42 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d032fc449a728361b35c05ad4c53d340ad8911472f4cb93690b9759f8109bd7/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 05:45:42 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d032fc449a728361b35c05ad4c53d340ad8911472f4cb93690b9759f8109bd7/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct  8 05:45:42 np0005475493 podman[88030]: 2025-10-08 09:45:42.291646911 +0000 UTC m=+0.023087798 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  8 05:45:42 np0005475493 podman[88030]: 2025-10-08 09:45:42.386719378 +0000 UTC m=+0.118160255 container init b975e5337497efbc70ca4368aef9ea909a256134619cd7a365a10ea70c19e02e (image=quay.io/ceph/ceph:v19, name=bold_williamson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1)
Oct  8 05:45:42 np0005475493 podman[88030]: 2025-10-08 09:45:42.392939276 +0000 UTC m=+0.124380153 container start b975e5337497efbc70ca4368aef9ea909a256134619cd7a365a10ea70c19e02e (image=quay.io/ceph/ceph:v19, name=bold_williamson, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  8 05:45:42 np0005475493 podman[88030]: 2025-10-08 09:45:42.395630062 +0000 UTC m=+0.127070929 container attach b975e5337497efbc70ca4368aef9ea909a256134619cd7a365a10ea70c19e02e (image=quay.io/ceph/ceph:v19, name=bold_williamson, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Oct  8 05:45:42 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/compute-1.swlvov/server_addr}] v 0)
Oct  8 05:45:42 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1595921047' entity='client.admin' 
Oct  8 05:45:42 np0005475493 systemd[1]: libpod-b975e5337497efbc70ca4368aef9ea909a256134619cd7a365a10ea70c19e02e.scope: Deactivated successfully.
Oct  8 05:45:42 np0005475493 podman[88030]: 2025-10-08 09:45:42.765999309 +0000 UTC m=+0.497440206 container died b975e5337497efbc70ca4368aef9ea909a256134619cd7a365a10ea70c19e02e (image=quay.io/ceph/ceph:v19, name=bold_williamson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Oct  8 05:45:42 np0005475493 systemd[1]: var-lib-containers-storage-overlay-2d032fc449a728361b35c05ad4c53d340ad8911472f4cb93690b9759f8109bd7-merged.mount: Deactivated successfully.
Oct  8 05:45:42 np0005475493 podman[88030]: 2025-10-08 09:45:42.803839468 +0000 UTC m=+0.535280345 container remove b975e5337497efbc70ca4368aef9ea909a256134619cd7a365a10ea70c19e02e (image=quay.io/ceph/ceph:v19, name=bold_williamson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct  8 05:45:42 np0005475493 systemd[1]: libpod-conmon-b975e5337497efbc70ca4368aef9ea909a256134619cd7a365a10ea70c19e02e.scope: Deactivated successfully.
Oct  8 05:45:42 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v98: 193 pgs: 57 peering, 136 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Oct  8 05:45:43 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 6.6 scrub starts
Oct  8 05:45:43 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 6.6 scrub ok
Oct  8 05:45:43 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e35 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  8 05:45:43 np0005475493 ceph-mon[73572]: Deploying daemon rgw.rgw.compute-2.pgshil on compute-2
Oct  8 05:45:43 np0005475493 ceph-mon[73572]: from='client.? 192.168.122.100:0/1595921047' entity='client.admin' 
Oct  8 05:45:43 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct  8 05:45:43 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:45:43 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct  8 05:45:43 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:45:43 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Oct  8 05:45:43 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:45:43 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.aaugis", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} v 0)
Oct  8 05:45:43 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.aaugis", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Oct  8 05:45:43 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.aaugis", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Oct  8 05:45:43 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=rgw_frontends}] v 0)
Oct  8 05:45:43 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:45:43 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  8 05:45:43 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  8 05:45:43 np0005475493 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Deploying daemon rgw.rgw.compute-1.aaugis on compute-1
Oct  8 05:45:43 np0005475493 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Deploying daemon rgw.rgw.compute-1.aaugis on compute-1
Oct  8 05:45:43 np0005475493 python3[88109]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 787292cc-8154-50c4-9e00-e9be3e817149 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/dashboard/compute-2.mtagwx/server_addr 192.168.122.102#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  8 05:45:43 np0005475493 podman[88110]: 2025-10-08 09:45:43.787749838 +0000 UTC m=+0.034397698 container create 1c50d3257d007e84467d629cceef6f800aefe4ea23729a5b250cc2353151ed76 (image=quay.io/ceph/ceph:v19, name=strange_dirac, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  8 05:45:43 np0005475493 systemd[1]: Started libpod-conmon-1c50d3257d007e84467d629cceef6f800aefe4ea23729a5b250cc2353151ed76.scope.
Oct  8 05:45:43 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:45:43 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da9594b96a8dc5717da9464ca3c313359c351d6b8b01b8b59229284948a0742c/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 05:45:43 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da9594b96a8dc5717da9464ca3c313359c351d6b8b01b8b59229284948a0742c/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct  8 05:45:43 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da9594b96a8dc5717da9464ca3c313359c351d6b8b01b8b59229284948a0742c/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 05:45:43 np0005475493 podman[88110]: 2025-10-08 09:45:43.860480871 +0000 UTC m=+0.107128751 container init 1c50d3257d007e84467d629cceef6f800aefe4ea23729a5b250cc2353151ed76 (image=quay.io/ceph/ceph:v19, name=strange_dirac, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  8 05:45:43 np0005475493 podman[88110]: 2025-10-08 09:45:43.867382472 +0000 UTC m=+0.114030322 container start 1c50d3257d007e84467d629cceef6f800aefe4ea23729a5b250cc2353151ed76 (image=quay.io/ceph/ceph:v19, name=strange_dirac, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct  8 05:45:43 np0005475493 podman[88110]: 2025-10-08 09:45:43.773895706 +0000 UTC m=+0.020543576 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  8 05:45:43 np0005475493 podman[88110]: 2025-10-08 09:45:43.870874314 +0000 UTC m=+0.117522184 container attach 1c50d3257d007e84467d629cceef6f800aefe4ea23729a5b250cc2353151ed76 (image=quay.io/ceph/ceph:v19, name=strange_dirac, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid)
Oct  8 05:45:44 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 3.2 scrub starts
Oct  8 05:45:44 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 3.2 scrub ok
Oct  8 05:45:44 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/compute-2.mtagwx/server_addr}] v 0)
Oct  8 05:45:44 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/277292669' entity='client.admin' 
Oct  8 05:45:44 np0005475493 systemd[1]: libpod-1c50d3257d007e84467d629cceef6f800aefe4ea23729a5b250cc2353151ed76.scope: Deactivated successfully.
Oct  8 05:45:44 np0005475493 podman[88110]: 2025-10-08 09:45:44.229530257 +0000 UTC m=+0.476178107 container died 1c50d3257d007e84467d629cceef6f800aefe4ea23729a5b250cc2353151ed76 (image=quay.io/ceph/ceph:v19, name=strange_dirac, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Oct  8 05:45:44 np0005475493 systemd[1]: var-lib-containers-storage-overlay-da9594b96a8dc5717da9464ca3c313359c351d6b8b01b8b59229284948a0742c-merged.mount: Deactivated successfully.
Oct  8 05:45:44 np0005475493 podman[88110]: 2025-10-08 09:45:44.259871186 +0000 UTC m=+0.506519036 container remove 1c50d3257d007e84467d629cceef6f800aefe4ea23729a5b250cc2353151ed76 (image=quay.io/ceph/ceph:v19, name=strange_dirac, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  8 05:45:44 np0005475493 systemd[1]: libpod-conmon-1c50d3257d007e84467d629cceef6f800aefe4ea23729a5b250cc2353151ed76.scope: Deactivated successfully.
Oct  8 05:45:44 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e35 do_prune osdmap full prune enabled
Oct  8 05:45:44 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:45:44 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:45:44 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:45:44 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.aaugis", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Oct  8 05:45:44 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.aaugis", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Oct  8 05:45:44 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:45:44 np0005475493 ceph-mon[73572]: from='client.? 192.168.122.100:0/277292669' entity='client.admin' 
Oct  8 05:45:44 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e36 e36: 3 total, 3 up, 3 in
Oct  8 05:45:44 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e36: 3 total, 3 up, 3 in
Oct  8 05:45:44 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"} v 0)
Oct  8 05:45:44 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.pgshil' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Oct  8 05:45:44 np0005475493 python3[88186]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 787292cc-8154-50c4-9e00-e9be3e817149 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mgr module disable dashboard _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  8 05:45:44 np0005475493 podman[88187]: 2025-10-08 09:45:44.622816586 +0000 UTC m=+0.034325887 container create b7d727fdcd8eeb45d37d714db92dbd2fcc957f31c016c023bb229058403abbf0 (image=quay.io/ceph/ceph:v19, name=hardcore_proskuriakova, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  8 05:45:44 np0005475493 systemd[1]: Started libpod-conmon-b7d727fdcd8eeb45d37d714db92dbd2fcc957f31c016c023bb229058403abbf0.scope.
Oct  8 05:45:44 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:45:44 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94482dda6eb18080e9a2cb1d41ab72002a31e3633b00ede41b4129b913b8a75a/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 05:45:44 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94482dda6eb18080e9a2cb1d41ab72002a31e3633b00ede41b4129b913b8a75a/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 05:45:44 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94482dda6eb18080e9a2cb1d41ab72002a31e3633b00ede41b4129b913b8a75a/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct  8 05:45:44 np0005475493 podman[88187]: 2025-10-08 09:45:44.687014066 +0000 UTC m=+0.098523387 container init b7d727fdcd8eeb45d37d714db92dbd2fcc957f31c016c023bb229058403abbf0 (image=quay.io/ceph/ceph:v19, name=hardcore_proskuriakova, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Oct  8 05:45:44 np0005475493 podman[88187]: 2025-10-08 09:45:44.69244182 +0000 UTC m=+0.103951121 container start b7d727fdcd8eeb45d37d714db92dbd2fcc957f31c016c023bb229058403abbf0 (image=quay.io/ceph/ceph:v19, name=hardcore_proskuriakova, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Oct  8 05:45:44 np0005475493 podman[88187]: 2025-10-08 09:45:44.696593453 +0000 UTC m=+0.108102754 container attach b7d727fdcd8eeb45d37d714db92dbd2fcc957f31c016c023bb229058403abbf0 (image=quay.io/ceph/ceph:v19, name=hardcore_proskuriakova, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct  8 05:45:44 np0005475493 podman[88187]: 2025-10-08 09:45:44.607961322 +0000 UTC m=+0.019470643 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  8 05:45:44 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 36 pg[8.0( empty local-lis/les=0/0 n=0 ec=36/36 lis/c=0/0 les/c/f=0/0/0 sis=36) [1] r=0 lpr=36 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:45:44 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v100: 194 pgs: 1 unknown, 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct  8 05:45:45 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 3.1 scrub starts
Oct  8 05:45:45 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module disable", "module": "dashboard"} v 0)
Oct  8 05:45:45 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3257796446' entity='client.admin' cmd=[{"prefix": "mgr module disable", "module": "dashboard"}]: dispatch
Oct  8 05:45:45 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 3.1 scrub ok
Oct  8 05:45:45 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct  8 05:45:45 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:45:45 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct  8 05:45:45 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:45:45 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Oct  8 05:45:45 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:45:45 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.wdkdxi", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} v 0)
Oct  8 05:45:45 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.wdkdxi", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Oct  8 05:45:45 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.wdkdxi", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Oct  8 05:45:45 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=rgw_frontends}] v 0)
Oct  8 05:45:45 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:45:45 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  8 05:45:45 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  8 05:45:45 np0005475493 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Deploying daemon rgw.rgw.compute-0.wdkdxi on compute-0
Oct  8 05:45:45 np0005475493 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Deploying daemon rgw.rgw.compute-0.wdkdxi on compute-0
Oct  8 05:45:45 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e36 do_prune osdmap full prune enabled
Oct  8 05:45:45 np0005475493 ceph-mon[73572]: log_channel(cluster) log [WRN] : Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct  8 05:45:45 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.pgshil' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Oct  8 05:45:45 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e37 e37: 3 total, 3 up, 3 in
Oct  8 05:45:45 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e37: 3 total, 3 up, 3 in
Oct  8 05:45:45 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 37 pg[8.0( empty local-lis/les=36/37 n=0 ec=36/36 lis/c=0/0 les/c/f=0/0/0 sis=36) [1] r=0 lpr=36 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:45:45 np0005475493 ceph-mon[73572]: Deploying daemon rgw.rgw.compute-1.aaugis on compute-1
Oct  8 05:45:45 np0005475493 ceph-mon[73572]: from='client.? 192.168.122.102:0/947715731' entity='client.rgw.rgw.compute-2.pgshil' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Oct  8 05:45:45 np0005475493 ceph-mon[73572]: from='client.? ' entity='client.rgw.rgw.compute-2.pgshil' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Oct  8 05:45:45 np0005475493 ceph-mon[73572]: from='client.? 192.168.122.100:0/3257796446' entity='client.admin' cmd=[{"prefix": "mgr module disable", "module": "dashboard"}]: dispatch
Oct  8 05:45:45 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:45:45 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:45:45 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:45:45 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.wdkdxi", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Oct  8 05:45:45 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.wdkdxi", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Oct  8 05:45:45 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:45:45 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3257796446' entity='client.admin' cmd='[{"prefix": "mgr module disable", "module": "dashboard"}]': finished
Oct  8 05:45:45 np0005475493 hardcore_proskuriakova[88202]: module 'dashboard' is already disabled
Oct  8 05:45:45 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : mgrmap e12: compute-0.ixicfj(active, since 2m), standbys: compute-2.mtagwx, compute-1.swlvov
Oct  8 05:45:45 np0005475493 systemd[1]: libpod-b7d727fdcd8eeb45d37d714db92dbd2fcc957f31c016c023bb229058403abbf0.scope: Deactivated successfully.
Oct  8 05:45:45 np0005475493 podman[88187]: 2025-10-08 09:45:45.537210756 +0000 UTC m=+0.948720057 container died b7d727fdcd8eeb45d37d714db92dbd2fcc957f31c016c023bb229058403abbf0 (image=quay.io/ceph/ceph:v19, name=hardcore_proskuriakova, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct  8 05:45:45 np0005475493 systemd[1]: var-lib-containers-storage-overlay-94482dda6eb18080e9a2cb1d41ab72002a31e3633b00ede41b4129b913b8a75a-merged.mount: Deactivated successfully.
Oct  8 05:45:45 np0005475493 podman[88187]: 2025-10-08 09:45:45.572058648 +0000 UTC m=+0.983567949 container remove b7d727fdcd8eeb45d37d714db92dbd2fcc957f31c016c023bb229058403abbf0 (image=quay.io/ceph/ceph:v19, name=hardcore_proskuriakova, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  8 05:45:45 np0005475493 systemd[1]: libpod-conmon-b7d727fdcd8eeb45d37d714db92dbd2fcc957f31c016c023bb229058403abbf0.scope: Deactivated successfully.
Oct  8 05:45:45 np0005475493 podman[88361]: 2025-10-08 09:45:45.786389333 +0000 UTC m=+0.047862437 container create 1ef22460d184a24fb50b52bf50cfb3bcae5c05c090d4be77618b8e54e1f5a3ff (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_mclean, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  8 05:45:45 np0005475493 systemd[1]: Started libpod-conmon-1ef22460d184a24fb50b52bf50cfb3bcae5c05c090d4be77618b8e54e1f5a3ff.scope.
Oct  8 05:45:45 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:45:45 np0005475493 podman[88361]: 2025-10-08 09:45:45.758414302 +0000 UTC m=+0.019887416 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 05:45:45 np0005475493 podman[88361]: 2025-10-08 09:45:45.86407361 +0000 UTC m=+0.125546714 container init 1ef22460d184a24fb50b52bf50cfb3bcae5c05c090d4be77618b8e54e1f5a3ff (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_mclean, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  8 05:45:45 np0005475493 podman[88361]: 2025-10-08 09:45:45.870965139 +0000 UTC m=+0.132438233 container start 1ef22460d184a24fb50b52bf50cfb3bcae5c05c090d4be77618b8e54e1f5a3ff (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_mclean, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  8 05:45:45 np0005475493 python3[88360]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 787292cc-8154-50c4-9e00-e9be3e817149 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mgr module enable dashboard _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  8 05:45:45 np0005475493 sleepy_mclean[88378]: 167 167
Oct  8 05:45:45 np0005475493 podman[88361]: 2025-10-08 09:45:45.874119205 +0000 UTC m=+0.135592329 container attach 1ef22460d184a24fb50b52bf50cfb3bcae5c05c090d4be77618b8e54e1f5a3ff (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_mclean, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  8 05:45:45 np0005475493 systemd[1]: libpod-1ef22460d184a24fb50b52bf50cfb3bcae5c05c090d4be77618b8e54e1f5a3ff.scope: Deactivated successfully.
Oct  8 05:45:45 np0005475493 podman[88361]: 2025-10-08 09:45:45.875216769 +0000 UTC m=+0.136689863 container died 1ef22460d184a24fb50b52bf50cfb3bcae5c05c090d4be77618b8e54e1f5a3ff (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_mclean, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  8 05:45:45 np0005475493 systemd[1]: var-lib-containers-storage-overlay-8e744f99c274767563f927a72ec34de532bd3de066f2d2fc98c1ee34bea0cb48-merged.mount: Deactivated successfully.
Oct  8 05:45:45 np0005475493 podman[88361]: 2025-10-08 09:45:45.918413484 +0000 UTC m=+0.179886578 container remove 1ef22460d184a24fb50b52bf50cfb3bcae5c05c090d4be77618b8e54e1f5a3ff (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_mclean, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Oct  8 05:45:45 np0005475493 systemd[1]: libpod-conmon-1ef22460d184a24fb50b52bf50cfb3bcae5c05c090d4be77618b8e54e1f5a3ff.scope: Deactivated successfully.
Oct  8 05:45:45 np0005475493 podman[88383]: 2025-10-08 09:45:45.948373657 +0000 UTC m=+0.061768982 container create b32eaca0f8078b83ac41a0335a4c5487d5becc61a67adb7cd9ee6ff0d105d419 (image=quay.io/ceph/ceph:v19, name=youthful_bardeen, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  8 05:45:45 np0005475493 systemd[1]: Started libpod-conmon-b32eaca0f8078b83ac41a0335a4c5487d5becc61a67adb7cd9ee6ff0d105d419.scope.
Oct  8 05:45:45 np0005475493 systemd[1]: Reloading.
Oct  8 05:45:46 np0005475493 podman[88383]: 2025-10-08 09:45:45.929203452 +0000 UTC m=+0.042598797 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  8 05:45:46 np0005475493 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  8 05:45:46 np0005475493 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  8 05:45:46 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 6.4 scrub starts
Oct  8 05:45:46 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 6.4 scrub ok
Oct  8 05:45:46 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:45:46 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec3489bf9e0d9f1c165d1f2175d8c75494fc562480b67202ef7ef8da0e8ca50f/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct  8 05:45:46 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec3489bf9e0d9f1c165d1f2175d8c75494fc562480b67202ef7ef8da0e8ca50f/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 05:45:46 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec3489bf9e0d9f1c165d1f2175d8c75494fc562480b67202ef7ef8da0e8ca50f/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 05:45:46 np0005475493 podman[88383]: 2025-10-08 09:45:46.27821436 +0000 UTC m=+0.391609675 container init b32eaca0f8078b83ac41a0335a4c5487d5becc61a67adb7cd9ee6ff0d105d419 (image=quay.io/ceph/ceph:v19, name=youthful_bardeen, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  8 05:45:46 np0005475493 podman[88383]: 2025-10-08 09:45:46.292211626 +0000 UTC m=+0.405606951 container start b32eaca0f8078b83ac41a0335a4c5487d5becc61a67adb7cd9ee6ff0d105d419 (image=quay.io/ceph/ceph:v19, name=youthful_bardeen, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  8 05:45:46 np0005475493 podman[88383]: 2025-10-08 09:45:46.300004373 +0000 UTC m=+0.413399708 container attach b32eaca0f8078b83ac41a0335a4c5487d5becc61a67adb7cd9ee6ff0d105d419 (image=quay.io/ceph/ceph:v19, name=youthful_bardeen, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  8 05:45:46 np0005475493 systemd[1]: Reloading.
Oct  8 05:45:46 np0005475493 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  8 05:45:46 np0005475493 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  8 05:45:46 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e37 do_prune osdmap full prune enabled
Oct  8 05:45:46 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e38 e38: 3 total, 3 up, 3 in
Oct  8 05:45:46 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e38: 3 total, 3 up, 3 in
Oct  8 05:45:46 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 38 pg[9.0( empty local-lis/les=0/0 n=0 ec=38/38 lis/c=0/0 les/c/f=0/0/0 sis=38) [1] r=0 lpr=38 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:45:46 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} v 0)
Oct  8 05:45:46 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.pgshil' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Oct  8 05:45:46 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} v 0)
Oct  8 05:45:46 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.aaugis' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Oct  8 05:45:46 np0005475493 ceph-mon[73572]: Deploying daemon rgw.rgw.compute-0.wdkdxi on compute-0
Oct  8 05:45:46 np0005475493 ceph-mon[73572]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct  8 05:45:46 np0005475493 ceph-mon[73572]: from='client.? ' entity='client.rgw.rgw.compute-2.pgshil' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Oct  8 05:45:46 np0005475493 ceph-mon[73572]: from='client.? 192.168.122.100:0/3257796446' entity='client.admin' cmd='[{"prefix": "mgr module disable", "module": "dashboard"}]': finished
Oct  8 05:45:46 np0005475493 ceph-mon[73572]: from='client.? 192.168.122.102:0/4200026288' entity='client.rgw.rgw.compute-2.pgshil' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Oct  8 05:45:46 np0005475493 ceph-mon[73572]: from='client.? ' entity='client.rgw.rgw.compute-2.pgshil' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Oct  8 05:45:46 np0005475493 ceph-mon[73572]: from='client.? ' entity='client.rgw.rgw.compute-1.aaugis' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Oct  8 05:45:46 np0005475493 ceph-mon[73572]: from='client.? 192.168.122.101:0/1900470648' entity='client.rgw.rgw.compute-1.aaugis' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Oct  8 05:45:46 np0005475493 systemd[1]: Starting Ceph rgw.rgw.compute-0.wdkdxi for 787292cc-8154-50c4-9e00-e9be3e817149...
Oct  8 05:45:46 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module enable", "module": "dashboard"} v 0)
Oct  8 05:45:46 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/658446886' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch
Oct  8 05:45:46 np0005475493 podman[88556]: 2025-10-08 09:45:46.816951974 +0000 UTC m=+0.046517367 container create c6c7ccd8691da02c370a2b1b8f6e81e0e8a2c78d520d1f38bf935f7230fcff70 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-rgw-rgw-compute-0-wdkdxi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  8 05:45:46 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a68787ad48e9458d259bfa260d4ff4667ae81a05eeff6709f8952ea2e53d3187/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 05:45:46 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a68787ad48e9458d259bfa260d4ff4667ae81a05eeff6709f8952ea2e53d3187/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 05:45:46 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a68787ad48e9458d259bfa260d4ff4667ae81a05eeff6709f8952ea2e53d3187/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  8 05:45:46 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a68787ad48e9458d259bfa260d4ff4667ae81a05eeff6709f8952ea2e53d3187/merged/var/lib/ceph/radosgw/ceph-rgw.rgw.compute-0.wdkdxi supports timestamps until 2038 (0x7fffffff)
Oct  8 05:45:46 np0005475493 podman[88556]: 2025-10-08 09:45:46.869388011 +0000 UTC m=+0.098953414 container init c6c7ccd8691da02c370a2b1b8f6e81e0e8a2c78d520d1f38bf935f7230fcff70 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-rgw-rgw-compute-0-wdkdxi, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  8 05:45:46 np0005475493 podman[88556]: 2025-10-08 09:45:46.87363584 +0000 UTC m=+0.103201233 container start c6c7ccd8691da02c370a2b1b8f6e81e0e8a2c78d520d1f38bf935f7230fcff70 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-rgw-rgw-compute-0-wdkdxi, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Oct  8 05:45:46 np0005475493 bash[88556]: c6c7ccd8691da02c370a2b1b8f6e81e0e8a2c78d520d1f38bf935f7230fcff70
Oct  8 05:45:46 np0005475493 podman[88556]: 2025-10-08 09:45:46.79874905 +0000 UTC m=+0.028314493 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 05:45:46 np0005475493 systemd[1]: Started Ceph rgw.rgw.compute-0.wdkdxi for 787292cc-8154-50c4-9e00-e9be3e817149.
Oct  8 05:45:46 np0005475493 radosgw[88577]: deferred set uid:gid to 167:167 (ceph:ceph)
Oct  8 05:45:46 np0005475493 radosgw[88577]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process radosgw, pid 2
Oct  8 05:45:46 np0005475493 radosgw[88577]: framework: beast
Oct  8 05:45:46 np0005475493 radosgw[88577]: framework conf key: endpoint, val: 192.168.122.100:8082
Oct  8 05:45:46 np0005475493 radosgw[88577]: init_numa not setting numa affinity
Oct  8 05:45:46 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  8 05:45:46 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:45:46 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  8 05:45:46 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v103: 195 pgs: 2 unknown, 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct  8 05:45:46 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:45:46 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Oct  8 05:45:46 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:45:46 np0005475493 ceph-mgr[73869]: [progress INFO root] complete: finished ev 84ae7ebc-c8b9-4226-9ef4-d352c70615bc (Updating rgw.rgw deployment (+3 -> 3))
Oct  8 05:45:46 np0005475493 ceph-mgr[73869]: [progress INFO root] Completed event 84ae7ebc-c8b9-4226-9ef4-d352c70615bc (Updating rgw.rgw deployment (+3 -> 3)) in 5 seconds
Oct  8 05:45:46 np0005475493 ceph-mgr[73869]: [cephadm INFO cephadm.services.cephadmservice] Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Oct  8 05:45:46 np0005475493 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Oct  8 05:45:46 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Oct  8 05:45:46 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:45:46 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Oct  8 05:45:47 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:45:47 np0005475493 ceph-mgr[73869]: [progress INFO root] update: starting ev 943a6973-6405-40f5-87ab-42ef16849f0e (Updating node-exporter deployment (+3 -> 3))
Oct  8 05:45:47 np0005475493 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Deploying daemon node-exporter.compute-0 on compute-0
Oct  8 05:45:47 np0005475493 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Deploying daemon node-exporter.compute-0 on compute-0
Oct  8 05:45:47 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 6.0 deep-scrub starts
Oct  8 05:45:47 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 6.0 deep-scrub ok
Oct  8 05:45:47 np0005475493 systemd[1]: Reloading.
Oct  8 05:45:47 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e38 do_prune osdmap full prune enabled
Oct  8 05:45:47 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.pgshil' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Oct  8 05:45:47 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.aaugis' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Oct  8 05:45:47 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e39 e39: 3 total, 3 up, 3 in
Oct  8 05:45:47 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e39: 3 total, 3 up, 3 in
Oct  8 05:45:47 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 39 pg[9.0( empty local-lis/les=38/39 n=0 ec=38/38 lis/c=0/0 les/c/f=0/0/0 sis=38) [1] r=0 lpr=38 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:45:47 np0005475493 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  8 05:45:47 np0005475493 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  8 05:45:47 np0005475493 ceph-mon[73572]: from='client.? 192.168.122.100:0/658446886' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch
Oct  8 05:45:47 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:45:47 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:45:47 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:45:47 np0005475493 ceph-mon[73572]: Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Oct  8 05:45:47 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:45:47 np0005475493 ceph-mon[73572]: from='mgr.14122 192.168.122.100:0/2108506543' entity='mgr.compute-0.ixicfj' 
Oct  8 05:45:47 np0005475493 ceph-mon[73572]: Deploying daemon node-exporter.compute-0 on compute-0
Oct  8 05:45:47 np0005475493 ceph-mon[73572]: from='client.? ' entity='client.rgw.rgw.compute-2.pgshil' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Oct  8 05:45:47 np0005475493 ceph-mon[73572]: from='client.? ' entity='client.rgw.rgw.compute-1.aaugis' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Oct  8 05:45:47 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/658446886' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished
Oct  8 05:45:47 np0005475493 ceph-mgr[73869]: mgr handle_mgr_map respawning because set of enabled modules changed!
Oct  8 05:45:47 np0005475493 ceph-mgr[73869]: mgr respawn  e: '/usr/bin/ceph-mgr'
Oct  8 05:45:47 np0005475493 ceph-mgr[73869]: mgr respawn  0: '/usr/bin/ceph-mgr'
Oct  8 05:45:47 np0005475493 ceph-mgr[73869]: mgr respawn  1: '-n'
Oct  8 05:45:47 np0005475493 ceph-mgr[73869]: mgr respawn  2: 'mgr.compute-0.ixicfj'
Oct  8 05:45:47 np0005475493 ceph-mgr[73869]: mgr respawn  3: '-f'
Oct  8 05:45:47 np0005475493 ceph-mgr[73869]: mgr respawn  4: '--setuser'
Oct  8 05:45:47 np0005475493 ceph-mgr[73869]: mgr respawn  5: 'ceph'
Oct  8 05:45:47 np0005475493 ceph-mgr[73869]: mgr respawn  6: '--setgroup'
Oct  8 05:45:47 np0005475493 ceph-mgr[73869]: mgr respawn  7: 'ceph'
Oct  8 05:45:47 np0005475493 ceph-mgr[73869]: mgr respawn  8: '--default-log-to-file=false'
Oct  8 05:45:47 np0005475493 ceph-mgr[73869]: mgr respawn  9: '--default-log-to-journald=true'
Oct  8 05:45:47 np0005475493 ceph-mgr[73869]: mgr respawn  10: '--default-log-to-stderr=false'
Oct  8 05:45:47 np0005475493 ceph-mgr[73869]: mgr respawn respawning with exe /usr/bin/ceph-mgr
Oct  8 05:45:47 np0005475493 ceph-mgr[73869]: mgr respawn  exe_path /proc/self/exe
Oct  8 05:45:47 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : mgrmap e13: compute-0.ixicfj(active, since 2m), standbys: compute-2.mtagwx, compute-1.swlvov
Oct  8 05:45:47 np0005475493 podman[88383]: 2025-10-08 09:45:47.573015585 +0000 UTC m=+1.686410930 container died b32eaca0f8078b83ac41a0335a4c5487d5becc61a67adb7cd9ee6ff0d105d419 (image=quay.io/ceph/ceph:v19, name=youthful_bardeen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  8 05:45:47 np0005475493 systemd[1]: libpod-b32eaca0f8078b83ac41a0335a4c5487d5becc61a67adb7cd9ee6ff0d105d419.scope: Deactivated successfully.
Oct  8 05:45:47 np0005475493 systemd[1]: session-32.scope: Deactivated successfully.
Oct  8 05:45:47 np0005475493 systemd[1]: session-24.scope: Deactivated successfully.
Oct  8 05:45:47 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ignoring --setuser ceph since I am not root
Oct  8 05:45:47 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ignoring --setgroup ceph since I am not root
Oct  8 05:45:47 np0005475493 systemd[1]: var-lib-containers-storage-overlay-ec3489bf9e0d9f1c165d1f2175d8c75494fc562480b67202ef7ef8da0e8ca50f-merged.mount: Deactivated successfully.
Oct  8 05:45:47 np0005475493 systemd[1]: session-25.scope: Deactivated successfully.
Oct  8 05:45:47 np0005475493 systemd[1]: session-27.scope: Deactivated successfully.
Oct  8 05:45:47 np0005475493 systemd[1]: session-26.scope: Deactivated successfully.
Oct  8 05:45:47 np0005475493 systemd[1]: session-22.scope: Deactivated successfully.
Oct  8 05:45:47 np0005475493 systemd[1]: session-33.scope: Deactivated successfully.
Oct  8 05:45:47 np0005475493 systemd[1]: session-29.scope: Deactivated successfully.
Oct  8 05:45:47 np0005475493 systemd[1]: session-31.scope: Deactivated successfully.
Oct  8 05:45:47 np0005475493 systemd[1]: session-28.scope: Deactivated successfully.
Oct  8 05:45:47 np0005475493 ceph-mgr[73869]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Oct  8 05:45:47 np0005475493 systemd[1]: session-30.scope: Deactivated successfully.
Oct  8 05:45:47 np0005475493 ceph-mgr[73869]: pidfile_write: ignore empty --pid-file
Oct  8 05:45:47 np0005475493 podman[88383]: 2025-10-08 09:45:47.68253242 +0000 UTC m=+1.795927735 container remove b32eaca0f8078b83ac41a0335a4c5487d5becc61a67adb7cd9ee6ff0d105d419 (image=quay.io/ceph/ceph:v19, name=youthful_bardeen, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  8 05:45:47 np0005475493 systemd-logind[798]: Removed session 29.
Oct  8 05:45:47 np0005475493 systemd[1]: libpod-conmon-b32eaca0f8078b83ac41a0335a4c5487d5becc61a67adb7cd9ee6ff0d105d419.scope: Deactivated successfully.
Oct  8 05:45:47 np0005475493 systemd-logind[798]: Removed session 22.
Oct  8 05:45:47 np0005475493 systemd-logind[798]: Removed session 31.
Oct  8 05:45:47 np0005475493 systemd-logind[798]: Removed session 33.
Oct  8 05:45:47 np0005475493 systemd-logind[798]: Removed session 32.
Oct  8 05:45:47 np0005475493 systemd-logind[798]: Removed session 24.
Oct  8 05:45:47 np0005475493 systemd-logind[798]: Session 26 logged out. Waiting for processes to exit.
Oct  8 05:45:47 np0005475493 systemd-logind[798]: Session 34 logged out. Waiting for processes to exit.
Oct  8 05:45:47 np0005475493 systemd-logind[798]: Session 25 logged out. Waiting for processes to exit.
Oct  8 05:45:47 np0005475493 systemd-logind[798]: Session 28 logged out. Waiting for processes to exit.
Oct  8 05:45:47 np0005475493 systemd-logind[798]: Session 30 logged out. Waiting for processes to exit.
Oct  8 05:45:47 np0005475493 systemd-logind[798]: Session 27 logged out. Waiting for processes to exit.
Oct  8 05:45:47 np0005475493 systemd-logind[798]: Removed session 25.
Oct  8 05:45:47 np0005475493 systemd-logind[798]: Removed session 27.
Oct  8 05:45:47 np0005475493 systemd-logind[798]: Removed session 26.
Oct  8 05:45:47 np0005475493 systemd-logind[798]: Removed session 28.
Oct  8 05:45:47 np0005475493 systemd-logind[798]: Removed session 30.
Oct  8 05:45:47 np0005475493 systemd[1]: Reloading.
Oct  8 05:45:47 np0005475493 ceph-mgr[73869]: mgr[py] Loading python module 'alerts'
Oct  8 05:45:47 np0005475493 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  8 05:45:47 np0005475493 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  8 05:45:47 np0005475493 ceph-mgr[73869]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct  8 05:45:47 np0005475493 ceph-mgr[73869]: mgr[py] Loading python module 'balancer'
Oct  8 05:45:47 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:45:47.824+0000 7f359c145140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct  8 05:45:47 np0005475493 ceph-mgr[73869]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct  8 05:45:47 np0005475493 ceph-mgr[73869]: mgr[py] Loading python module 'cephadm'
Oct  8 05:45:47 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:45:47.910+0000 7f359c145140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct  8 05:45:47 np0005475493 systemd[1]: Starting Ceph node-exporter.compute-0 for 787292cc-8154-50c4-9e00-e9be3e817149...
Oct  8 05:45:48 np0005475493 python3[89396]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 787292cc-8154-50c4-9e00-e9be3e817149 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   dashboard set-grafana-api-username admin _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  8 05:45:48 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 5.3 scrub starts
Oct  8 05:45:48 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 5.3 scrub ok
Oct  8 05:45:48 np0005475493 bash[89449]: Trying to pull quay.io/prometheus/node-exporter:v1.7.0...
Oct  8 05:45:48 np0005475493 podman[89438]: 2025-10-08 09:45:48.148045225 +0000 UTC m=+0.045740604 container create b1564ce47afdf437a22dfc4e23b8735ffc224b8264db47d08fe8e7c95e7902b7 (image=quay.io/ceph/ceph:v19, name=competent_babbage, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True)
Oct  8 05:45:48 np0005475493 systemd[1]: Started libpod-conmon-b1564ce47afdf437a22dfc4e23b8735ffc224b8264db47d08fe8e7c95e7902b7.scope.
Oct  8 05:45:48 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:45:48 np0005475493 podman[89438]: 2025-10-08 09:45:48.122947241 +0000 UTC m=+0.020642640 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  8 05:45:48 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e8bd965d281b1de97d6090e780ed71c35986901d13a335527fc6961a2bdd0c7/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 05:45:48 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e8bd965d281b1de97d6090e780ed71c35986901d13a335527fc6961a2bdd0c7/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 05:45:48 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e8bd965d281b1de97d6090e780ed71c35986901d13a335527fc6961a2bdd0c7/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct  8 05:45:48 np0005475493 podman[89438]: 2025-10-08 09:45:48.23163455 +0000 UTC m=+0.129329959 container init b1564ce47afdf437a22dfc4e23b8735ffc224b8264db47d08fe8e7c95e7902b7 (image=quay.io/ceph/ceph:v19, name=competent_babbage, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  8 05:45:48 np0005475493 podman[89438]: 2025-10-08 09:45:48.237699535 +0000 UTC m=+0.135394914 container start b1564ce47afdf437a22dfc4e23b8735ffc224b8264db47d08fe8e7c95e7902b7 (image=quay.io/ceph/ceph:v19, name=competent_babbage, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  8 05:45:48 np0005475493 podman[89438]: 2025-10-08 09:45:48.241939224 +0000 UTC m=+0.139634603 container attach b1564ce47afdf437a22dfc4e23b8735ffc224b8264db47d08fe8e7c95e7902b7 (image=quay.io/ceph/ceph:v19, name=competent_babbage, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Oct  8 05:45:48 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  8 05:45:48 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e39 do_prune osdmap full prune enabled
Oct  8 05:45:48 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e40 e40: 3 total, 3 up, 3 in
Oct  8 05:45:48 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e40: 3 total, 3 up, 3 in
Oct  8 05:45:48 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0)
Oct  8 05:45:48 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4157537618' entity='client.rgw.rgw.compute-0.wdkdxi' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Oct  8 05:45:48 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0)
Oct  8 05:45:48 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.aaugis' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Oct  8 05:45:48 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0)
Oct  8 05:45:48 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.pgshil' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Oct  8 05:45:48 np0005475493 ceph-mon[73572]: from='client.? 192.168.122.100:0/658446886' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished
Oct  8 05:45:48 np0005475493 ceph-mon[73572]: from='client.? 192.168.122.100:0/4157537618' entity='client.rgw.rgw.compute-0.wdkdxi' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Oct  8 05:45:48 np0005475493 ceph-mon[73572]: from='client.? ' entity='client.rgw.rgw.compute-1.aaugis' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Oct  8 05:45:48 np0005475493 ceph-mon[73572]: from='client.? 192.168.122.102:0/4200026288' entity='client.rgw.rgw.compute-2.pgshil' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Oct  8 05:45:48 np0005475493 ceph-mon[73572]: from='client.? ' entity='client.rgw.rgw.compute-2.pgshil' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Oct  8 05:45:48 np0005475493 ceph-mon[73572]: from='client.? 192.168.122.101:0/1900470648' entity='client.rgw.rgw.compute-1.aaugis' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Oct  8 05:45:48 np0005475493 bash[89449]: Getting image source signatures
Oct  8 05:45:48 np0005475493 bash[89449]: Copying blob sha256:324153f2810a9927fcce320af9e4e291e0b6e805cbdd1f338386c756b9defa24
Oct  8 05:45:48 np0005475493 bash[89449]: Copying blob sha256:2abcce694348cd2c949c0e98a7400ebdfd8341021bcf6b541bc72033ce982510
Oct  8 05:45:48 np0005475493 bash[89449]: Copying blob sha256:455fd88e5221bc1e278ef2d059cd70e4df99a24e5af050ede621534276f6cf9a
Oct  8 05:45:48 np0005475493 ceph-mgr[73869]: mgr[py] Loading python module 'crash'
Oct  8 05:45:48 np0005475493 ceph-mgr[73869]: mgr[py] Module crash has missing NOTIFY_TYPES member
Oct  8 05:45:48 np0005475493 ceph-mgr[73869]: mgr[py] Loading python module 'dashboard'
Oct  8 05:45:48 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:45:48.719+0000 7f359c145140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Oct  8 05:45:49 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 3.6 scrub starts
Oct  8 05:45:49 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 3.6 scrub ok
Oct  8 05:45:49 np0005475493 bash[89449]: Copying config sha256:72c9c208898624938c9e4183d6686ea4a5fd3f912bc29bc3f00147924c521a3e
Oct  8 05:45:49 np0005475493 bash[89449]: Writing manifest to image destination
Oct  8 05:45:49 np0005475493 podman[89449]: 2025-10-08 09:45:49.21486677 +0000 UTC m=+1.097011295 container create 0dbea514cc83cec397480471ade0bebb407738deaf60dfa42fe4b53fba64588f (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  8 05:45:49 np0005475493 podman[89449]: 2025-10-08 09:45:49.200078129 +0000 UTC m=+1.082222684 image pull 72c9c208898624938c9e4183d6686ea4a5fd3f912bc29bc3f00147924c521a3e quay.io/prometheus/node-exporter:v1.7.0
Oct  8 05:45:49 np0005475493 ceph-mgr[73869]: mgr[py] Loading python module 'devicehealth'
Oct  8 05:45:49 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54af9510d66390823c3b362131dbb950b9145f4e5b56d1ab94c9e3f0f29ca9ac/merged/etc/node-exporter supports timestamps until 2038 (0x7fffffff)
Oct  8 05:45:49 np0005475493 podman[89449]: 2025-10-08 09:45:49.270162073 +0000 UTC m=+1.152306618 container init 0dbea514cc83cec397480471ade0bebb407738deaf60dfa42fe4b53fba64588f (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  8 05:45:49 np0005475493 podman[89449]: 2025-10-08 09:45:49.274628029 +0000 UTC m=+1.156772554 container start 0dbea514cc83cec397480471ade0bebb407738deaf60dfa42fe4b53fba64588f (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  8 05:45:49 np0005475493 bash[89449]: 0dbea514cc83cec397480471ade0bebb407738deaf60dfa42fe4b53fba64588f
Oct  8 05:45:49 np0005475493 systemd[1]: Started Ceph node-exporter.compute-0 for 787292cc-8154-50c4-9e00-e9be3e817149.
Oct  8 05:45:49 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[89569]: ts=2025-10-08T09:45:49.287Z caller=node_exporter.go:192 level=info msg="Starting node_exporter" version="(version=1.7.0, branch=HEAD, revision=7333465abf9efba81876303bb57e6fadb946041b)"
Oct  8 05:45:49 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[89569]: ts=2025-10-08T09:45:49.287Z caller=node_exporter.go:193 level=info msg="Build context" build_context="(go=go1.21.4, platform=linux/amd64, user=root@35918982f6d8, date=20231112-23:53:35, tags=netgo osusergo static_build)"
Oct  8 05:45:49 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[89569]: ts=2025-10-08T09:45:49.287Z caller=diskstats_common.go:111 level=info collector=diskstats msg="Parsed flag --collector.diskstats.device-exclude" flag=^(ram|loop|fd|(h|s|v|xv)d[a-z]|nvme\d+n\d+p)\d+$
Oct  8 05:45:49 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[89569]: ts=2025-10-08T09:45:49.287Z caller=diskstats_linux.go:265 level=error collector=diskstats msg="Failed to open directory, disabling udev device properties" path=/run/udev/data
Oct  8 05:45:49 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[89569]: ts=2025-10-08T09:45:49.288Z caller=filesystem_common.go:111 level=info collector=filesystem msg="Parsed flag --collector.filesystem.mount-points-exclude" flag=^/(dev|proc|run/credentials/.+|sys|var/lib/docker/.+|var/lib/containers/storage/.+)($|/)
Oct  8 05:45:49 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[89569]: ts=2025-10-08T09:45:49.288Z caller=filesystem_common.go:113 level=info collector=filesystem msg="Parsed flag --collector.filesystem.fs-types-exclude" flag=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$
Oct  8 05:45:49 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[89569]: ts=2025-10-08T09:45:49.289Z caller=node_exporter.go:110 level=info msg="Enabled collectors"
Oct  8 05:45:49 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[89569]: ts=2025-10-08T09:45:49.289Z caller=node_exporter.go:117 level=info collector=arp
Oct  8 05:45:49 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[89569]: ts=2025-10-08T09:45:49.289Z caller=node_exporter.go:117 level=info collector=bcache
Oct  8 05:45:49 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[89569]: ts=2025-10-08T09:45:49.289Z caller=node_exporter.go:117 level=info collector=bonding
Oct  8 05:45:49 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[89569]: ts=2025-10-08T09:45:49.289Z caller=node_exporter.go:117 level=info collector=btrfs
Oct  8 05:45:49 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[89569]: ts=2025-10-08T09:45:49.289Z caller=node_exporter.go:117 level=info collector=conntrack
Oct  8 05:45:49 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[89569]: ts=2025-10-08T09:45:49.289Z caller=node_exporter.go:117 level=info collector=cpu
Oct  8 05:45:49 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[89569]: ts=2025-10-08T09:45:49.289Z caller=node_exporter.go:117 level=info collector=cpufreq
Oct  8 05:45:49 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[89569]: ts=2025-10-08T09:45:49.289Z caller=node_exporter.go:117 level=info collector=diskstats
Oct  8 05:45:49 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[89569]: ts=2025-10-08T09:45:49.289Z caller=node_exporter.go:117 level=info collector=dmi
Oct  8 05:45:49 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[89569]: ts=2025-10-08T09:45:49.289Z caller=node_exporter.go:117 level=info collector=edac
Oct  8 05:45:49 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[89569]: ts=2025-10-08T09:45:49.289Z caller=node_exporter.go:117 level=info collector=entropy
Oct  8 05:45:49 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[89569]: ts=2025-10-08T09:45:49.289Z caller=node_exporter.go:117 level=info collector=fibrechannel
Oct  8 05:45:49 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[89569]: ts=2025-10-08T09:45:49.289Z caller=node_exporter.go:117 level=info collector=filefd
Oct  8 05:45:49 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[89569]: ts=2025-10-08T09:45:49.289Z caller=node_exporter.go:117 level=info collector=filesystem
Oct  8 05:45:49 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[89569]: ts=2025-10-08T09:45:49.289Z caller=node_exporter.go:117 level=info collector=hwmon
Oct  8 05:45:49 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[89569]: ts=2025-10-08T09:45:49.289Z caller=node_exporter.go:117 level=info collector=infiniband
Oct  8 05:45:49 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[89569]: ts=2025-10-08T09:45:49.289Z caller=node_exporter.go:117 level=info collector=ipvs
Oct  8 05:45:49 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[89569]: ts=2025-10-08T09:45:49.289Z caller=node_exporter.go:117 level=info collector=loadavg
Oct  8 05:45:49 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[89569]: ts=2025-10-08T09:45:49.289Z caller=node_exporter.go:117 level=info collector=mdadm
Oct  8 05:45:49 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[89569]: ts=2025-10-08T09:45:49.289Z caller=node_exporter.go:117 level=info collector=meminfo
Oct  8 05:45:49 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[89569]: ts=2025-10-08T09:45:49.289Z caller=node_exporter.go:117 level=info collector=netclass
Oct  8 05:45:49 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[89569]: ts=2025-10-08T09:45:49.289Z caller=node_exporter.go:117 level=info collector=netdev
Oct  8 05:45:49 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[89569]: ts=2025-10-08T09:45:49.289Z caller=node_exporter.go:117 level=info collector=netstat
Oct  8 05:45:49 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[89569]: ts=2025-10-08T09:45:49.289Z caller=node_exporter.go:117 level=info collector=nfs
Oct  8 05:45:49 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[89569]: ts=2025-10-08T09:45:49.289Z caller=node_exporter.go:117 level=info collector=nfsd
Oct  8 05:45:49 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[89569]: ts=2025-10-08T09:45:49.289Z caller=node_exporter.go:117 level=info collector=nvme
Oct  8 05:45:49 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[89569]: ts=2025-10-08T09:45:49.289Z caller=node_exporter.go:117 level=info collector=os
Oct  8 05:45:49 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[89569]: ts=2025-10-08T09:45:49.289Z caller=node_exporter.go:117 level=info collector=powersupplyclass
Oct  8 05:45:49 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[89569]: ts=2025-10-08T09:45:49.289Z caller=node_exporter.go:117 level=info collector=pressure
Oct  8 05:45:49 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[89569]: ts=2025-10-08T09:45:49.289Z caller=node_exporter.go:117 level=info collector=rapl
Oct  8 05:45:49 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[89569]: ts=2025-10-08T09:45:49.289Z caller=node_exporter.go:117 level=info collector=schedstat
Oct  8 05:45:49 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[89569]: ts=2025-10-08T09:45:49.289Z caller=node_exporter.go:117 level=info collector=selinux
Oct  8 05:45:49 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[89569]: ts=2025-10-08T09:45:49.289Z caller=node_exporter.go:117 level=info collector=sockstat
Oct  8 05:45:49 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[89569]: ts=2025-10-08T09:45:49.289Z caller=node_exporter.go:117 level=info collector=softnet
Oct  8 05:45:49 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[89569]: ts=2025-10-08T09:45:49.289Z caller=node_exporter.go:117 level=info collector=stat
Oct  8 05:45:49 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[89569]: ts=2025-10-08T09:45:49.289Z caller=node_exporter.go:117 level=info collector=tapestats
Oct  8 05:45:49 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[89569]: ts=2025-10-08T09:45:49.289Z caller=node_exporter.go:117 level=info collector=textfile
Oct  8 05:45:49 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[89569]: ts=2025-10-08T09:45:49.289Z caller=node_exporter.go:117 level=info collector=thermal_zone
Oct  8 05:45:49 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[89569]: ts=2025-10-08T09:45:49.289Z caller=node_exporter.go:117 level=info collector=time
Oct  8 05:45:49 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[89569]: ts=2025-10-08T09:45:49.289Z caller=node_exporter.go:117 level=info collector=udp_queues
Oct  8 05:45:49 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[89569]: ts=2025-10-08T09:45:49.289Z caller=node_exporter.go:117 level=info collector=uname
Oct  8 05:45:49 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[89569]: ts=2025-10-08T09:45:49.289Z caller=node_exporter.go:117 level=info collector=vmstat
Oct  8 05:45:49 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[89569]: ts=2025-10-08T09:45:49.289Z caller=node_exporter.go:117 level=info collector=xfs
Oct  8 05:45:49 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[89569]: ts=2025-10-08T09:45:49.289Z caller=node_exporter.go:117 level=info collector=zfs
Oct  8 05:45:49 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[89569]: ts=2025-10-08T09:45:49.289Z caller=tls_config.go:274 level=info msg="Listening on" address=[::]:9100
Oct  8 05:45:49 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[89569]: ts=2025-10-08T09:45:49.290Z caller=tls_config.go:277 level=info msg="TLS is disabled." http2=false address=[::]:9100
Oct  8 05:45:49 np0005475493 ceph-mgr[73869]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct  8 05:45:49 np0005475493 ceph-mgr[73869]: mgr[py] Loading python module 'diskprediction_local'
Oct  8 05:45:49 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:45:49.324+0000 7f359c145140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct  8 05:45:49 np0005475493 systemd[1]: session-34.scope: Deactivated successfully.
Oct  8 05:45:49 np0005475493 systemd[1]: session-34.scope: Consumed 26.085s CPU time.
Oct  8 05:45:49 np0005475493 systemd-logind[798]: Removed session 34.
Oct  8 05:45:49 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e40 do_prune osdmap full prune enabled
Oct  8 05:45:49 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4157537618' entity='client.rgw.rgw.compute-0.wdkdxi' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Oct  8 05:45:49 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.aaugis' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Oct  8 05:45:49 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.pgshil' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Oct  8 05:45:49 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e41 e41: 3 total, 3 up, 3 in
Oct  8 05:45:49 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e41: 3 total, 3 up, 3 in
Oct  8 05:45:49 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Oct  8 05:45:49 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Oct  8 05:45:49 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]:  from numpy import show_config as show_numpy_config
Oct  8 05:45:49 np0005475493 ceph-mgr[73869]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct  8 05:45:49 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:45:49.488+0000 7f359c145140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct  8 05:45:49 np0005475493 ceph-mgr[73869]: mgr[py] Loading python module 'influx'
Oct  8 05:45:49 np0005475493 ceph-mgr[73869]: mgr[py] Module influx has missing NOTIFY_TYPES member
Oct  8 05:45:49 np0005475493 ceph-mgr[73869]: mgr[py] Loading python module 'insights'
Oct  8 05:45:49 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:45:49.557+0000 7f359c145140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Oct  8 05:45:49 np0005475493 ceph-mon[73572]: from='client.? 192.168.122.100:0/4157537618' entity='client.rgw.rgw.compute-0.wdkdxi' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Oct  8 05:45:49 np0005475493 ceph-mon[73572]: from='client.? ' entity='client.rgw.rgw.compute-1.aaugis' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Oct  8 05:45:49 np0005475493 ceph-mon[73572]: from='client.? ' entity='client.rgw.rgw.compute-2.pgshil' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Oct  8 05:45:49 np0005475493 ceph-mgr[73869]: mgr[py] Loading python module 'iostat'
Oct  8 05:45:49 np0005475493 ceph-mgr[73869]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct  8 05:45:49 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:45:49.736+0000 7f359c145140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct  8 05:45:49 np0005475493 ceph-mgr[73869]: mgr[py] Loading python module 'k8sevents'
Oct  8 05:45:50 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 3.7 scrub starts
Oct  8 05:45:50 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 3.7 scrub ok
Oct  8 05:45:50 np0005475493 ceph-mgr[73869]: mgr[py] Loading python module 'localpool'
Oct  8 05:45:50 np0005475493 ceph-mgr[73869]: mgr[py] Loading python module 'mds_autoscaler'
Oct  8 05:45:50 np0005475493 ceph-mgr[73869]: mgr[py] Loading python module 'mirroring'
Oct  8 05:45:50 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e41 do_prune osdmap full prune enabled
Oct  8 05:45:50 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e42 e42: 3 total, 3 up, 3 in
Oct  8 05:45:50 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e42: 3 total, 3 up, 3 in
Oct  8 05:45:50 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0)
Oct  8 05:45:50 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4157537618' entity='client.rgw.rgw.compute-0.wdkdxi' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Oct  8 05:45:50 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0)
Oct  8 05:45:50 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.aaugis' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Oct  8 05:45:50 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0)
Oct  8 05:45:50 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.pgshil' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Oct  8 05:45:50 np0005475493 ceph-mgr[73869]: mgr[py] Loading python module 'nfs'
Oct  8 05:45:50 np0005475493 ceph-mon[73572]: from='client.? 192.168.122.100:0/4157537618' entity='client.rgw.rgw.compute-0.wdkdxi' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Oct  8 05:45:50 np0005475493 ceph-mon[73572]: from='client.? ' entity='client.rgw.rgw.compute-1.aaugis' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Oct  8 05:45:50 np0005475493 ceph-mon[73572]: from='client.? 192.168.122.102:0/4200026288' entity='client.rgw.rgw.compute-2.pgshil' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Oct  8 05:45:50 np0005475493 ceph-mon[73572]: from='client.? ' entity='client.rgw.rgw.compute-2.pgshil' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Oct  8 05:45:50 np0005475493 ceph-mon[73572]: from='client.? 192.168.122.101:0/1900470648' entity='client.rgw.rgw.compute-1.aaugis' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Oct  8 05:45:50 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 42 pg[11.0( empty local-lis/les=0/0 n=0 ec=42/42 lis/c=0/0 les/c/f=0/0/0 sis=42) [1] r=0 lpr=42 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:45:50 np0005475493 ceph-mgr[73869]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct  8 05:45:50 np0005475493 ceph-mgr[73869]: mgr[py] Loading python module 'orchestrator'
Oct  8 05:45:50 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:45:50.744+0000 7f359c145140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct  8 05:45:50 np0005475493 ceph-mgr[73869]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct  8 05:45:50 np0005475493 ceph-mgr[73869]: mgr[py] Loading python module 'osd_perf_query'
Oct  8 05:45:50 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:45:50.971+0000 7f359c145140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct  8 05:45:51 np0005475493 ceph-mgr[73869]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct  8 05:45:51 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:45:51.044+0000 7f359c145140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct  8 05:45:51 np0005475493 ceph-mgr[73869]: mgr[py] Loading python module 'osd_support'
Oct  8 05:45:51 np0005475493 ceph-mgr[73869]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct  8 05:45:51 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:45:51.110+0000 7f359c145140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct  8 05:45:51 np0005475493 ceph-mgr[73869]: mgr[py] Loading python module 'pg_autoscaler'
Oct  8 05:45:51 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 4.0 scrub starts
Oct  8 05:45:51 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 4.0 scrub ok
Oct  8 05:45:51 np0005475493 ceph-mgr[73869]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct  8 05:45:51 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:45:51.194+0000 7f359c145140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct  8 05:45:51 np0005475493 ceph-mgr[73869]: mgr[py] Loading python module 'progress'
Oct  8 05:45:51 np0005475493 ceph-mgr[73869]: mgr[py] Module progress has missing NOTIFY_TYPES member
Oct  8 05:45:51 np0005475493 ceph-mgr[73869]: mgr[py] Loading python module 'prometheus'
Oct  8 05:45:51 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:45:51.272+0000 7f359c145140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Oct  8 05:45:51 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e42 do_prune osdmap full prune enabled
Oct  8 05:45:51 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4157537618' entity='client.rgw.rgw.compute-0.wdkdxi' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Oct  8 05:45:51 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.aaugis' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Oct  8 05:45:51 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.pgshil' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Oct  8 05:45:51 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e43 e43: 3 total, 3 up, 3 in
Oct  8 05:45:51 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e43: 3 total, 3 up, 3 in
Oct  8 05:45:51 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0)
Oct  8 05:45:51 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.aaugis' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Oct  8 05:45:51 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0)
Oct  8 05:45:51 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4157537618' entity='client.rgw.rgw.compute-0.wdkdxi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Oct  8 05:45:51 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0)
Oct  8 05:45:51 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.pgshil' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Oct  8 05:45:51 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 43 pg[11.0( empty local-lis/les=42/43 n=0 ec=42/42 lis/c=0/0 les/c/f=0/0/0 sis=42) [1] r=0 lpr=42 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:45:51 np0005475493 ceph-mon[73572]: from='client.? 192.168.122.100:0/4157537618' entity='client.rgw.rgw.compute-0.wdkdxi' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Oct  8 05:45:51 np0005475493 ceph-mon[73572]: from='client.? ' entity='client.rgw.rgw.compute-1.aaugis' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Oct  8 05:45:51 np0005475493 ceph-mon[73572]: from='client.? ' entity='client.rgw.rgw.compute-2.pgshil' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Oct  8 05:45:51 np0005475493 ceph-mon[73572]: from='client.? ' entity='client.rgw.rgw.compute-1.aaugis' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Oct  8 05:45:51 np0005475493 ceph-mon[73572]: from='client.? 192.168.122.100:0/4157537618' entity='client.rgw.rgw.compute-0.wdkdxi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Oct  8 05:45:51 np0005475493 ceph-mon[73572]: from='client.? 192.168.122.102:0/4200026288' entity='client.rgw.rgw.compute-2.pgshil' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Oct  8 05:45:51 np0005475493 ceph-mon[73572]: from='client.? ' entity='client.rgw.rgw.compute-2.pgshil' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Oct  8 05:45:51 np0005475493 ceph-mon[73572]: from='client.? 192.168.122.101:0/1900470648' entity='client.rgw.rgw.compute-1.aaugis' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Oct  8 05:45:51 np0005475493 ceph-mgr[73869]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct  8 05:45:51 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:45:51.642+0000 7f359c145140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct  8 05:45:51 np0005475493 ceph-mgr[73869]: mgr[py] Loading python module 'rbd_support'
Oct  8 05:45:51 np0005475493 ceph-mgr[73869]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct  8 05:45:51 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:45:51.744+0000 7f359c145140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct  8 05:45:51 np0005475493 ceph-mgr[73869]: mgr[py] Loading python module 'restful'
Oct  8 05:45:51 np0005475493 ceph-mgr[73869]: mgr[py] Loading python module 'rgw'
Oct  8 05:45:52 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 4.7 deep-scrub starts
Oct  8 05:45:52 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 4.7 deep-scrub ok
Oct  8 05:45:52 np0005475493 ceph-mgr[73869]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct  8 05:45:52 np0005475493 ceph-mgr[73869]: mgr[py] Loading python module 'rook'
Oct  8 05:45:52 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:45:52.176+0000 7f359c145140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct  8 05:45:52 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e43 do_prune osdmap full prune enabled
Oct  8 05:45:52 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.aaugis' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Oct  8 05:45:52 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4157537618' entity='client.rgw.rgw.compute-0.wdkdxi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Oct  8 05:45:52 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.pgshil' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Oct  8 05:45:52 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e44 e44: 3 total, 3 up, 3 in
Oct  8 05:45:52 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e44: 3 total, 3 up, 3 in
Oct  8 05:45:52 np0005475493 ceph-mon[73572]: from='client.? ' entity='client.rgw.rgw.compute-1.aaugis' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Oct  8 05:45:52 np0005475493 ceph-mon[73572]: from='client.? 192.168.122.100:0/4157537618' entity='client.rgw.rgw.compute-0.wdkdxi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Oct  8 05:45:52 np0005475493 ceph-mon[73572]: from='client.? ' entity='client.rgw.rgw.compute-2.pgshil' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Oct  8 05:45:52 np0005475493 radosgw[88577]: v1 topic migration: starting v1 topic migration..
Oct  8 05:45:52 np0005475493 radosgw[88577]: LDAP not started since no server URIs were provided in the configuration.
Oct  8 05:45:52 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-rgw-rgw-compute-0-wdkdxi[88573]: 2025-10-08T09:45:52.670+0000 7f175ed7a980 -1 LDAP not started since no server URIs were provided in the configuration.
Oct  8 05:45:52 np0005475493 radosgw[88577]: v1 topic migration: finished v1 topic migration
Oct  8 05:45:52 np0005475493 radosgw[88577]: framework: beast
Oct  8 05:45:52 np0005475493 radosgw[88577]: framework conf key: ssl_certificate, val: config://rgw/cert/$realm/$zone.crt
Oct  8 05:45:52 np0005475493 radosgw[88577]: framework conf key: ssl_private_key, val: config://rgw/cert/$realm/$zone.key
Oct  8 05:45:52 np0005475493 radosgw[88577]: INFO: RGWReshardLock::lock found lock on reshard.0000000001 to be held by another RGW process; skipping for now
Oct  8 05:45:52 np0005475493 radosgw[88577]: INFO: RGWReshardLock::lock found lock on reshard.0000000003 to be held by another RGW process; skipping for now
Oct  8 05:45:52 np0005475493 radosgw[88577]: INFO: RGWReshardLock::lock found lock on reshard.0000000005 to be held by another RGW process; skipping for now
Oct  8 05:45:52 np0005475493 radosgw[88577]: starting handler: beast
Oct  8 05:45:52 np0005475493 radosgw[88577]: INFO: RGWReshardLock::lock found lock on reshard.0000000007 to be held by another RGW process; skipping for now
Oct  8 05:45:52 np0005475493 radosgw[88577]: INFO: RGWReshardLock::lock found lock on reshard.0000000009 to be held by another RGW process; skipping for now
Oct  8 05:45:52 np0005475493 radosgw[88577]: set uid:gid to 167:167 (ceph:ceph)
Oct  8 05:45:52 np0005475493 radosgw[88577]: mgrc service_daemon_register rgw.14382 metadata {arch=x86_64,ceph_release=squid,ceph_version=ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable),ceph_version_short=19.2.3,container_hostname=compute-0,container_image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec,cpu=AMD EPYC-Rome Processor,distro=centos,distro_description=CentOS Stream 9,distro_version=9,frontend_config#0=beast endpoint=192.168.122.100:8082,frontend_type#0=beast,hostname=compute-0,id=rgw.compute-0.wdkdxi,kernel_description=#1 SMP PREEMPT_DYNAMIC Fri Sep 26 01:13:23 UTC 2025,kernel_version=5.14.0-620.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864104,num_handles=1,os=Linux,pid=2,realm_id=,realm_name=,zone_id=246b4a69-3c1d-47ce-b182-d12a3d96d3e3,zone_name=default,zonegroup_id=3218c688-50d3-4b3d-9517-1c08371b4e2e,zonegroup_name=default}
Oct  8 05:45:52 np0005475493 radosgw[88577]: INFO: RGWReshardLock::lock found lock on reshard.0000000010 to be held by another RGW process; skipping for now
Oct  8 05:45:52 np0005475493 ceph-mgr[73869]: mgr[py] Module rook has missing NOTIFY_TYPES member
Oct  8 05:45:52 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:45:52.752+0000 7f359c145140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Oct  8 05:45:52 np0005475493 ceph-mgr[73869]: mgr[py] Loading python module 'selftest'
Oct  8 05:45:52 np0005475493 radosgw[88577]: INFO: RGWReshardLock::lock found lock on reshard.0000000013 to be held by another RGW process; skipping for now
Oct  8 05:45:52 np0005475493 radosgw[88577]: INFO: RGWReshardLock::lock found lock on reshard.0000000015 to be held by another RGW process; skipping for now
Oct  8 05:45:52 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:45:52.833+0000 7f359c145140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct  8 05:45:52 np0005475493 ceph-mgr[73869]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct  8 05:45:52 np0005475493 ceph-mgr[73869]: mgr[py] Loading python module 'snap_schedule'
Oct  8 05:45:52 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:45:52.930+0000 7f359c145140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Oct  8 05:45:52 np0005475493 ceph-mgr[73869]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Oct  8 05:45:52 np0005475493 ceph-mgr[73869]: mgr[py] Loading python module 'stats'
Oct  8 05:45:53 np0005475493 ceph-mgr[73869]: mgr[py] Loading python module 'status'
Oct  8 05:45:53 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:45:53.094+0000 7f359c145140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Oct  8 05:45:53 np0005475493 ceph-mgr[73869]: mgr[py] Module status has missing NOTIFY_TYPES member
Oct  8 05:45:53 np0005475493 ceph-mgr[73869]: mgr[py] Loading python module 'telegraf'
Oct  8 05:45:53 np0005475493 ceph-mgr[73869]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct  8 05:45:53 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:45:53.164+0000 7f359c145140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct  8 05:45:53 np0005475493 ceph-mgr[73869]: mgr[py] Loading python module 'telemetry'
Oct  8 05:45:53 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 5.6 scrub starts
Oct  8 05:45:53 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 5.6 scrub ok
Oct  8 05:45:53 np0005475493 ceph-mgr[73869]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct  8 05:45:53 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:45:53.315+0000 7f359c145140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct  8 05:45:53 np0005475493 ceph-mgr[73869]: mgr[py] Loading python module 'test_orchestrator'
Oct  8 05:45:53 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e44 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  8 05:45:53 np0005475493 ceph-mgr[73869]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct  8 05:45:53 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:45:53.539+0000 7f359c145140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct  8 05:45:53 np0005475493 ceph-mgr[73869]: mgr[py] Loading python module 'volumes'
Oct  8 05:45:53 np0005475493 ceph-mgr[73869]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct  8 05:45:53 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:45:53.818+0000 7f359c145140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct  8 05:45:53 np0005475493 ceph-mgr[73869]: mgr[py] Loading python module 'zabbix'
Oct  8 05:45:53 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.swlvov restarted
Oct  8 05:45:53 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.swlvov started
Oct  8 05:45:53 np0005475493 ceph-mgr[73869]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct  8 05:45:53 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:45:53.894+0000 7f359c145140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct  8 05:45:53 np0005475493 ceph-mon[73572]: log_channel(cluster) log [INF] : Active manager daemon compute-0.ixicfj restarted
Oct  8 05:45:53 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e44 do_prune osdmap full prune enabled
Oct  8 05:45:53 np0005475493 ceph-mon[73572]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.ixicfj
Oct  8 05:45:53 np0005475493 ceph-mgr[73869]: ms_deliver_dispatch: unhandled message 0x5565e9db7860 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Oct  8 05:45:53 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e45 e45: 3 total, 3 up, 3 in
Oct  8 05:45:53 np0005475493 ceph-mgr[73869]: mgr handle_mgr_map Activating!
Oct  8 05:45:53 np0005475493 ceph-mgr[73869]: mgr handle_mgr_map I am now activating
Oct  8 05:45:53 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e45: 3 total, 3 up, 3 in
Oct  8 05:45:53 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : mgrmap e14: compute-0.ixicfj(active, starting, since 0.046874s), standbys: compute-2.mtagwx, compute-1.swlvov
Oct  8 05:45:53 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Oct  8 05:45:53 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14388 192.168.122.100:0/2483722133' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Oct  8 05:45:53 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Oct  8 05:45:53 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14388 192.168.122.100:0/2483722133' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct  8 05:45:53 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Oct  8 05:45:53 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14388 192.168.122.100:0/2483722133' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Oct  8 05:45:53 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.ixicfj", "id": "compute-0.ixicfj"} v 0)
Oct  8 05:45:53 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14388 192.168.122.100:0/2483722133' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "mgr metadata", "who": "compute-0.ixicfj", "id": "compute-0.ixicfj"}]: dispatch
Oct  8 05:45:53 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-2.mtagwx", "id": "compute-2.mtagwx"} v 0)
Oct  8 05:45:53 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14388 192.168.122.100:0/2483722133' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "mgr metadata", "who": "compute-2.mtagwx", "id": "compute-2.mtagwx"}]: dispatch
Oct  8 05:45:53 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-1.swlvov", "id": "compute-1.swlvov"} v 0)
Oct  8 05:45:53 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14388 192.168.122.100:0/2483722133' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "mgr metadata", "who": "compute-1.swlvov", "id": "compute-1.swlvov"}]: dispatch
Oct  8 05:45:53 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Oct  8 05:45:53 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14388 192.168.122.100:0/2483722133' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct  8 05:45:53 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Oct  8 05:45:53 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14388 192.168.122.100:0/2483722133' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct  8 05:45:53 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Oct  8 05:45:53 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14388 192.168.122.100:0/2483722133' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct  8 05:45:53 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata"} v 0)
Oct  8 05:45:53 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14388 192.168.122.100:0/2483722133' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "mds metadata"}]: dispatch
Oct  8 05:45:53 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).mds e1 all = 1
Oct  8 05:45:53 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata"} v 0)
Oct  8 05:45:53 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14388 192.168.122.100:0/2483722133' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd metadata"}]: dispatch
Oct  8 05:45:53 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata"} v 0)
Oct  8 05:45:53 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14388 192.168.122.100:0/2483722133' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "mon metadata"}]: dispatch
Oct  8 05:45:53 np0005475493 ceph-mgr[73869]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  8 05:45:53 np0005475493 ceph-mgr[73869]: mgr load Constructed class from module: balancer
Oct  8 05:45:53 np0005475493 ceph-mgr[73869]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  8 05:45:53 np0005475493 ceph-mgr[73869]: [balancer INFO root] Starting
Oct  8 05:45:53 np0005475493 ceph-mon[73572]: log_channel(cluster) log [INF] : Manager daemon compute-0.ixicfj is now available
Oct  8 05:45:53 np0005475493 ceph-mgr[73869]: [balancer INFO root] Optimize plan auto_2025-10-08_09:45:53
Oct  8 05:45:53 np0005475493 ceph-mgr[73869]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  8 05:45:53 np0005475493 ceph-mgr[73869]: [balancer INFO root] Some PGs (1.000000) are unknown; try again later
Oct  8 05:45:53 np0005475493 ceph-mgr[73869]: mgr load Constructed class from module: cephadm
Oct  8 05:45:53 np0005475493 ceph-mgr[73869]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  8 05:45:53 np0005475493 ceph-mgr[73869]: mgr load Constructed class from module: crash
Oct  8 05:45:53 np0005475493 ceph-mgr[73869]: [dashboard DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  8 05:45:53 np0005475493 ceph-mgr[73869]: mgr load Constructed class from module: dashboard
Oct  8 05:45:53 np0005475493 ceph-mgr[73869]: [dashboard INFO access_control] Loading user roles DB version=2
Oct  8 05:45:53 np0005475493 ceph-mgr[73869]: [dashboard INFO sso] Loading SSO DB version=1
Oct  8 05:45:53 np0005475493 ceph-mgr[73869]: [dashboard INFO root] server: ssl=no host=192.168.122.100 port=8443
Oct  8 05:45:53 np0005475493 ceph-mgr[73869]: [dashboard INFO root] Configured CherryPy, starting engine...
Oct  8 05:45:53 np0005475493 ceph-mgr[73869]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  8 05:45:53 np0005475493 ceph-mgr[73869]: mgr load Constructed class from module: devicehealth
Oct  8 05:45:53 np0005475493 ceph-mgr[73869]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  8 05:45:53 np0005475493 ceph-mgr[73869]: mgr load Constructed class from module: iostat
Oct  8 05:45:53 np0005475493 ceph-mgr[73869]: [devicehealth INFO root] Starting
Oct  8 05:45:53 np0005475493 ceph-mgr[73869]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  8 05:45:53 np0005475493 ceph-mgr[73869]: mgr load Constructed class from module: nfs
Oct  8 05:45:53 np0005475493 ceph-mgr[73869]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  8 05:45:53 np0005475493 ceph-mgr[73869]: mgr load Constructed class from module: orchestrator
Oct  8 05:45:53 np0005475493 ceph-mgr[73869]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  8 05:45:53 np0005475493 ceph-mgr[73869]: mgr load Constructed class from module: pg_autoscaler
Oct  8 05:45:53 np0005475493 ceph-mgr[73869]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  8 05:45:53 np0005475493 ceph-mgr[73869]: mgr load Constructed class from module: progress
Oct  8 05:45:53 np0005475493 ceph-mgr[73869]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  8 05:45:53 np0005475493 ceph-mgr[73869]: [progress INFO root] Loading...
Oct  8 05:45:53 np0005475493 ceph-mgr[73869]: [progress INFO root] Loaded [<progress.module.GhostEvent object at 0x7f351e4f4d90>, <progress.module.GhostEvent object at 0x7f351e505040>, <progress.module.GhostEvent object at 0x7f351e505070>, <progress.module.GhostEvent object at 0x7f351e5050a0>, <progress.module.GhostEvent object at 0x7f351e5050d0>, <progress.module.GhostEvent object at 0x7f351e505100>, <progress.module.GhostEvent object at 0x7f351e505130>, <progress.module.GhostEvent object at 0x7f351e505160>, <progress.module.GhostEvent object at 0x7f351e505190>, <progress.module.GhostEvent object at 0x7f351e5051c0>, <progress.module.GhostEvent object at 0x7f351e5051f0>, <progress.module.GhostEvent object at 0x7f351e505220>] historic events
Oct  8 05:45:53 np0005475493 ceph-mgr[73869]: [progress INFO root] Loaded OSDMap, ready.
Oct  8 05:45:54 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] _maybe_adjust
Oct  8 05:45:54 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] recovery thread starting
Oct  8 05:45:54 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] starting setup
Oct  8 05:45:54 np0005475493 ceph-mgr[73869]: mgr load Constructed class from module: rbd_support
Oct  8 05:45:54 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.ixicfj/mirror_snapshot_schedule"} v 0)
Oct  8 05:45:54 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14388 192.168.122.100:0/2483722133' entity='mgr.compute-0.ixicfj' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.ixicfj/mirror_snapshot_schedule"}]: dispatch
Oct  8 05:45:54 np0005475493 ceph-mgr[73869]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  8 05:45:54 np0005475493 ceph-mgr[73869]: mgr load Constructed class from module: restful
Oct  8 05:45:54 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  8 05:45:54 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  8 05:45:54 np0005475493 ceph-mgr[73869]: [restful INFO root] server_addr: :: server_port: 8003
Oct  8 05:45:54 np0005475493 ceph-mgr[73869]: [restful WARNING root] server not running: no certificate configured
Oct  8 05:45:54 np0005475493 ceph-mgr[73869]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  8 05:45:54 np0005475493 ceph-mgr[73869]: mgr load Constructed class from module: status
Oct  8 05:45:54 np0005475493 ceph-mgr[73869]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  8 05:45:54 np0005475493 ceph-mgr[73869]: mgr load Constructed class from module: telemetry
Oct  8 05:45:54 np0005475493 ceph-mgr[73869]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  8 05:45:54 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  8 05:45:54 np0005475493 ceph-mgr[73869]: mgr load Constructed class from module: volumes
Oct  8 05:45:54 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  8 05:45:54 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  8 05:45:54 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Oct  8 05:45:54 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] PerfHandler: starting
Oct  8 05:45:54 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_task_task: vms, start_after=
Oct  8 05:45:54 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_task_task: volumes, start_after=
Oct  8 05:45:54 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_task_task: backups, start_after=
Oct  8 05:45:54 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_task_task: images, start_after=
Oct  8 05:45:54 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] TaskHandler: starting
Oct  8 05:45:54 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.ixicfj/trash_purge_schedule"} v 0)
Oct  8 05:45:54 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14388 192.168.122.100:0/2483722133' entity='mgr.compute-0.ixicfj' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.ixicfj/trash_purge_schedule"}]: dispatch
Oct  8 05:45:54 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  8 05:45:54 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  8 05:45:54 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  8 05:45:54 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  8 05:45:54 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  8 05:45:54 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Oct  8 05:45:54 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] setup complete
Oct  8 05:45:54 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.mtagwx restarted
Oct  8 05:45:54 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.mtagwx started
Oct  8 05:45:54 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 5.c scrub starts
Oct  8 05:45:54 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 5.c scrub ok
Oct  8 05:45:54 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFS -> /api/cephfs
Oct  8 05:45:54 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFsUi -> /ui-api/cephfs
Oct  8 05:45:54 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolume -> /api/cephfs/subvolume
Oct  8 05:45:54 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolumeGroups -> /api/cephfs/subvolume/group
Oct  8 05:45:54 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolumeSnapshots -> /api/cephfs/subvolume/snapshot
Oct  8 05:45:54 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFsSnapshotClone -> /api/cephfs/subvolume/snapshot/clone
Oct  8 05:45:54 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSnapshotSchedule -> /api/cephfs/snapshot/schedule
Oct  8 05:45:54 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: IscsiUi -> /ui-api/iscsi
Oct  8 05:45:54 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Iscsi -> /api/iscsi
Oct  8 05:45:54 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: IscsiTarget -> /api/iscsi/target
Oct  8 05:45:54 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaCluster -> /api/nfs-ganesha/cluster
Oct  8 05:45:54 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaExports -> /api/nfs-ganesha/export
Oct  8 05:45:54 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaUi -> /ui-api/nfs-ganesha
Oct  8 05:45:54 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Orchestrator -> /ui-api/orchestrator
Oct  8 05:45:54 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Service -> /api/service
Oct  8 05:45:54 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroring -> /api/block/mirroring
Oct  8 05:45:54 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringSummary -> /api/block/mirroring/summary
Oct  8 05:45:54 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolMode -> /api/block/mirroring/pool
Oct  8 05:45:54 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolBootstrap -> /api/block/mirroring/pool/{pool_name}/bootstrap
Oct  8 05:45:54 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolPeer -> /api/block/mirroring/pool/{pool_name}/peer
Oct  8 05:45:54 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringStatus -> /ui-api/block/mirroring
Oct  8 05:45:54 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Pool -> /api/pool
Oct  8 05:45:54 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PoolUi -> /ui-api/pool
Oct  8 05:45:54 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RBDPool -> /api/pool
Oct  8 05:45:54 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Rbd -> /api/block/image
Oct  8 05:45:54 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdStatus -> /ui-api/block/rbd
Oct  8 05:45:54 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdSnapshot -> /api/block/image/{image_spec}/snap
Oct  8 05:45:54 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdTrash -> /api/block/image/trash
Oct  8 05:45:54 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdNamespace -> /api/block/pool/{pool_name}/namespace
Oct  8 05:45:54 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Rgw -> /ui-api/rgw
Oct  8 05:45:54 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwMultisiteStatus -> /ui-api/rgw/multisite
Oct  8 05:45:54 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwMultisiteController -> /api/rgw/multisite
Oct  8 05:45:54 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwDaemon -> /api/rgw/daemon
Oct  8 05:45:54 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwSite -> /api/rgw/site
Oct  8 05:45:54 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwBucket -> /api/rgw/bucket
Oct  8 05:45:54 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwBucketUi -> /ui-api/rgw/bucket
Oct  8 05:45:54 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwUser -> /api/rgw/user
Oct  8 05:45:54 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: rgwroles_CRUDClass -> /api/rgw/roles
Oct  8 05:45:54 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: rgwroles_CRUDClassMetadata -> /ui-api/rgw/roles
Oct  8 05:45:54 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwRealm -> /api/rgw/realm
Oct  8 05:45:54 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwZonegroup -> /api/rgw/zonegroup
Oct  8 05:45:54 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwZone -> /api/rgw/zone
Oct  8 05:45:54 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Auth -> /api/auth
Oct  8 05:45:54 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: clusteruser_CRUDClass -> /api/cluster/user
Oct  8 05:45:54 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: clusteruser_CRUDClassMetadata -> /ui-api/cluster/user
Oct  8 05:45:54 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Cluster -> /api/cluster
Oct  8 05:45:54 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ClusterUpgrade -> /api/cluster/upgrade
Oct  8 05:45:54 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ClusterConfiguration -> /api/cluster_conf
Oct  8 05:45:54 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CrushRule -> /api/crush_rule
Oct  8 05:45:54 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CrushRuleUi -> /ui-api/crush_rule
Oct  8 05:45:54 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Daemon -> /api/daemon
Oct  8 05:45:54 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Docs -> /docs
Oct  8 05:45:54 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ErasureCodeProfile -> /api/erasure_code_profile
Oct  8 05:45:54 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ErasureCodeProfileUi -> /ui-api/erasure_code_profile
Oct  8 05:45:54 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackController -> /api/feedback
Oct  8 05:45:54 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackApiController -> /api/feedback/api_key
Oct  8 05:45:54 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackUiController -> /ui-api/feedback/api_key
Oct  8 05:45:54 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FrontendLogging -> /ui-api/logging
Oct  8 05:45:54 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Grafana -> /api/grafana
Oct  8 05:45:54 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Host -> /api/host
Oct  8 05:45:54 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: HostUi -> /ui-api/host
Oct  8 05:45:54 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Health -> /api/health
Oct  8 05:45:54 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: HomeController -> /
Oct  8 05:45:54 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: LangsController -> /ui-api/langs
Oct  8 05:45:54 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: LoginController -> /ui-api/login
Oct  8 05:45:54 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Logs -> /api/logs
Oct  8 05:45:54 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MgrModules -> /api/mgr/module
Oct  8 05:45:54 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Monitor -> /api/monitor
Oct  8 05:45:54 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFGateway -> /api/nvmeof/gateway
Oct  8 05:45:54 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFSpdk -> /api/nvmeof/spdk
Oct  8 05:45:54 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFSubsystem -> /api/nvmeof/subsystem
Oct  8 05:45:54 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFListener -> /api/nvmeof/subsystem/{nqn}/listener
Oct  8 05:45:54 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFNamespace -> /api/nvmeof/subsystem/{nqn}/namespace
Oct  8 05:45:54 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFHost -> /api/nvmeof/subsystem/{nqn}/host
Oct  8 05:45:54 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFConnection -> /api/nvmeof/subsystem/{nqn}/connection
Oct  8 05:45:54 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFTcpUI -> /ui-api/nvmeof
Oct  8 05:45:54 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Osd -> /api/osd
Oct  8 05:45:54 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdUi -> /ui-api/osd
Oct  8 05:45:54 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdFlagsController -> /api/osd/flags
Oct  8 05:45:54 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MdsPerfCounter -> /api/perf_counters/mds
Oct  8 05:45:54 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MonPerfCounter -> /api/perf_counters/mon
Oct  8 05:45:54 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdPerfCounter -> /api/perf_counters/osd
Oct  8 05:45:54 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwPerfCounter -> /api/perf_counters/rgw
Oct  8 05:45:54 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirrorPerfCounter -> /api/perf_counters/rbd-mirror
Oct  8 05:45:54 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MgrPerfCounter -> /api/perf_counters/mgr
Oct  8 05:45:54 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: TcmuRunnerPerfCounter -> /api/perf_counters/tcmu-runner
Oct  8 05:45:54 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PerfCounters -> /api/perf_counters
Oct  8 05:45:54 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusReceiver -> /api/prometheus_receiver
Oct  8 05:45:54 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Prometheus -> /api/prometheus
Oct  8 05:45:54 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusNotifications -> /api/prometheus/notifications
Oct  8 05:45:54 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusSettings -> /ui-api/prometheus
Oct  8 05:45:54 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Role -> /api/role
Oct  8 05:45:54 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Scope -> /ui-api/scope
Oct  8 05:45:54 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Saml2 -> /auth/saml2
Oct  8 05:45:54 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Settings -> /api/settings
Oct  8 05:45:54 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: StandardSettings -> /ui-api/standard_settings
Oct  8 05:45:54 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Summary -> /api/summary
Oct  8 05:45:54 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Task -> /api/task
Oct  8 05:45:54 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Telemetry -> /api/telemetry
Oct  8 05:45:54 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: User -> /api/user
Oct  8 05:45:54 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: UserPasswordPolicy -> /api/user
Oct  8 05:45:54 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: UserChangePassword -> /api/user/{username}
Oct  8 05:45:54 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeatureTogglesEndpoint -> /api/feature_toggles
Oct  8 05:45:54 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MessageOfTheDay -> /ui-api/motd
Oct  8 05:45:54 np0005475493 systemd-logind[798]: New session 35 of user ceph-admin.
Oct  8 05:45:54 np0005475493 systemd[1]: Started Session 35 of User ceph-admin.
Oct  8 05:45:54 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.module] Engine started.
Oct  8 05:45:54 np0005475493 ceph-mon[73572]: Active manager daemon compute-0.ixicfj restarted
Oct  8 05:45:54 np0005475493 ceph-mon[73572]: Activating manager daemon compute-0.ixicfj
Oct  8 05:45:54 np0005475493 ceph-mon[73572]: Manager daemon compute-0.ixicfj is now available
Oct  8 05:45:54 np0005475493 ceph-mon[73572]: from='mgr.14388 192.168.122.100:0/2483722133' entity='mgr.compute-0.ixicfj' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.ixicfj/mirror_snapshot_schedule"}]: dispatch
Oct  8 05:45:54 np0005475493 ceph-mon[73572]: from='mgr.14388 192.168.122.100:0/2483722133' entity='mgr.compute-0.ixicfj' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.ixicfj/trash_purge_schedule"}]: dispatch
Oct  8 05:45:54 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : mgrmap e15: compute-0.ixicfj(active, since 1.05685s), standbys: compute-1.swlvov, compute-2.mtagwx
Oct  8 05:45:54 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.14394 -' entity='client.admin' cmd=[{"prefix": "dashboard set-grafana-api-username", "value": "admin", "target": ["mon-mgr", ""]}]: dispatch
Oct  8 05:45:54 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/GRAFANA_API_USERNAME}] v 0)
Oct  8 05:45:54 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v3: 197 pgs: 197 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Oct  8 05:45:54 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14388 192.168.122.100:0/2483722133' entity='mgr.compute-0.ixicfj' 
Oct  8 05:45:54 np0005475493 competent_babbage[89471]: Option GRAFANA_API_USERNAME updated
Oct  8 05:45:54 np0005475493 systemd[1]: libpod-b1564ce47afdf437a22dfc4e23b8735ffc224b8264db47d08fe8e7c95e7902b7.scope: Deactivated successfully.
Oct  8 05:45:54 np0005475493 podman[89438]: 2025-10-08 09:45:54.995359913 +0000 UTC m=+6.893055302 container died b1564ce47afdf437a22dfc4e23b8735ffc224b8264db47d08fe8e7c95e7902b7 (image=quay.io/ceph/ceph:v19, name=competent_babbage, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct  8 05:45:55 np0005475493 systemd[1]: var-lib-containers-storage-overlay-5e8bd965d281b1de97d6090e780ed71c35986901d13a335527fc6961a2bdd0c7-merged.mount: Deactivated successfully.
Oct  8 05:45:55 np0005475493 podman[89438]: 2025-10-08 09:45:55.035485805 +0000 UTC m=+6.933181194 container remove b1564ce47afdf437a22dfc4e23b8735ffc224b8264db47d08fe8e7c95e7902b7 (image=quay.io/ceph/ceph:v19, name=competent_babbage, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  8 05:45:55 np0005475493 systemd[1]: libpod-conmon-b1564ce47afdf437a22dfc4e23b8735ffc224b8264db47d08fe8e7c95e7902b7.scope: Deactivated successfully.
Oct  8 05:45:55 np0005475493 podman[89880]: 2025-10-08 09:45:55.085944371 +0000 UTC m=+0.062545525 container exec 01c666addd8584f02eb7d29040d97e45d8cb7f834fd134029c1f7cca550aeb9d (image=quay.io/ceph/ceph:v19, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-mon-compute-0, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  8 05:45:55 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 6.f scrub starts
Oct  8 05:45:55 np0005475493 podman[89880]: 2025-10-08 09:45:55.195346313 +0000 UTC m=+0.171947467 container exec_died 01c666addd8584f02eb7d29040d97e45d8cb7f834fd134029c1f7cca550aeb9d (image=quay.io/ceph/ceph:v19, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-mon-compute-0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  8 05:45:55 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 6.f scrub ok
Oct  8 05:45:55 np0005475493 python3[89935]: ansible-ansible.legacy.command Invoked with stdin=/home/grafana_password.yml stdin_add_newline=False _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 787292cc-8154-50c4-9e00-e9be3e817149 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   dashboard set-grafana-api-password -i - _uses_shell=False strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None
Oct  8 05:45:55 np0005475493 ceph-mgr[73869]: [cephadm INFO cherrypy.error] [08/Oct/2025:09:45:55] ENGINE Bus STARTING
Oct  8 05:45:55 np0005475493 ceph-mgr[73869]: log_channel(cephadm) log [INF] : [08/Oct/2025:09:45:55] ENGINE Bus STARTING
Oct  8 05:45:55 np0005475493 podman[89980]: 2025-10-08 09:45:55.424940444 +0000 UTC m=+0.037253996 container create ca8693167292375c3a2fd5b5e60343a14c437eb9889b5dc037f7c2fd697dde3b (image=quay.io/ceph/ceph:v19, name=exciting_spence, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct  8 05:45:55 np0005475493 systemd[1]: Started libpod-conmon-ca8693167292375c3a2fd5b5e60343a14c437eb9889b5dc037f7c2fd697dde3b.scope.
Oct  8 05:45:55 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:45:55 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d0a83369f082e3143486cc091df260eca6bee5cc94729f4d5021d6df0ffb54a/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct  8 05:45:55 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d0a83369f082e3143486cc091df260eca6bee5cc94729f4d5021d6df0ffb54a/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 05:45:55 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d0a83369f082e3143486cc091df260eca6bee5cc94729f4d5021d6df0ffb54a/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 05:45:55 np0005475493 ceph-mgr[73869]: [cephadm INFO cherrypy.error] [08/Oct/2025:09:45:55] ENGINE Serving on http://192.168.122.100:8765
Oct  8 05:45:55 np0005475493 ceph-mgr[73869]: log_channel(cephadm) log [INF] : [08/Oct/2025:09:45:55] ENGINE Serving on http://192.168.122.100:8765
Oct  8 05:45:55 np0005475493 podman[89980]: 2025-10-08 09:45:55.408706389 +0000 UTC m=+0.021019961 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  8 05:45:55 np0005475493 podman[89980]: 2025-10-08 09:45:55.51514849 +0000 UTC m=+0.127462092 container init ca8693167292375c3a2fd5b5e60343a14c437eb9889b5dc037f7c2fd697dde3b (image=quay.io/ceph/ceph:v19, name=exciting_spence, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Oct  8 05:45:55 np0005475493 podman[89980]: 2025-10-08 09:45:55.52173017 +0000 UTC m=+0.134043742 container start ca8693167292375c3a2fd5b5e60343a14c437eb9889b5dc037f7c2fd697dde3b (image=quay.io/ceph/ceph:v19, name=exciting_spence, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  8 05:45:55 np0005475493 podman[89980]: 2025-10-08 09:45:55.52601334 +0000 UTC m=+0.138326902 container attach ca8693167292375c3a2fd5b5e60343a14c437eb9889b5dc037f7c2fd697dde3b (image=quay.io/ceph/ceph:v19, name=exciting_spence, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  8 05:45:55 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct  8 05:45:55 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14388 192.168.122.100:0/2483722133' entity='mgr.compute-0.ixicfj' 
Oct  8 05:45:55 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct  8 05:45:55 np0005475493 ceph-mgr[73869]: [cephadm INFO cherrypy.error] [08/Oct/2025:09:45:55] ENGINE Serving on https://192.168.122.100:7150
Oct  8 05:45:55 np0005475493 ceph-mgr[73869]: log_channel(cephadm) log [INF] : [08/Oct/2025:09:45:55] ENGINE Serving on https://192.168.122.100:7150
Oct  8 05:45:55 np0005475493 ceph-mgr[73869]: [cephadm INFO cherrypy.error] [08/Oct/2025:09:45:55] ENGINE Bus STARTED
Oct  8 05:45:55 np0005475493 ceph-mgr[73869]: log_channel(cephadm) log [INF] : [08/Oct/2025:09:45:55] ENGINE Bus STARTED
Oct  8 05:45:55 np0005475493 ceph-mgr[73869]: [cephadm INFO cherrypy.error] [08/Oct/2025:09:45:55] ENGINE Client ('192.168.122.100', 52474) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Oct  8 05:45:55 np0005475493 ceph-mgr[73869]: log_channel(cephadm) log [INF] : [08/Oct/2025:09:45:55] ENGINE Client ('192.168.122.100', 52474) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Oct  8 05:45:55 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14388 192.168.122.100:0/2483722133' entity='mgr.compute-0.ixicfj' 
Oct  8 05:45:55 np0005475493 podman[90103]: 2025-10-08 09:45:55.77128902 +0000 UTC m=+0.073032985 container exec 0dbea514cc83cec397480471ade0bebb407738deaf60dfa42fe4b53fba64588f (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  8 05:45:55 np0005475493 podman[90103]: 2025-10-08 09:45:55.779574282 +0000 UTC m=+0.081318247 container exec_died 0dbea514cc83cec397480471ade0bebb407738deaf60dfa42fe4b53fba64588f (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  8 05:45:55 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  8 05:45:55 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14388 192.168.122.100:0/2483722133' entity='mgr.compute-0.ixicfj' 
Oct  8 05:45:55 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  8 05:45:55 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.14418 -' entity='client.admin' cmd=[{"prefix": "dashboard set-grafana-api-password", "target": ["mon-mgr", ""]}]: dispatch
Oct  8 05:45:55 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/GRAFANA_API_PASSWORD}] v 0)
Oct  8 05:45:55 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct  8 05:45:55 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14388 192.168.122.100:0/2483722133' entity='mgr.compute-0.ixicfj' 
Oct  8 05:45:55 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14388 192.168.122.100:0/2483722133' entity='mgr.compute-0.ixicfj' 
Oct  8 05:45:55 np0005475493 exciting_spence[90025]: Option GRAFANA_API_PASSWORD updated
Oct  8 05:45:55 np0005475493 systemd[1]: libpod-ca8693167292375c3a2fd5b5e60343a14c437eb9889b5dc037f7c2fd697dde3b.scope: Deactivated successfully.
Oct  8 05:45:55 np0005475493 podman[89980]: 2025-10-08 09:45:55.918910714 +0000 UTC m=+0.531224296 container died ca8693167292375c3a2fd5b5e60343a14c437eb9889b5dc037f7c2fd697dde3b (image=quay.io/ceph/ceph:v19, name=exciting_spence, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  8 05:45:55 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14388 192.168.122.100:0/2483722133' entity='mgr.compute-0.ixicfj' 
Oct  8 05:45:55 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct  8 05:45:55 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v4: 197 pgs: 197 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Oct  8 05:45:55 np0005475493 ceph-mon[73572]: log_channel(cluster) log [INF] : Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Oct  8 05:45:55 np0005475493 ceph-mon[73572]: log_channel(cluster) log [INF] : Cluster is now healthy
Oct  8 05:45:55 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14388 192.168.122.100:0/2483722133' entity='mgr.compute-0.ixicfj' 
Oct  8 05:45:56 np0005475493 ceph-mon[73572]: from='mgr.14388 192.168.122.100:0/2483722133' entity='mgr.compute-0.ixicfj' 
Oct  8 05:45:56 np0005475493 ceph-mon[73572]: [08/Oct/2025:09:45:55] ENGINE Bus STARTING
Oct  8 05:45:56 np0005475493 ceph-mon[73572]: [08/Oct/2025:09:45:55] ENGINE Serving on http://192.168.122.100:8765
Oct  8 05:45:56 np0005475493 ceph-mon[73572]: from='mgr.14388 192.168.122.100:0/2483722133' entity='mgr.compute-0.ixicfj' 
Oct  8 05:45:56 np0005475493 ceph-mon[73572]: [08/Oct/2025:09:45:55] ENGINE Serving on https://192.168.122.100:7150
Oct  8 05:45:56 np0005475493 ceph-mon[73572]: [08/Oct/2025:09:45:55] ENGINE Bus STARTED
Oct  8 05:45:56 np0005475493 ceph-mon[73572]: [08/Oct/2025:09:45:55] ENGINE Client ('192.168.122.100', 52474) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Oct  8 05:45:56 np0005475493 ceph-mon[73572]: from='mgr.14388 192.168.122.100:0/2483722133' entity='mgr.compute-0.ixicfj' 
Oct  8 05:45:56 np0005475493 ceph-mon[73572]: from='mgr.14388 192.168.122.100:0/2483722133' entity='mgr.compute-0.ixicfj' 
Oct  8 05:45:56 np0005475493 ceph-mon[73572]: from='mgr.14388 192.168.122.100:0/2483722133' entity='mgr.compute-0.ixicfj' 
Oct  8 05:45:56 np0005475493 ceph-mon[73572]: from='mgr.14388 192.168.122.100:0/2483722133' entity='mgr.compute-0.ixicfj' 
Oct  8 05:45:56 np0005475493 ceph-mon[73572]: from='mgr.14388 192.168.122.100:0/2483722133' entity='mgr.compute-0.ixicfj' 
Oct  8 05:45:56 np0005475493 systemd[1]: var-lib-containers-storage-overlay-0d0a83369f082e3143486cc091df260eca6bee5cc94729f4d5021d6df0ffb54a-merged.mount: Deactivated successfully.
Oct  8 05:45:56 np0005475493 podman[89980]: 2025-10-08 09:45:56.0599849 +0000 UTC m=+0.672298452 container remove ca8693167292375c3a2fd5b5e60343a14c437eb9889b5dc037f7c2fd697dde3b (image=quay.io/ceph/ceph:v19, name=exciting_spence, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Oct  8 05:45:56 np0005475493 systemd[1]: libpod-conmon-ca8693167292375c3a2fd5b5e60343a14c437eb9889b5dc037f7c2fd697dde3b.scope: Deactivated successfully.
Oct  8 05:45:56 np0005475493 ceph-mgr[73869]: [devicehealth INFO root] Check health
Oct  8 05:45:56 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 3.b scrub starts
Oct  8 05:45:56 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 3.b scrub ok
Oct  8 05:45:56 np0005475493 python3[90256]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 787292cc-8154-50c4-9e00-e9be3e817149 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   dashboard set-alertmanager-api-host http://192.168.122.100:9093#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  8 05:45:56 np0005475493 podman[90259]: 2025-10-08 09:45:56.517685327 +0000 UTC m=+0.039207755 container create abc8cbf5c3539b5c3c2e4e89c8bfe178a08e05834fa8b12b7bf4275aaec9cfb6 (image=quay.io/ceph/ceph:v19, name=dreamy_bose, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid)
Oct  8 05:45:56 np0005475493 systemd[1]: Started libpod-conmon-abc8cbf5c3539b5c3c2e4e89c8bfe178a08e05834fa8b12b7bf4275aaec9cfb6.scope.
Oct  8 05:45:56 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:45:56 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11eaa462f4e9f4bc5bbabe48f78d1a1afbcfc46705f425b52670878ab16e4fc5/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 05:45:56 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11eaa462f4e9f4bc5bbabe48f78d1a1afbcfc46705f425b52670878ab16e4fc5/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 05:45:56 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11eaa462f4e9f4bc5bbabe48f78d1a1afbcfc46705f425b52670878ab16e4fc5/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct  8 05:45:56 np0005475493 podman[90259]: 2025-10-08 09:45:56.501155643 +0000 UTC m=+0.022678071 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  8 05:45:56 np0005475493 podman[90259]: 2025-10-08 09:45:56.597493087 +0000 UTC m=+0.119015505 container init abc8cbf5c3539b5c3c2e4e89c8bfe178a08e05834fa8b12b7bf4275aaec9cfb6 (image=quay.io/ceph/ceph:v19, name=dreamy_bose, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct  8 05:45:56 np0005475493 podman[90259]: 2025-10-08 09:45:56.604534831 +0000 UTC m=+0.126057229 container start abc8cbf5c3539b5c3c2e4e89c8bfe178a08e05834fa8b12b7bf4275aaec9cfb6 (image=quay.io/ceph/ceph:v19, name=dreamy_bose, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Oct  8 05:45:56 np0005475493 podman[90259]: 2025-10-08 09:45:56.609264795 +0000 UTC m=+0.130787203 container attach abc8cbf5c3539b5c3c2e4e89c8bfe178a08e05834fa8b12b7bf4275aaec9cfb6 (image=quay.io/ceph/ceph:v19, name=dreamy_bose, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  8 05:45:56 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct  8 05:45:56 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14388 192.168.122.100:0/2483722133' entity='mgr.compute-0.ixicfj' 
Oct  8 05:45:56 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct  8 05:45:56 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14388 192.168.122.100:0/2483722133' entity='mgr.compute-0.ixicfj' 
Oct  8 05:45:56 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0)
Oct  8 05:45:56 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14388 192.168.122.100:0/2483722133' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Oct  8 05:45:56 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14388 192.168.122.100:0/2483722133' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]': finished
Oct  8 05:45:56 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : mgrmap e16: compute-0.ixicfj(active, since 3s), standbys: compute-1.swlvov, compute-2.mtagwx
Oct  8 05:45:56 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.14430 -' entity='client.admin' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://192.168.122.100:9093", "target": ["mon-mgr", ""]}]: dispatch
Oct  8 05:45:56 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/ALERTMANAGER_API_HOST}] v 0)
Oct  8 05:45:56 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14388 192.168.122.100:0/2483722133' entity='mgr.compute-0.ixicfj' 
Oct  8 05:45:56 np0005475493 dreamy_bose[90279]: Option ALERTMANAGER_API_HOST updated
Oct  8 05:45:56 np0005475493 systemd[1]: libpod-abc8cbf5c3539b5c3c2e4e89c8bfe178a08e05834fa8b12b7bf4275aaec9cfb6.scope: Deactivated successfully.
Oct  8 05:45:56 np0005475493 conmon[90279]: conmon abc8cbf5c3539b5c3c2e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-abc8cbf5c3539b5c3c2e4e89c8bfe178a08e05834fa8b12b7bf4275aaec9cfb6.scope/container/memory.events
Oct  8 05:45:56 np0005475493 podman[90259]: 2025-10-08 09:45:56.978553511 +0000 UTC m=+0.500075909 container died abc8cbf5c3539b5c3c2e4e89c8bfe178a08e05834fa8b12b7bf4275aaec9cfb6 (image=quay.io/ceph/ceph:v19, name=dreamy_bose, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  8 05:45:57 np0005475493 systemd[1]: var-lib-containers-storage-overlay-11eaa462f4e9f4bc5bbabe48f78d1a1afbcfc46705f425b52670878ab16e4fc5-merged.mount: Deactivated successfully.
Oct  8 05:45:57 np0005475493 ceph-mon[73572]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Oct  8 05:45:57 np0005475493 ceph-mon[73572]: Cluster is now healthy
Oct  8 05:45:57 np0005475493 ceph-mon[73572]: from='mgr.14388 192.168.122.100:0/2483722133' entity='mgr.compute-0.ixicfj' 
Oct  8 05:45:57 np0005475493 ceph-mon[73572]: from='mgr.14388 192.168.122.100:0/2483722133' entity='mgr.compute-0.ixicfj' 
Oct  8 05:45:57 np0005475493 ceph-mon[73572]: from='mgr.14388 192.168.122.100:0/2483722133' entity='mgr.compute-0.ixicfj' 
Oct  8 05:45:57 np0005475493 ceph-mon[73572]: from='mgr.14388 192.168.122.100:0/2483722133' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Oct  8 05:45:57 np0005475493 ceph-mon[73572]: from='mgr.14388 192.168.122.100:0/2483722133' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]': finished
Oct  8 05:45:57 np0005475493 ceph-mon[73572]: from='mgr.14388 192.168.122.100:0/2483722133' entity='mgr.compute-0.ixicfj' 
Oct  8 05:45:57 np0005475493 podman[90259]: 2025-10-08 09:45:57.019326062 +0000 UTC m=+0.540848460 container remove abc8cbf5c3539b5c3c2e4e89c8bfe178a08e05834fa8b12b7bf4275aaec9cfb6 (image=quay.io/ceph/ceph:v19, name=dreamy_bose, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Oct  8 05:45:57 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  8 05:45:57 np0005475493 systemd[1]: libpod-conmon-abc8cbf5c3539b5c3c2e4e89c8bfe178a08e05834fa8b12b7bf4275aaec9cfb6.scope: Deactivated successfully.
Oct  8 05:45:57 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14388 192.168.122.100:0/2483722133' entity='mgr.compute-0.ixicfj' 
Oct  8 05:45:57 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  8 05:45:57 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14388 192.168.122.100:0/2483722133' entity='mgr.compute-0.ixicfj' 
Oct  8 05:45:57 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Oct  8 05:45:57 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14388 192.168.122.100:0/2483722133' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct  8 05:45:57 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct  8 05:45:57 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14388 192.168.122.100:0/2483722133' entity='mgr.compute-0.ixicfj' 
Oct  8 05:45:57 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct  8 05:45:57 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14388 192.168.122.100:0/2483722133' entity='mgr.compute-0.ixicfj' 
Oct  8 05:45:57 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0)
Oct  8 05:45:57 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14388 192.168.122.100:0/2483722133' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct  8 05:45:57 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  8 05:45:57 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14388 192.168.122.100:0/2483722133' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  8 05:45:57 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct  8 05:45:57 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14388 192.168.122.100:0/2483722133' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  8 05:45:57 np0005475493 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Oct  8 05:45:57 np0005475493 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Oct  8 05:45:57 np0005475493 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.conf
Oct  8 05:45:57 np0005475493 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.conf
Oct  8 05:45:57 np0005475493 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.conf
Oct  8 05:45:57 np0005475493 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.conf
Oct  8 05:45:57 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 4.b scrub starts
Oct  8 05:45:57 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 4.b scrub ok
Oct  8 05:45:57 np0005475493 python3[90419]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 787292cc-8154-50c4-9e00-e9be3e817149 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   dashboard set-prometheus-api-host http://192.168.122.100:9092#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  8 05:45:57 np0005475493 podman[90491]: 2025-10-08 09:45:57.457225896 +0000 UTC m=+0.054299585 container create bafedd0cfa298ad16ae4600d695e19395206614c4dd1b98f4e624fd8a8e325dd (image=quay.io/ceph/ceph:v19, name=serene_dhawan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  8 05:45:57 np0005475493 systemd[1]: Started libpod-conmon-bafedd0cfa298ad16ae4600d695e19395206614c4dd1b98f4e624fd8a8e325dd.scope.
Oct  8 05:45:57 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:45:57 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20096b8cc0cdcfd83de6ae8f2ef4bd6147a85c4ae22cd53442a955faf79aef42/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct  8 05:45:57 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20096b8cc0cdcfd83de6ae8f2ef4bd6147a85c4ae22cd53442a955faf79aef42/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 05:45:57 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20096b8cc0cdcfd83de6ae8f2ef4bd6147a85c4ae22cd53442a955faf79aef42/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 05:45:57 np0005475493 podman[90491]: 2025-10-08 09:45:57.528767304 +0000 UTC m=+0.125841003 container init bafedd0cfa298ad16ae4600d695e19395206614c4dd1b98f4e624fd8a8e325dd (image=quay.io/ceph/ceph:v19, name=serene_dhawan, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  8 05:45:57 np0005475493 podman[90491]: 2025-10-08 09:45:57.44227634 +0000 UTC m=+0.039350059 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  8 05:45:57 np0005475493 podman[90491]: 2025-10-08 09:45:57.539665165 +0000 UTC m=+0.136738854 container start bafedd0cfa298ad16ae4600d695e19395206614c4dd1b98f4e624fd8a8e325dd (image=quay.io/ceph/ceph:v19, name=serene_dhawan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct  8 05:45:57 np0005475493 podman[90491]: 2025-10-08 09:45:57.542881214 +0000 UTC m=+0.139954903 container attach bafedd0cfa298ad16ae4600d695e19395206614c4dd1b98f4e624fd8a8e325dd (image=quay.io/ceph/ceph:v19, name=serene_dhawan, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  8 05:45:57 np0005475493 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.conf
Oct  8 05:45:57 np0005475493 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.conf
Oct  8 05:45:57 np0005475493 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.conf
Oct  8 05:45:57 np0005475493 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.conf
Oct  8 05:45:57 np0005475493 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.conf
Oct  8 05:45:57 np0005475493 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.conf
Oct  8 05:45:57 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.14436 -' entity='client.admin' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://192.168.122.100:9092", "target": ["mon-mgr", ""]}]: dispatch
Oct  8 05:45:57 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/PROMETHEUS_API_HOST}] v 0)
Oct  8 05:45:57 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14388 192.168.122.100:0/2483722133' entity='mgr.compute-0.ixicfj' 
Oct  8 05:45:57 np0005475493 serene_dhawan[90535]: Option PROMETHEUS_API_HOST updated
Oct  8 05:45:57 np0005475493 systemd[1]: libpod-bafedd0cfa298ad16ae4600d695e19395206614c4dd1b98f4e624fd8a8e325dd.scope: Deactivated successfully.
Oct  8 05:45:57 np0005475493 podman[90491]: 2025-10-08 09:45:57.928916428 +0000 UTC m=+0.525990147 container died bafedd0cfa298ad16ae4600d695e19395206614c4dd1b98f4e624fd8a8e325dd (image=quay.io/ceph/ceph:v19, name=serene_dhawan, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct  8 05:45:57 np0005475493 systemd[1]: var-lib-containers-storage-overlay-20096b8cc0cdcfd83de6ae8f2ef4bd6147a85c4ae22cd53442a955faf79aef42-merged.mount: Deactivated successfully.
Oct  8 05:45:57 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v5: 197 pgs: 197 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Oct  8 05:45:57 np0005475493 podman[90491]: 2025-10-08 09:45:57.967129521 +0000 UTC m=+0.564203200 container remove bafedd0cfa298ad16ae4600d695e19395206614c4dd1b98f4e624fd8a8e325dd (image=quay.io/ceph/ceph:v19, name=serene_dhawan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct  8 05:45:57 np0005475493 systemd[1]: libpod-conmon-bafedd0cfa298ad16ae4600d695e19395206614c4dd1b98f4e624fd8a8e325dd.scope: Deactivated successfully.
Oct  8 05:45:58 np0005475493 ceph-mon[73572]: from='mgr.14388 192.168.122.100:0/2483722133' entity='mgr.compute-0.ixicfj' 
Oct  8 05:45:58 np0005475493 ceph-mon[73572]: from='mgr.14388 192.168.122.100:0/2483722133' entity='mgr.compute-0.ixicfj' 
Oct  8 05:45:58 np0005475493 ceph-mon[73572]: from='mgr.14388 192.168.122.100:0/2483722133' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct  8 05:45:58 np0005475493 ceph-mon[73572]: from='mgr.14388 192.168.122.100:0/2483722133' entity='mgr.compute-0.ixicfj' 
Oct  8 05:45:58 np0005475493 ceph-mon[73572]: from='mgr.14388 192.168.122.100:0/2483722133' entity='mgr.compute-0.ixicfj' 
Oct  8 05:45:58 np0005475493 ceph-mon[73572]: from='mgr.14388 192.168.122.100:0/2483722133' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct  8 05:45:58 np0005475493 ceph-mon[73572]: from='mgr.14388 192.168.122.100:0/2483722133' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  8 05:45:58 np0005475493 ceph-mon[73572]: Updating compute-0:/etc/ceph/ceph.conf
Oct  8 05:45:58 np0005475493 ceph-mon[73572]: Updating compute-1:/etc/ceph/ceph.conf
Oct  8 05:45:58 np0005475493 ceph-mon[73572]: Updating compute-2:/etc/ceph/ceph.conf
Oct  8 05:45:58 np0005475493 ceph-mon[73572]: from='mgr.14388 192.168.122.100:0/2483722133' entity='mgr.compute-0.ixicfj' 
Oct  8 05:45:58 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 5.a scrub starts
Oct  8 05:45:58 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 5.a scrub ok
Oct  8 05:45:58 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : mgrmap e17: compute-0.ixicfj(active, since 4s), standbys: compute-1.swlvov, compute-2.mtagwx
Oct  8 05:45:58 np0005475493 python3[90869]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 787292cc-8154-50c4-9e00-e9be3e817149 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   dashboard set-grafana-api-url http://192.168.122.100:3100#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  8 05:45:58 np0005475493 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Oct  8 05:45:58 np0005475493 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Oct  8 05:45:58 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e45 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  8 05:45:58 np0005475493 podman[90919]: 2025-10-08 09:45:58.356173418 +0000 UTC m=+0.049634503 container create fed70f8a450b8eef68677930c9ad23f5c827810cd1c2d0b165bb6fc1552b1246 (image=quay.io/ceph/ceph:v19, name=distracted_austin, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  8 05:45:58 np0005475493 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Oct  8 05:45:58 np0005475493 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Oct  8 05:45:58 np0005475493 systemd[1]: Started libpod-conmon-fed70f8a450b8eef68677930c9ad23f5c827810cd1c2d0b165bb6fc1552b1246.scope.
Oct  8 05:45:58 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:45:58 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0c2832265005c4a161d9257384ad940178be9d4131f14f5c07a02f8a9aaf2b5/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 05:45:58 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0c2832265005c4a161d9257384ad940178be9d4131f14f5c07a02f8a9aaf2b5/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 05:45:58 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0c2832265005c4a161d9257384ad940178be9d4131f14f5c07a02f8a9aaf2b5/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct  8 05:45:58 np0005475493 podman[90919]: 2025-10-08 09:45:58.337457498 +0000 UTC m=+0.030918593 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  8 05:45:58 np0005475493 podman[90919]: 2025-10-08 09:45:58.439571538 +0000 UTC m=+0.133032623 container init fed70f8a450b8eef68677930c9ad23f5c827810cd1c2d0b165bb6fc1552b1246 (image=quay.io/ceph/ceph:v19, name=distracted_austin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct  8 05:45:58 np0005475493 podman[90919]: 2025-10-08 09:45:58.446580591 +0000 UTC m=+0.140041676 container start fed70f8a450b8eef68677930c9ad23f5c827810cd1c2d0b165bb6fc1552b1246 (image=quay.io/ceph/ceph:v19, name=distracted_austin, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct  8 05:45:58 np0005475493 podman[90919]: 2025-10-08 09:45:58.450625714 +0000 UTC m=+0.144086799 container attach fed70f8a450b8eef68677930c9ad23f5c827810cd1c2d0b165bb6fc1552b1246 (image=quay.io/ceph/ceph:v19, name=distracted_austin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  8 05:45:58 np0005475493 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Oct  8 05:45:58 np0005475493 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Oct  8 05:45:58 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.14442 -' entity='client.admin' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "http://192.168.122.100:3100", "target": ["mon-mgr", ""]}]: dispatch
Oct  8 05:45:58 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/GRAFANA_API_URL}] v 0)
Oct  8 05:45:58 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14388 192.168.122.100:0/2483722133' entity='mgr.compute-0.ixicfj' 
Oct  8 05:45:58 np0005475493 distracted_austin[90977]: Option GRAFANA_API_URL updated
Oct  8 05:45:58 np0005475493 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.client.admin.keyring
Oct  8 05:45:58 np0005475493 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.client.admin.keyring
Oct  8 05:45:58 np0005475493 systemd[1]: libpod-fed70f8a450b8eef68677930c9ad23f5c827810cd1c2d0b165bb6fc1552b1246.scope: Deactivated successfully.
Oct  8 05:45:58 np0005475493 podman[90919]: 2025-10-08 09:45:58.834529774 +0000 UTC m=+0.527990829 container died fed70f8a450b8eef68677930c9ad23f5c827810cd1c2d0b165bb6fc1552b1246 (image=quay.io/ceph/ceph:v19, name=distracted_austin, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Oct  8 05:45:58 np0005475493 systemd[1]: var-lib-containers-storage-overlay-b0c2832265005c4a161d9257384ad940178be9d4131f14f5c07a02f8a9aaf2b5-merged.mount: Deactivated successfully.
Oct  8 05:45:58 np0005475493 podman[90919]: 2025-10-08 09:45:58.871447937 +0000 UTC m=+0.564909002 container remove fed70f8a450b8eef68677930c9ad23f5c827810cd1c2d0b165bb6fc1552b1246 (image=quay.io/ceph/ceph:v19, name=distracted_austin, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  8 05:45:58 np0005475493 systemd[1]: libpod-conmon-fed70f8a450b8eef68677930c9ad23f5c827810cd1c2d0b165bb6fc1552b1246.scope: Deactivated successfully.
Oct  8 05:45:58 np0005475493 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.client.admin.keyring
Oct  8 05:45:58 np0005475493 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.client.admin.keyring
Oct  8 05:45:59 np0005475493 ceph-mon[73572]: Updating compute-1:/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.conf
Oct  8 05:45:59 np0005475493 ceph-mon[73572]: Updating compute-0:/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.conf
Oct  8 05:45:59 np0005475493 ceph-mon[73572]: Updating compute-2:/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.conf
Oct  8 05:45:59 np0005475493 ceph-mon[73572]: Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Oct  8 05:45:59 np0005475493 ceph-mon[73572]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Oct  8 05:45:59 np0005475493 ceph-mon[73572]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Oct  8 05:45:59 np0005475493 ceph-mon[73572]: from='mgr.14388 192.168.122.100:0/2483722133' entity='mgr.compute-0.ixicfj' 
Oct  8 05:45:59 np0005475493 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.client.admin.keyring
Oct  8 05:45:59 np0005475493 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.client.admin.keyring
Oct  8 05:45:59 np0005475493 python3[91288]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 787292cc-8154-50c4-9e00-e9be3e817149 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mgr module disable dashboard _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  8 05:45:59 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 6.9 deep-scrub starts
Oct  8 05:45:59 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 6.9 deep-scrub ok
Oct  8 05:45:59 np0005475493 podman[91344]: 2025-10-08 09:45:59.25315136 +0000 UTC m=+0.051151128 container create 0da5111c8beba0fdd960690ab036dfc53c9838226722a614e8de0bec5d71d479 (image=quay.io/ceph/ceph:v19, name=boring_dewdney, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  8 05:45:59 np0005475493 systemd[1]: Started libpod-conmon-0da5111c8beba0fdd960690ab036dfc53c9838226722a614e8de0bec5d71d479.scope.
Oct  8 05:45:59 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:45:59 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7ce874085f0958d14ae936aee77d1004dc53418b22b672d38f40d8e40239d7d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 05:45:59 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7ce874085f0958d14ae936aee77d1004dc53418b22b672d38f40d8e40239d7d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 05:45:59 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7ce874085f0958d14ae936aee77d1004dc53418b22b672d38f40d8e40239d7d/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct  8 05:45:59 np0005475493 podman[91344]: 2025-10-08 09:45:59.230202071 +0000 UTC m=+0.028201849 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  8 05:45:59 np0005475493 podman[91344]: 2025-10-08 09:45:59.331901259 +0000 UTC m=+0.129901057 container init 0da5111c8beba0fdd960690ab036dfc53c9838226722a614e8de0bec5d71d479 (image=quay.io/ceph/ceph:v19, name=boring_dewdney, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  8 05:45:59 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct  8 05:45:59 np0005475493 podman[91344]: 2025-10-08 09:45:59.342418498 +0000 UTC m=+0.140418286 container start 0da5111c8beba0fdd960690ab036dfc53c9838226722a614e8de0bec5d71d479 (image=quay.io/ceph/ceph:v19, name=boring_dewdney, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  8 05:45:59 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14388 192.168.122.100:0/2483722133' entity='mgr.compute-0.ixicfj' 
Oct  8 05:45:59 np0005475493 podman[91344]: 2025-10-08 09:45:59.345784641 +0000 UTC m=+0.143784429 container attach 0da5111c8beba0fdd960690ab036dfc53c9838226722a614e8de0bec5d71d479 (image=quay.io/ceph/ceph:v19, name=boring_dewdney, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct  8 05:45:59 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct  8 05:45:59 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14388 192.168.122.100:0/2483722133' entity='mgr.compute-0.ixicfj' 
Oct  8 05:45:59 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  8 05:45:59 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14388 192.168.122.100:0/2483722133' entity='mgr.compute-0.ixicfj' 
Oct  8 05:45:59 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  8 05:45:59 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14388 192.168.122.100:0/2483722133' entity='mgr.compute-0.ixicfj' 
Oct  8 05:45:59 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct  8 05:45:59 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14388 192.168.122.100:0/2483722133' entity='mgr.compute-0.ixicfj' 
Oct  8 05:45:59 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct  8 05:45:59 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14388 192.168.122.100:0/2483722133' entity='mgr.compute-0.ixicfj' 
Oct  8 05:45:59 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct  8 05:45:59 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14388 192.168.122.100:0/2483722133' entity='mgr.compute-0.ixicfj' 
Oct  8 05:45:59 np0005475493 ceph-mgr[73869]: [progress INFO root] update: starting ev f9bd10ca-c3c1-4645-b329-5c0fc669d3eb (Updating node-exporter deployment (+2 -> 3))
Oct  8 05:45:59 np0005475493 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Deploying daemon node-exporter.compute-1 on compute-1
Oct  8 05:45:59 np0005475493 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Deploying daemon node-exporter.compute-1 on compute-1
Oct  8 05:45:59 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module disable", "module": "dashboard"} v 0)
Oct  8 05:45:59 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1343562250' entity='client.admin' cmd=[{"prefix": "mgr module disable", "module": "dashboard"}]: dispatch
Oct  8 05:45:59 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v6: 197 pgs: 197 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 0 B/s wr, 14 op/s
Oct  8 05:46:00 np0005475493 ceph-mon[73572]: Updating compute-1:/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.client.admin.keyring
Oct  8 05:46:00 np0005475493 ceph-mon[73572]: Updating compute-0:/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.client.admin.keyring
Oct  8 05:46:00 np0005475493 ceph-mon[73572]: Updating compute-2:/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.client.admin.keyring
Oct  8 05:46:00 np0005475493 ceph-mon[73572]: from='mgr.14388 192.168.122.100:0/2483722133' entity='mgr.compute-0.ixicfj' 
Oct  8 05:46:00 np0005475493 ceph-mon[73572]: from='mgr.14388 192.168.122.100:0/2483722133' entity='mgr.compute-0.ixicfj' 
Oct  8 05:46:00 np0005475493 ceph-mon[73572]: from='mgr.14388 192.168.122.100:0/2483722133' entity='mgr.compute-0.ixicfj' 
Oct  8 05:46:00 np0005475493 ceph-mon[73572]: from='mgr.14388 192.168.122.100:0/2483722133' entity='mgr.compute-0.ixicfj' 
Oct  8 05:46:00 np0005475493 ceph-mon[73572]: from='mgr.14388 192.168.122.100:0/2483722133' entity='mgr.compute-0.ixicfj' 
Oct  8 05:46:00 np0005475493 ceph-mon[73572]: from='mgr.14388 192.168.122.100:0/2483722133' entity='mgr.compute-0.ixicfj' 
Oct  8 05:46:00 np0005475493 ceph-mon[73572]: from='mgr.14388 192.168.122.100:0/2483722133' entity='mgr.compute-0.ixicfj' 
Oct  8 05:46:00 np0005475493 ceph-mon[73572]: from='client.? 192.168.122.100:0/1343562250' entity='client.admin' cmd=[{"prefix": "mgr module disable", "module": "dashboard"}]: dispatch
Oct  8 05:46:00 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 6.b scrub starts
Oct  8 05:46:00 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 6.b scrub ok
Oct  8 05:46:00 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1343562250' entity='client.admin' cmd='[{"prefix": "mgr module disable", "module": "dashboard"}]': finished
Oct  8 05:46:00 np0005475493 ceph-mgr[73869]: mgr handle_mgr_map respawning because set of enabled modules changed!
Oct  8 05:46:00 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : mgrmap e18: compute-0.ixicfj(active, since 6s), standbys: compute-1.swlvov, compute-2.mtagwx
Oct  8 05:46:00 np0005475493 systemd[1]: libpod-0da5111c8beba0fdd960690ab036dfc53c9838226722a614e8de0bec5d71d479.scope: Deactivated successfully.
Oct  8 05:46:00 np0005475493 podman[91344]: 2025-10-08 09:46:00.761958362 +0000 UTC m=+1.559958170 container died 0da5111c8beba0fdd960690ab036dfc53c9838226722a614e8de0bec5d71d479 (image=quay.io/ceph/ceph:v19, name=boring_dewdney, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid)
Oct  8 05:46:00 np0005475493 systemd[1]: var-lib-containers-storage-overlay-d7ce874085f0958d14ae936aee77d1004dc53418b22b672d38f40d8e40239d7d-merged.mount: Deactivated successfully.
Oct  8 05:46:00 np0005475493 podman[91344]: 2025-10-08 09:46:00.803483507 +0000 UTC m=+1.601483295 container remove 0da5111c8beba0fdd960690ab036dfc53c9838226722a614e8de0bec5d71d479 (image=quay.io/ceph/ceph:v19, name=boring_dewdney, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325)
Oct  8 05:46:00 np0005475493 systemd-logind[798]: Session 35 logged out. Waiting for processes to exit.
Oct  8 05:46:00 np0005475493 systemd[1]: libpod-conmon-0da5111c8beba0fdd960690ab036dfc53c9838226722a614e8de0bec5d71d479.scope: Deactivated successfully.
Oct  8 05:46:00 np0005475493 systemd[1]: session-35.scope: Deactivated successfully.
Oct  8 05:46:00 np0005475493 systemd[1]: session-35.scope: Consumed 4.805s CPU time.
Oct  8 05:46:00 np0005475493 systemd-logind[798]: Removed session 35.
Oct  8 05:46:00 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ignoring --setuser ceph since I am not root
Oct  8 05:46:00 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ignoring --setgroup ceph since I am not root
Oct  8 05:46:00 np0005475493 ceph-mgr[73869]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Oct  8 05:46:00 np0005475493 ceph-mgr[73869]: pidfile_write: ignore empty --pid-file
Oct  8 05:46:00 np0005475493 ceph-mgr[73869]: mgr[py] Loading python module 'alerts'
Oct  8 05:46:00 np0005475493 ceph-mgr[73869]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct  8 05:46:00 np0005475493 ceph-mgr[73869]: mgr[py] Loading python module 'balancer'
Oct  8 05:46:00 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:46:00.978+0000 7f67ee4c5140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct  8 05:46:01 np0005475493 ceph-mgr[73869]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct  8 05:46:01 np0005475493 ceph-mgr[73869]: mgr[py] Loading python module 'cephadm'
Oct  8 05:46:01 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:46:01.060+0000 7f67ee4c5140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct  8 05:46:01 np0005475493 ceph-mon[73572]: Deploying daemon node-exporter.compute-1 on compute-1
Oct  8 05:46:01 np0005475493 ceph-mon[73572]: from='client.? 192.168.122.100:0/1343562250' entity='client.admin' cmd='[{"prefix": "mgr module disable", "module": "dashboard"}]': finished
Oct  8 05:46:01 np0005475493 python3[91566]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 787292cc-8154-50c4-9e00-e9be3e817149 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mgr module enable dashboard _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  8 05:46:01 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 4.17 scrub starts
Oct  8 05:46:01 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 4.17 scrub ok
Oct  8 05:46:01 np0005475493 podman[91567]: 2025-10-08 09:46:01.226961652 +0000 UTC m=+0.083620138 container create b7f7f2259363362eb7ab3dde338cc1cfa7ff1bcbbaf589551c76ea1b3c0226a5 (image=quay.io/ceph/ceph:v19, name=dreamy_galois, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  8 05:46:01 np0005475493 systemd[1]: Started libpod-conmon-b7f7f2259363362eb7ab3dde338cc1cfa7ff1bcbbaf589551c76ea1b3c0226a5.scope.
Oct  8 05:46:01 np0005475493 podman[91567]: 2025-10-08 09:46:01.180729214 +0000 UTC m=+0.037387760 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  8 05:46:01 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:46:01 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3cdf2017d01e363e54c73eb4446356611017dc9c6b2826887b374758d5aab149/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 05:46:01 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3cdf2017d01e363e54c73eb4446356611017dc9c6b2826887b374758d5aab149/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct  8 05:46:01 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3cdf2017d01e363e54c73eb4446356611017dc9c6b2826887b374758d5aab149/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 05:46:01 np0005475493 podman[91567]: 2025-10-08 09:46:01.297455178 +0000 UTC m=+0.154113664 container init b7f7f2259363362eb7ab3dde338cc1cfa7ff1bcbbaf589551c76ea1b3c0226a5 (image=quay.io/ceph/ceph:v19, name=dreamy_galois, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct  8 05:46:01 np0005475493 podman[91567]: 2025-10-08 09:46:01.304260215 +0000 UTC m=+0.160918681 container start b7f7f2259363362eb7ab3dde338cc1cfa7ff1bcbbaf589551c76ea1b3c0226a5 (image=quay.io/ceph/ceph:v19, name=dreamy_galois, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Oct  8 05:46:01 np0005475493 podman[91567]: 2025-10-08 09:46:01.31163322 +0000 UTC m=+0.168291686 container attach b7f7f2259363362eb7ab3dde338cc1cfa7ff1bcbbaf589551c76ea1b3c0226a5 (image=quay.io/ceph/ceph:v19, name=dreamy_galois, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct  8 05:46:01 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module enable", "module": "dashboard"} v 0)
Oct  8 05:46:01 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3794820163' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch
Oct  8 05:46:01 np0005475493 ceph-mgr[73869]: mgr[py] Loading python module 'crash'
Oct  8 05:46:01 np0005475493 ceph-mgr[73869]: mgr[py] Module crash has missing NOTIFY_TYPES member
Oct  8 05:46:01 np0005475493 ceph-mgr[73869]: mgr[py] Loading python module 'dashboard'
Oct  8 05:46:01 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:46:01.861+0000 7f67ee4c5140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Oct  8 05:46:02 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3794820163' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished
Oct  8 05:46:02 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : mgrmap e19: compute-0.ixicfj(active, since 8s), standbys: compute-1.swlvov, compute-2.mtagwx
Oct  8 05:46:02 np0005475493 systemd[1]: libpod-b7f7f2259363362eb7ab3dde338cc1cfa7ff1bcbbaf589551c76ea1b3c0226a5.scope: Deactivated successfully.
Oct  8 05:46:02 np0005475493 podman[91567]: 2025-10-08 09:46:02.115883099 +0000 UTC m=+0.972541555 container died b7f7f2259363362eb7ab3dde338cc1cfa7ff1bcbbaf589551c76ea1b3c0226a5 (image=quay.io/ceph/ceph:v19, name=dreamy_galois, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  8 05:46:02 np0005475493 ceph-mon[73572]: from='client.? 192.168.122.100:0/3794820163' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch
Oct  8 05:46:02 np0005475493 systemd[1]: var-lib-containers-storage-overlay-3cdf2017d01e363e54c73eb4446356611017dc9c6b2826887b374758d5aab149-merged.mount: Deactivated successfully.
Oct  8 05:46:02 np0005475493 podman[91567]: 2025-10-08 09:46:02.159573119 +0000 UTC m=+1.016231575 container remove b7f7f2259363362eb7ab3dde338cc1cfa7ff1bcbbaf589551c76ea1b3c0226a5 (image=quay.io/ceph/ceph:v19, name=dreamy_galois, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  8 05:46:02 np0005475493 systemd[1]: libpod-conmon-b7f7f2259363362eb7ab3dde338cc1cfa7ff1bcbbaf589551c76ea1b3c0226a5.scope: Deactivated successfully.
Oct  8 05:46:02 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 4.16 scrub starts
Oct  8 05:46:02 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 4.16 scrub ok
Oct  8 05:46:02 np0005475493 ceph-mgr[73869]: mgr[py] Loading python module 'devicehealth'
Oct  8 05:46:02 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:46:02.567+0000 7f67ee4c5140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct  8 05:46:02 np0005475493 ceph-mgr[73869]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct  8 05:46:02 np0005475493 ceph-mgr[73869]: mgr[py] Loading python module 'diskprediction_local'
Oct  8 05:46:02 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Oct  8 05:46:02 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Oct  8 05:46:02 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]:  from numpy import show_config as show_numpy_config
Oct  8 05:46:02 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:46:02.731+0000 7f67ee4c5140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct  8 05:46:02 np0005475493 ceph-mgr[73869]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct  8 05:46:02 np0005475493 ceph-mgr[73869]: mgr[py] Loading python module 'influx'
Oct  8 05:46:02 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:46:02.798+0000 7f67ee4c5140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Oct  8 05:46:02 np0005475493 ceph-mgr[73869]: mgr[py] Module influx has missing NOTIFY_TYPES member
Oct  8 05:46:02 np0005475493 ceph-mgr[73869]: mgr[py] Loading python module 'insights'
Oct  8 05:46:02 np0005475493 ceph-mgr[73869]: mgr[py] Loading python module 'iostat'
Oct  8 05:46:02 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:46:02.933+0000 7f67ee4c5140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct  8 05:46:02 np0005475493 ceph-mgr[73869]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct  8 05:46:02 np0005475493 ceph-mgr[73869]: mgr[py] Loading python module 'k8sevents'
Oct  8 05:46:03 np0005475493 python3[91707]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_mds.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  8 05:46:03 np0005475493 ceph-mon[73572]: from='client.? 192.168.122.100:0/3794820163' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished
Oct  8 05:46:03 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 5.17 scrub starts
Oct  8 05:46:03 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 5.17 scrub ok
Oct  8 05:46:03 np0005475493 ceph-mgr[73869]: mgr[py] Loading python module 'localpool'
Oct  8 05:46:03 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e45 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  8 05:46:03 np0005475493 ceph-mgr[73869]: mgr[py] Loading python module 'mds_autoscaler'
Oct  8 05:46:03 np0005475493 python3[91778]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759916762.8493073-33846-183084686373290/source dest=/tmp/ceph_mds.yml mode=0644 force=True follow=False _original_basename=ceph_mds.yml.j2 checksum=b1f36629bdb347469f4890c95dfdef5abc68c3ae backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:46:03 np0005475493 ceph-mgr[73869]: mgr[py] Loading python module 'mirroring'
Oct  8 05:46:03 np0005475493 ceph-mgr[73869]: mgr[py] Loading python module 'nfs'
Oct  8 05:46:03 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:46:03.918+0000 7f67ee4c5140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct  8 05:46:03 np0005475493 ceph-mgr[73869]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct  8 05:46:03 np0005475493 ceph-mgr[73869]: mgr[py] Loading python module 'orchestrator'
Oct  8 05:46:03 np0005475493 python3[91828]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 787292cc-8154-50c4-9e00-e9be3e817149 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   fs volume create cephfs '--placement=compute-0 compute-1 compute-2 '#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  8 05:46:04 np0005475493 podman[91829]: 2025-10-08 09:46:04.034960724 +0000 UTC m=+0.058028598 container create dabe41657d799e8f67b8f9a54c7c383817634f645f27fe32a289f73e6e9a1bca (image=quay.io/ceph/ceph:v19, name=exciting_kirch, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  8 05:46:04 np0005475493 systemd[1]: Started libpod-conmon-dabe41657d799e8f67b8f9a54c7c383817634f645f27fe32a289f73e6e9a1bca.scope.
Oct  8 05:46:04 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:46:04 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97a33d61aba89fc689df1b2cd2819fd6083f25731ce33ae8ce6a26a47e7a3d4a/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 05:46:04 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97a33d61aba89fc689df1b2cd2819fd6083f25731ce33ae8ce6a26a47e7a3d4a/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 05:46:04 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97a33d61aba89fc689df1b2cd2819fd6083f25731ce33ae8ce6a26a47e7a3d4a/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct  8 05:46:04 np0005475493 podman[91829]: 2025-10-08 09:46:04.01611295 +0000 UTC m=+0.039180804 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  8 05:46:04 np0005475493 podman[91829]: 2025-10-08 09:46:04.112328149 +0000 UTC m=+0.135396033 container init dabe41657d799e8f67b8f9a54c7c383817634f645f27fe32a289f73e6e9a1bca (image=quay.io/ceph/ceph:v19, name=exciting_kirch, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  8 05:46:04 np0005475493 podman[91829]: 2025-10-08 09:46:04.118473237 +0000 UTC m=+0.141541071 container start dabe41657d799e8f67b8f9a54c7c383817634f645f27fe32a289f73e6e9a1bca (image=quay.io/ceph/ceph:v19, name=exciting_kirch, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  8 05:46:04 np0005475493 podman[91829]: 2025-10-08 09:46:04.1215219 +0000 UTC m=+0.144589784 container attach dabe41657d799e8f67b8f9a54c7c383817634f645f27fe32a289f73e6e9a1bca (image=quay.io/ceph/ceph:v19, name=exciting_kirch, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct  8 05:46:04 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:46:04.141+0000 7f67ee4c5140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct  8 05:46:04 np0005475493 ceph-mgr[73869]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct  8 05:46:04 np0005475493 ceph-mgr[73869]: mgr[py] Loading python module 'osd_perf_query'
Oct  8 05:46:04 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:46:04.213+0000 7f67ee4c5140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct  8 05:46:04 np0005475493 ceph-mgr[73869]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct  8 05:46:04 np0005475493 ceph-mgr[73869]: mgr[py] Loading python module 'osd_support'
Oct  8 05:46:04 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 6.14 scrub starts
Oct  8 05:46:04 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 6.14 scrub ok
Oct  8 05:46:04 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:46:04.277+0000 7f67ee4c5140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct  8 05:46:04 np0005475493 ceph-mgr[73869]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct  8 05:46:04 np0005475493 ceph-mgr[73869]: mgr[py] Loading python module 'pg_autoscaler'
Oct  8 05:46:04 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:46:04.352+0000 7f67ee4c5140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct  8 05:46:04 np0005475493 ceph-mgr[73869]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct  8 05:46:04 np0005475493 ceph-mgr[73869]: mgr[py] Loading python module 'progress'
Oct  8 05:46:04 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:46:04.423+0000 7f67ee4c5140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Oct  8 05:46:04 np0005475493 ceph-mgr[73869]: mgr[py] Module progress has missing NOTIFY_TYPES member
Oct  8 05:46:04 np0005475493 ceph-mgr[73869]: mgr[py] Loading python module 'prometheus'
Oct  8 05:46:04 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:46:04.755+0000 7f67ee4c5140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct  8 05:46:04 np0005475493 ceph-mgr[73869]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct  8 05:46:04 np0005475493 ceph-mgr[73869]: mgr[py] Loading python module 'rbd_support'
Oct  8 05:46:04 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:46:04.851+0000 7f67ee4c5140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct  8 05:46:04 np0005475493 ceph-mgr[73869]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct  8 05:46:04 np0005475493 ceph-mgr[73869]: mgr[py] Loading python module 'restful'
Oct  8 05:46:05 np0005475493 ceph-mgr[73869]: mgr[py] Loading python module 'rgw'
Oct  8 05:46:05 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 3.12 deep-scrub starts
Oct  8 05:46:05 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 3.12 deep-scrub ok
Oct  8 05:46:05 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:46:05.290+0000 7f67ee4c5140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct  8 05:46:05 np0005475493 ceph-mgr[73869]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct  8 05:46:05 np0005475493 ceph-mgr[73869]: mgr[py] Loading python module 'rook'
Oct  8 05:46:05 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:46:05.850+0000 7f67ee4c5140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Oct  8 05:46:05 np0005475493 ceph-mgr[73869]: mgr[py] Module rook has missing NOTIFY_TYPES member
Oct  8 05:46:05 np0005475493 ceph-mgr[73869]: mgr[py] Loading python module 'selftest'
Oct  8 05:46:05 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:46:05.922+0000 7f67ee4c5140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct  8 05:46:05 np0005475493 ceph-mgr[73869]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct  8 05:46:05 np0005475493 ceph-mgr[73869]: mgr[py] Loading python module 'snap_schedule'
Oct  8 05:46:06 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:46:06.006+0000 7f67ee4c5140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Oct  8 05:46:06 np0005475493 ceph-mgr[73869]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Oct  8 05:46:06 np0005475493 ceph-mgr[73869]: mgr[py] Loading python module 'stats'
Oct  8 05:46:06 np0005475493 ceph-mgr[73869]: mgr[py] Loading python module 'status'
Oct  8 05:46:06 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:46:06.156+0000 7f67ee4c5140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Oct  8 05:46:06 np0005475493 ceph-mgr[73869]: mgr[py] Module status has missing NOTIFY_TYPES member
Oct  8 05:46:06 np0005475493 ceph-mgr[73869]: mgr[py] Loading python module 'telegraf'
Oct  8 05:46:06 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 5.14 scrub starts
Oct  8 05:46:06 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:46:06.226+0000 7f67ee4c5140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct  8 05:46:06 np0005475493 ceph-mgr[73869]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct  8 05:46:06 np0005475493 ceph-mgr[73869]: mgr[py] Loading python module 'telemetry'
Oct  8 05:46:06 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 5.14 scrub ok
Oct  8 05:46:06 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:46:06.394+0000 7f67ee4c5140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct  8 05:46:06 np0005475493 ceph-mgr[73869]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct  8 05:46:06 np0005475493 ceph-mgr[73869]: mgr[py] Loading python module 'test_orchestrator'
Oct  8 05:46:06 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:46:06.615+0000 7f67ee4c5140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct  8 05:46:06 np0005475493 ceph-mgr[73869]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct  8 05:46:06 np0005475493 ceph-mgr[73869]: mgr[py] Loading python module 'volumes'
Oct  8 05:46:06 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:46:06.882+0000 7f67ee4c5140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct  8 05:46:06 np0005475493 ceph-mgr[73869]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct  8 05:46:06 np0005475493 ceph-mgr[73869]: mgr[py] Loading python module 'zabbix'
Oct  8 05:46:06 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:46:06.951+0000 7f67ee4c5140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct  8 05:46:06 np0005475493 ceph-mgr[73869]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct  8 05:46:06 np0005475493 ceph-mon[73572]: log_channel(cluster) log [INF] : Active manager daemon compute-0.ixicfj restarted
Oct  8 05:46:06 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e45 do_prune osdmap full prune enabled
Oct  8 05:46:06 np0005475493 ceph-mon[73572]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.ixicfj
Oct  8 05:46:06 np0005475493 ceph-mgr[73869]: ms_deliver_dispatch: unhandled message 0x55617f189860 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Oct  8 05:46:06 np0005475493 ceph-mgr[73869]: mgr handle_mgr_map respawning because set of enabled modules changed!
Oct  8 05:46:06 np0005475493 ceph-mgr[73869]: mgr respawn  e: '/usr/bin/ceph-mgr'
Oct  8 05:46:06 np0005475493 ceph-mgr[73869]: mgr respawn  0: '/usr/bin/ceph-mgr'
Oct  8 05:46:06 np0005475493 ceph-mgr[73869]: mgr respawn  1: '-n'
Oct  8 05:46:06 np0005475493 ceph-mgr[73869]: mgr respawn  2: 'mgr.compute-0.ixicfj'
Oct  8 05:46:06 np0005475493 ceph-mgr[73869]: mgr respawn  3: '-f'
Oct  8 05:46:06 np0005475493 ceph-mgr[73869]: mgr respawn  4: '--setuser'
Oct  8 05:46:06 np0005475493 ceph-mgr[73869]: mgr respawn  5: 'ceph'
Oct  8 05:46:06 np0005475493 ceph-mgr[73869]: mgr respawn  6: '--setgroup'
Oct  8 05:46:06 np0005475493 ceph-mgr[73869]: mgr respawn  7: 'ceph'
Oct  8 05:46:06 np0005475493 ceph-mgr[73869]: mgr respawn  8: '--default-log-to-file=false'
Oct  8 05:46:06 np0005475493 ceph-mgr[73869]: mgr respawn  9: '--default-log-to-journald=true'
Oct  8 05:46:06 np0005475493 ceph-mgr[73869]: mgr respawn  10: '--default-log-to-stderr=false'
Oct  8 05:46:06 np0005475493 ceph-mgr[73869]: mgr respawn respawning with exe /usr/bin/ceph-mgr
Oct  8 05:46:06 np0005475493 ceph-mgr[73869]: mgr respawn  exe_path /proc/self/exe
Oct  8 05:46:06 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e46 e46: 3 total, 3 up, 3 in
Oct  8 05:46:06 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e46: 3 total, 3 up, 3 in
Oct  8 05:46:06 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : mgrmap e20: compute-0.ixicfj(active, starting, since 0.0347759s), standbys: compute-1.swlvov, compute-2.mtagwx
Oct  8 05:46:07 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.swlvov restarted
Oct  8 05:46:07 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.swlvov started
Oct  8 05:46:07 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ignoring --setuser ceph since I am not root
Oct  8 05:46:07 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ignoring --setgroup ceph since I am not root
Oct  8 05:46:07 np0005475493 ceph-mgr[73869]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Oct  8 05:46:07 np0005475493 ceph-mgr[73869]: pidfile_write: ignore empty --pid-file
Oct  8 05:46:07 np0005475493 ceph-mgr[73869]: mgr[py] Loading python module 'alerts'
Oct  8 05:46:07 np0005475493 ceph-mgr[73869]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct  8 05:46:07 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:46:07.170+0000 7f1a88fc9140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct  8 05:46:07 np0005475493 ceph-mgr[73869]: mgr[py] Loading python module 'balancer'
Oct  8 05:46:07 np0005475493 ceph-mon[73572]: Active manager daemon compute-0.ixicfj restarted
Oct  8 05:46:07 np0005475493 ceph-mon[73572]: Activating manager daemon compute-0.ixicfj
Oct  8 05:46:07 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 6.16 scrub starts
Oct  8 05:46:07 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 6.16 scrub ok
Oct  8 05:46:07 np0005475493 ceph-mgr[73869]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct  8 05:46:07 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:46:07.279+0000 7f1a88fc9140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct  8 05:46:07 np0005475493 ceph-mgr[73869]: mgr[py] Loading python module 'cephadm'
Oct  8 05:46:07 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.mtagwx restarted
Oct  8 05:46:07 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.mtagwx started
Oct  8 05:46:07 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : mgrmap e21: compute-0.ixicfj(active, starting, since 1.04309s), standbys: compute-2.mtagwx, compute-1.swlvov
Oct  8 05:46:08 np0005475493 ceph-mgr[73869]: mgr[py] Loading python module 'crash'
Oct  8 05:46:08 np0005475493 ceph-mgr[73869]: mgr[py] Module crash has missing NOTIFY_TYPES member
Oct  8 05:46:08 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:46:08.106+0000 7f1a88fc9140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Oct  8 05:46:08 np0005475493 ceph-mgr[73869]: mgr[py] Loading python module 'dashboard'
Oct  8 05:46:08 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 6.11 scrub starts
Oct  8 05:46:08 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 6.11 scrub ok
Oct  8 05:46:08 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  8 05:46:08 np0005475493 ceph-mgr[73869]: mgr[py] Loading python module 'devicehealth'
Oct  8 05:46:08 np0005475493 ceph-mgr[73869]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct  8 05:46:08 np0005475493 ceph-mgr[73869]: mgr[py] Loading python module 'diskprediction_local'
Oct  8 05:46:08 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:46:08.718+0000 7f1a88fc9140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct  8 05:46:08 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Oct  8 05:46:08 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Oct  8 05:46:08 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]:  from numpy import show_config as show_numpy_config
Oct  8 05:46:08 np0005475493 ceph-mgr[73869]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct  8 05:46:08 np0005475493 ceph-mgr[73869]: mgr[py] Loading python module 'influx'
Oct  8 05:46:08 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:46:08.877+0000 7f1a88fc9140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct  8 05:46:08 np0005475493 ceph-mgr[73869]: mgr[py] Module influx has missing NOTIFY_TYPES member
Oct  8 05:46:08 np0005475493 ceph-mgr[73869]: mgr[py] Loading python module 'insights'
Oct  8 05:46:08 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:46:08.949+0000 7f1a88fc9140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Oct  8 05:46:09 np0005475493 ceph-mgr[73869]: mgr[py] Loading python module 'iostat'
Oct  8 05:46:09 np0005475493 ceph-mgr[73869]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct  8 05:46:09 np0005475493 ceph-mgr[73869]: mgr[py] Loading python module 'k8sevents'
Oct  8 05:46:09 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:46:09.085+0000 7f1a88fc9140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct  8 05:46:09 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 4.12 deep-scrub starts
Oct  8 05:46:09 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 4.12 deep-scrub ok
Oct  8 05:46:09 np0005475493 ceph-mgr[73869]: mgr[py] Loading python module 'localpool'
Oct  8 05:46:09 np0005475493 ceph-mgr[73869]: mgr[py] Loading python module 'mds_autoscaler'
Oct  8 05:46:09 np0005475493 ceph-mgr[73869]: mgr[py] Loading python module 'mirroring'
Oct  8 05:46:09 np0005475493 ceph-mgr[73869]: mgr[py] Loading python module 'nfs'
Oct  8 05:46:10 np0005475493 ceph-mgr[73869]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct  8 05:46:10 np0005475493 ceph-mgr[73869]: mgr[py] Loading python module 'orchestrator'
Oct  8 05:46:10 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:46:10.091+0000 7f1a88fc9140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct  8 05:46:10 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 6.10 scrub starts
Oct  8 05:46:10 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 6.10 scrub ok
Oct  8 05:46:10 np0005475493 ceph-mgr[73869]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct  8 05:46:10 np0005475493 ceph-mgr[73869]: mgr[py] Loading python module 'osd_perf_query'
Oct  8 05:46:10 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:46:10.326+0000 7f1a88fc9140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct  8 05:46:10 np0005475493 ceph-mgr[73869]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct  8 05:46:10 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:46:10.397+0000 7f1a88fc9140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct  8 05:46:10 np0005475493 ceph-mgr[73869]: mgr[py] Loading python module 'osd_support'
Oct  8 05:46:10 np0005475493 ceph-mgr[73869]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct  8 05:46:10 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:46:10.460+0000 7f1a88fc9140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct  8 05:46:10 np0005475493 ceph-mgr[73869]: mgr[py] Loading python module 'pg_autoscaler'
Oct  8 05:46:10 np0005475493 ceph-mgr[73869]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct  8 05:46:10 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:46:10.539+0000 7f1a88fc9140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct  8 05:46:10 np0005475493 ceph-mgr[73869]: mgr[py] Loading python module 'progress'
Oct  8 05:46:10 np0005475493 ceph-mgr[73869]: mgr[py] Module progress has missing NOTIFY_TYPES member
Oct  8 05:46:10 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:46:10.618+0000 7f1a88fc9140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Oct  8 05:46:10 np0005475493 ceph-mgr[73869]: mgr[py] Loading python module 'prometheus'
Oct  8 05:46:10 np0005475493 ceph-mgr[73869]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct  8 05:46:10 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:46:10.970+0000 7f1a88fc9140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct  8 05:46:10 np0005475493 ceph-mgr[73869]: mgr[py] Loading python module 'rbd_support'
Oct  8 05:46:11 np0005475493 systemd[1]: Stopping User Manager for UID 42477...
Oct  8 05:46:11 np0005475493 systemd[74898]: Activating special unit Exit the Session...
Oct  8 05:46:11 np0005475493 systemd[74898]: Stopped target Main User Target.
Oct  8 05:46:11 np0005475493 systemd[74898]: Stopped target Basic System.
Oct  8 05:46:11 np0005475493 systemd[74898]: Stopped target Paths.
Oct  8 05:46:11 np0005475493 systemd[74898]: Stopped target Sockets.
Oct  8 05:46:11 np0005475493 systemd[74898]: Stopped target Timers.
Oct  8 05:46:11 np0005475493 systemd[74898]: Stopped Mark boot as successful after the user session has run 2 minutes.
Oct  8 05:46:11 np0005475493 systemd[74898]: Stopped Daily Cleanup of User's Temporary Directories.
Oct  8 05:46:11 np0005475493 systemd[74898]: Closed D-Bus User Message Bus Socket.
Oct  8 05:46:11 np0005475493 systemd[74898]: Stopped Create User's Volatile Files and Directories.
Oct  8 05:46:11 np0005475493 systemd[74898]: Removed slice User Application Slice.
Oct  8 05:46:11 np0005475493 systemd[74898]: Reached target Shutdown.
Oct  8 05:46:11 np0005475493 systemd[74898]: Finished Exit the Session.
Oct  8 05:46:11 np0005475493 systemd[74898]: Reached target Exit the Session.
Oct  8 05:46:11 np0005475493 systemd[1]: user@42477.service: Deactivated successfully.
Oct  8 05:46:11 np0005475493 systemd[1]: Stopped User Manager for UID 42477.
Oct  8 05:46:11 np0005475493 ceph-mgr[73869]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct  8 05:46:11 np0005475493 ceph-mgr[73869]: mgr[py] Loading python module 'restful'
Oct  8 05:46:11 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:46:11.083+0000 7f1a88fc9140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct  8 05:46:11 np0005475493 systemd[1]: Stopping User Runtime Directory /run/user/42477...
Oct  8 05:46:11 np0005475493 systemd[1]: run-user-42477.mount: Deactivated successfully.
Oct  8 05:46:11 np0005475493 systemd[1]: user-runtime-dir@42477.service: Deactivated successfully.
Oct  8 05:46:11 np0005475493 systemd[1]: Stopped User Runtime Directory /run/user/42477.
Oct  8 05:46:11 np0005475493 systemd[1]: Removed slice User Slice of UID 42477.
Oct  8 05:46:11 np0005475493 systemd[1]: user-42477.slice: Consumed 32.471s CPU time.
Oct  8 05:46:11 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 4.11 scrub starts
Oct  8 05:46:11 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 4.11 scrub ok
Oct  8 05:46:11 np0005475493 ceph-mgr[73869]: mgr[py] Loading python module 'rgw'
Oct  8 05:46:11 np0005475493 ceph-mgr[73869]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct  8 05:46:11 np0005475493 ceph-mgr[73869]: mgr[py] Loading python module 'rook'
Oct  8 05:46:11 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:46:11.502+0000 7f1a88fc9140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct  8 05:46:12 np0005475493 ceph-mgr[73869]: mgr[py] Module rook has missing NOTIFY_TYPES member
Oct  8 05:46:12 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:46:12.056+0000 7f1a88fc9140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Oct  8 05:46:12 np0005475493 ceph-mgr[73869]: mgr[py] Loading python module 'selftest'
Oct  8 05:46:12 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 6.13 scrub starts
Oct  8 05:46:12 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 6.13 scrub ok
Oct  8 05:46:12 np0005475493 ceph-mgr[73869]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct  8 05:46:12 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:46:12.127+0000 7f1a88fc9140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct  8 05:46:12 np0005475493 ceph-mgr[73869]: mgr[py] Loading python module 'snap_schedule'
Oct  8 05:46:12 np0005475493 ceph-mgr[73869]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Oct  8 05:46:12 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:46:12.206+0000 7f1a88fc9140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Oct  8 05:46:12 np0005475493 ceph-mgr[73869]: mgr[py] Loading python module 'stats'
Oct  8 05:46:12 np0005475493 ceph-mgr[73869]: mgr[py] Loading python module 'status'
Oct  8 05:46:12 np0005475493 ceph-mgr[73869]: mgr[py] Module status has missing NOTIFY_TYPES member
Oct  8 05:46:12 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:46:12.356+0000 7f1a88fc9140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Oct  8 05:46:12 np0005475493 ceph-mgr[73869]: mgr[py] Loading python module 'telegraf'
Oct  8 05:46:12 np0005475493 ceph-mgr[73869]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct  8 05:46:12 np0005475493 ceph-mgr[73869]: mgr[py] Loading python module 'telemetry'
Oct  8 05:46:12 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:46:12.424+0000 7f1a88fc9140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct  8 05:46:12 np0005475493 ceph-mgr[73869]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct  8 05:46:12 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:46:12.581+0000 7f1a88fc9140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct  8 05:46:12 np0005475493 ceph-mgr[73869]: mgr[py] Loading python module 'test_orchestrator'
Oct  8 05:46:12 np0005475493 ceph-mgr[73869]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct  8 05:46:12 np0005475493 ceph-mgr[73869]: mgr[py] Loading python module 'volumes'
Oct  8 05:46:12 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:46:12.794+0000 7f1a88fc9140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct  8 05:46:13 np0005475493 ceph-mgr[73869]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct  8 05:46:13 np0005475493 ceph-mgr[73869]: mgr[py] Loading python module 'zabbix'
Oct  8 05:46:13 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:46:13.048+0000 7f1a88fc9140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct  8 05:46:13 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 3.17 scrub starts
Oct  8 05:46:13 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 3.17 scrub ok
Oct  8 05:46:13 np0005475493 ceph-mgr[73869]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct  8 05:46:13 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:46:13.122+0000 7f1a88fc9140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct  8 05:46:13 np0005475493 ceph-mon[73572]: log_channel(cluster) log [INF] : Active manager daemon compute-0.ixicfj restarted
Oct  8 05:46:13 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e46 do_prune osdmap full prune enabled
Oct  8 05:46:13 np0005475493 ceph-mon[73572]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.ixicfj
Oct  8 05:46:13 np0005475493 ceph-mgr[73869]: ms_deliver_dispatch: unhandled message 0x562632431860 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Oct  8 05:46:13 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e47 e47: 3 total, 3 up, 3 in
Oct  8 05:46:13 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e47: 3 total, 3 up, 3 in
Oct  8 05:46:13 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : mgrmap e22: compute-0.ixicfj(active, starting, since 0.0305495s), standbys: compute-2.mtagwx, compute-1.swlvov
Oct  8 05:46:13 np0005475493 ceph-mgr[73869]: mgr handle_mgr_map Activating!
Oct  8 05:46:13 np0005475493 ceph-mgr[73869]: mgr handle_mgr_map I am now activating
Oct  8 05:46:13 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Oct  8 05:46:13 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Oct  8 05:46:13 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Oct  8 05:46:13 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct  8 05:46:13 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Oct  8 05:46:13 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Oct  8 05:46:13 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.ixicfj", "id": "compute-0.ixicfj"} v 0)
Oct  8 05:46:13 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "mgr metadata", "who": "compute-0.ixicfj", "id": "compute-0.ixicfj"}]: dispatch
Oct  8 05:46:13 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-2.mtagwx", "id": "compute-2.mtagwx"} v 0)
Oct  8 05:46:13 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "mgr metadata", "who": "compute-2.mtagwx", "id": "compute-2.mtagwx"}]: dispatch
Oct  8 05:46:13 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-1.swlvov", "id": "compute-1.swlvov"} v 0)
Oct  8 05:46:13 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "mgr metadata", "who": "compute-1.swlvov", "id": "compute-1.swlvov"}]: dispatch
Oct  8 05:46:13 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Oct  8 05:46:13 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct  8 05:46:13 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Oct  8 05:46:13 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct  8 05:46:13 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Oct  8 05:46:13 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct  8 05:46:13 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata"} v 0)
Oct  8 05:46:13 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "mds metadata"}]: dispatch
Oct  8 05:46:13 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).mds e1 all = 1
Oct  8 05:46:13 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata"} v 0)
Oct  8 05:46:13 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd metadata"}]: dispatch
Oct  8 05:46:13 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata"} v 0)
Oct  8 05:46:13 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "mon metadata"}]: dispatch
Oct  8 05:46:13 np0005475493 ceph-mgr[73869]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  8 05:46:13 np0005475493 ceph-mgr[73869]: mgr load Constructed class from module: balancer
Oct  8 05:46:13 np0005475493 ceph-mgr[73869]: [balancer INFO root] Starting
Oct  8 05:46:13 np0005475493 ceph-mgr[73869]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  8 05:46:13 np0005475493 ceph-mon[73572]: log_channel(cluster) log [INF] : Manager daemon compute-0.ixicfj is now available
Oct  8 05:46:13 np0005475493 ceph-mgr[73869]: [balancer INFO root] Optimize plan auto_2025-10-08_09:46:13
Oct  8 05:46:13 np0005475493 ceph-mgr[73869]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  8 05:46:13 np0005475493 ceph-mgr[73869]: [balancer INFO root] Some PGs (1.000000) are unknown; try again later
Oct  8 05:46:13 np0005475493 ceph-mgr[73869]: mgr load Constructed class from module: cephadm
Oct  8 05:46:13 np0005475493 ceph-mgr[73869]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  8 05:46:13 np0005475493 ceph-mgr[73869]: mgr load Constructed class from module: crash
Oct  8 05:46:13 np0005475493 ceph-mgr[73869]: [dashboard DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  8 05:46:13 np0005475493 ceph-mgr[73869]: mgr load Constructed class from module: dashboard
Oct  8 05:46:13 np0005475493 ceph-mgr[73869]: [dashboard INFO access_control] Loading user roles DB version=2
Oct  8 05:46:13 np0005475493 ceph-mgr[73869]: [dashboard INFO sso] Loading SSO DB version=1
Oct  8 05:46:13 np0005475493 ceph-mgr[73869]: [dashboard INFO root] server: ssl=no host=192.168.122.100 port=8443
Oct  8 05:46:13 np0005475493 ceph-mgr[73869]: [dashboard INFO root] Configured CherryPy, starting engine...
Oct  8 05:46:13 np0005475493 ceph-mgr[73869]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  8 05:46:13 np0005475493 ceph-mgr[73869]: mgr load Constructed class from module: devicehealth
Oct  8 05:46:13 np0005475493 ceph-mgr[73869]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  8 05:46:13 np0005475493 ceph-mgr[73869]: mgr load Constructed class from module: iostat
Oct  8 05:46:13 np0005475493 ceph-mgr[73869]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  8 05:46:13 np0005475493 ceph-mgr[73869]: mgr load Constructed class from module: nfs
Oct  8 05:46:13 np0005475493 ceph-mgr[73869]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  8 05:46:13 np0005475493 ceph-mgr[73869]: mgr load Constructed class from module: orchestrator
Oct  8 05:46:13 np0005475493 ceph-mgr[73869]: [devicehealth INFO root] Starting
Oct  8 05:46:13 np0005475493 ceph-mgr[73869]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  8 05:46:13 np0005475493 ceph-mgr[73869]: mgr load Constructed class from module: pg_autoscaler
Oct  8 05:46:13 np0005475493 ceph-mgr[73869]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  8 05:46:13 np0005475493 ceph-mgr[73869]: mgr load Constructed class from module: progress
Oct  8 05:46:13 np0005475493 ceph-mgr[73869]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  8 05:46:13 np0005475493 ceph-mgr[73869]: [progress INFO root] Loading...
Oct  8 05:46:13 np0005475493 ceph-mgr[73869]: [progress INFO root] Loaded [<progress.module.GhostEvent object at 0x7f1a0eb65eb0>, <progress.module.GhostEvent object at 0x7f1a0eb884f0>, <progress.module.GhostEvent object at 0x7f1a0eb88700>, <progress.module.GhostEvent object at 0x7f1a0eb884c0>, <progress.module.GhostEvent object at 0x7f1a0eb886d0>, <progress.module.GhostEvent object at 0x7f1a18431be0>, <progress.module.GhostEvent object at 0x7f1a133b7a00>, <progress.module.GhostEvent object at 0x7f1a0eb940d0>, <progress.module.GhostEvent object at 0x7f1a0eb94a60>, <progress.module.GhostEvent object at 0x7f1a0eb94a90>, <progress.module.GhostEvent object at 0x7f1a0eb94ac0>, <progress.module.GhostEvent object at 0x7f1a0eb94af0>] historic events
Oct  8 05:46:13 np0005475493 ceph-mgr[73869]: [progress INFO root] Loaded OSDMap, ready.
Oct  8 05:46:13 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] _maybe_adjust
Oct  8 05:46:13 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] recovery thread starting
Oct  8 05:46:13 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] starting setup
Oct  8 05:46:13 np0005475493 ceph-mgr[73869]: mgr load Constructed class from module: rbd_support
Oct  8 05:46:13 np0005475493 ceph-mgr[73869]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  8 05:46:13 np0005475493 ceph-mgr[73869]: mgr load Constructed class from module: restful
Oct  8 05:46:13 np0005475493 ceph-mgr[73869]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  8 05:46:13 np0005475493 ceph-mgr[73869]: mgr load Constructed class from module: status
Oct  8 05:46:13 np0005475493 ceph-mgr[73869]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  8 05:46:13 np0005475493 ceph-mgr[73869]: mgr load Constructed class from module: telemetry
Oct  8 05:46:13 np0005475493 ceph-mgr[73869]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  8 05:46:13 np0005475493 ceph-mgr[73869]: [restful INFO root] server_addr: :: server_port: 8003
Oct  8 05:46:13 np0005475493 ceph-mgr[73869]: [restful WARNING root] server not running: no certificate configured
Oct  8 05:46:13 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.ixicfj/mirror_snapshot_schedule"} v 0)
Oct  8 05:46:13 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.ixicfj/mirror_snapshot_schedule"}]: dispatch
Oct  8 05:46:13 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  8 05:46:13 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  8 05:46:13 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  8 05:46:13 np0005475493 ceph-mgr[73869]: mgr load Constructed class from module: volumes
Oct  8 05:46:13 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  8 05:46:13 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  8 05:46:13 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Oct  8 05:46:13 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] PerfHandler: starting
Oct  8 05:46:13 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_task_task: vms, start_after=
Oct  8 05:46:13 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_task_task: volumes, start_after=
Oct  8 05:46:13 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_task_task: backups, start_after=
Oct  8 05:46:13 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_task_task: images, start_after=
Oct  8 05:46:13 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] TaskHandler: starting
Oct  8 05:46:13 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.swlvov restarted
Oct  8 05:46:13 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.swlvov started
Oct  8 05:46:13 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.ixicfj/trash_purge_schedule"} v 0)
Oct  8 05:46:13 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.ixicfj/trash_purge_schedule"}]: dispatch
Oct  8 05:46:13 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  8 05:46:13 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  8 05:46:13 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  8 05:46:13 np0005475493 ceph-mon[73572]: Active manager daemon compute-0.ixicfj restarted
Oct  8 05:46:13 np0005475493 ceph-mon[73572]: Activating manager daemon compute-0.ixicfj
Oct  8 05:46:13 np0005475493 ceph-mon[73572]: Manager daemon compute-0.ixicfj is now available
Oct  8 05:46:13 np0005475493 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.ixicfj/mirror_snapshot_schedule"}]: dispatch
Oct  8 05:46:13 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  8 05:46:13 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  8 05:46:13 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Oct  8 05:46:13 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] setup complete
Oct  8 05:46:13 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  8 05:46:13 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFS -> /api/cephfs
Oct  8 05:46:13 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFsUi -> /ui-api/cephfs
Oct  8 05:46:13 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolume -> /api/cephfs/subvolume
Oct  8 05:46:13 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolumeGroups -> /api/cephfs/subvolume/group
Oct  8 05:46:13 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolumeSnapshots -> /api/cephfs/subvolume/snapshot
Oct  8 05:46:13 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFsSnapshotClone -> /api/cephfs/subvolume/snapshot/clone
Oct  8 05:46:13 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSnapshotSchedule -> /api/cephfs/snapshot/schedule
Oct  8 05:46:13 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: IscsiUi -> /ui-api/iscsi
Oct  8 05:46:13 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Iscsi -> /api/iscsi
Oct  8 05:46:13 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: IscsiTarget -> /api/iscsi/target
Oct  8 05:46:13 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaCluster -> /api/nfs-ganesha/cluster
Oct  8 05:46:13 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaExports -> /api/nfs-ganesha/export
Oct  8 05:46:13 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaUi -> /ui-api/nfs-ganesha
Oct  8 05:46:13 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Orchestrator -> /ui-api/orchestrator
Oct  8 05:46:13 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Service -> /api/service
Oct  8 05:46:13 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroring -> /api/block/mirroring
Oct  8 05:46:13 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringSummary -> /api/block/mirroring/summary
Oct  8 05:46:13 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolMode -> /api/block/mirroring/pool
Oct  8 05:46:13 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolBootstrap -> /api/block/mirroring/pool/{pool_name}/bootstrap
Oct  8 05:46:13 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolPeer -> /api/block/mirroring/pool/{pool_name}/peer
Oct  8 05:46:13 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringStatus -> /ui-api/block/mirroring
Oct  8 05:46:13 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Pool -> /api/pool
Oct  8 05:46:13 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PoolUi -> /ui-api/pool
Oct  8 05:46:13 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RBDPool -> /api/pool
Oct  8 05:46:13 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Rbd -> /api/block/image
Oct  8 05:46:13 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdStatus -> /ui-api/block/rbd
Oct  8 05:46:13 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdSnapshot -> /api/block/image/{image_spec}/snap
Oct  8 05:46:13 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdTrash -> /api/block/image/trash
Oct  8 05:46:13 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdNamespace -> /api/block/pool/{pool_name}/namespace
Oct  8 05:46:13 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Rgw -> /ui-api/rgw
Oct  8 05:46:13 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwMultisiteStatus -> /ui-api/rgw/multisite
Oct  8 05:46:13 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwMultisiteController -> /api/rgw/multisite
Oct  8 05:46:13 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwDaemon -> /api/rgw/daemon
Oct  8 05:46:13 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwSite -> /api/rgw/site
Oct  8 05:46:13 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwBucket -> /api/rgw/bucket
Oct  8 05:46:13 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwBucketUi -> /ui-api/rgw/bucket
Oct  8 05:46:13 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwUser -> /api/rgw/user
Oct  8 05:46:13 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: rgwroles_CRUDClass -> /api/rgw/roles
Oct  8 05:46:13 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: rgwroles_CRUDClassMetadata -> /ui-api/rgw/roles
Oct  8 05:46:13 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwRealm -> /api/rgw/realm
Oct  8 05:46:13 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwZonegroup -> /api/rgw/zonegroup
Oct  8 05:46:13 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwZone -> /api/rgw/zone
Oct  8 05:46:13 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Auth -> /api/auth
Oct  8 05:46:13 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: clusteruser_CRUDClass -> /api/cluster/user
Oct  8 05:46:13 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: clusteruser_CRUDClassMetadata -> /ui-api/cluster/user
Oct  8 05:46:13 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Cluster -> /api/cluster
Oct  8 05:46:13 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ClusterUpgrade -> /api/cluster/upgrade
Oct  8 05:46:13 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ClusterConfiguration -> /api/cluster_conf
Oct  8 05:46:13 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CrushRule -> /api/crush_rule
Oct  8 05:46:13 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CrushRuleUi -> /ui-api/crush_rule
Oct  8 05:46:13 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Daemon -> /api/daemon
Oct  8 05:46:13 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Docs -> /docs
Oct  8 05:46:13 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ErasureCodeProfile -> /api/erasure_code_profile
Oct  8 05:46:13 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ErasureCodeProfileUi -> /ui-api/erasure_code_profile
Oct  8 05:46:13 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackController -> /api/feedback
Oct  8 05:46:13 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackApiController -> /api/feedback/api_key
Oct  8 05:46:13 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackUiController -> /ui-api/feedback/api_key
Oct  8 05:46:13 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FrontendLogging -> /ui-api/logging
Oct  8 05:46:13 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Grafana -> /api/grafana
Oct  8 05:46:13 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Host -> /api/host
Oct  8 05:46:13 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: HostUi -> /ui-api/host
Oct  8 05:46:13 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Health -> /api/health
Oct  8 05:46:13 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: HomeController -> /
Oct  8 05:46:13 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: LangsController -> /ui-api/langs
Oct  8 05:46:13 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: LoginController -> /ui-api/login
Oct  8 05:46:13 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Logs -> /api/logs
Oct  8 05:46:13 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MgrModules -> /api/mgr/module
Oct  8 05:46:13 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Monitor -> /api/monitor
Oct  8 05:46:13 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFGateway -> /api/nvmeof/gateway
Oct  8 05:46:13 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFSpdk -> /api/nvmeof/spdk
Oct  8 05:46:13 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFSubsystem -> /api/nvmeof/subsystem
Oct  8 05:46:13 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFListener -> /api/nvmeof/subsystem/{nqn}/listener
Oct  8 05:46:13 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFNamespace -> /api/nvmeof/subsystem/{nqn}/namespace
Oct  8 05:46:13 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFHost -> /api/nvmeof/subsystem/{nqn}/host
Oct  8 05:46:13 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFConnection -> /api/nvmeof/subsystem/{nqn}/connection
Oct  8 05:46:13 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFTcpUI -> /ui-api/nvmeof
Oct  8 05:46:13 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Osd -> /api/osd
Oct  8 05:46:13 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdUi -> /ui-api/osd
Oct  8 05:46:13 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdFlagsController -> /api/osd/flags
Oct  8 05:46:13 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MdsPerfCounter -> /api/perf_counters/mds
Oct  8 05:46:13 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MonPerfCounter -> /api/perf_counters/mon
Oct  8 05:46:13 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdPerfCounter -> /api/perf_counters/osd
Oct  8 05:46:13 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwPerfCounter -> /api/perf_counters/rgw
Oct  8 05:46:13 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirrorPerfCounter -> /api/perf_counters/rbd-mirror
Oct  8 05:46:13 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MgrPerfCounter -> /api/perf_counters/mgr
Oct  8 05:46:13 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: TcmuRunnerPerfCounter -> /api/perf_counters/tcmu-runner
Oct  8 05:46:13 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PerfCounters -> /api/perf_counters
Oct  8 05:46:13 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusReceiver -> /api/prometheus_receiver
Oct  8 05:46:13 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Prometheus -> /api/prometheus
Oct  8 05:46:13 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusNotifications -> /api/prometheus/notifications
Oct  8 05:46:13 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusSettings -> /ui-api/prometheus
Oct  8 05:46:13 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Role -> /api/role
Oct  8 05:46:13 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Scope -> /ui-api/scope
Oct  8 05:46:13 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Saml2 -> /auth/saml2
Oct  8 05:46:13 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Settings -> /api/settings
Oct  8 05:46:13 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: StandardSettings -> /ui-api/standard_settings
Oct  8 05:46:13 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Summary -> /api/summary
Oct  8 05:46:13 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Task -> /api/task
Oct  8 05:46:13 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Telemetry -> /api/telemetry
Oct  8 05:46:13 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: User -> /api/user
Oct  8 05:46:13 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: UserPasswordPolicy -> /api/user
Oct  8 05:46:13 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: UserChangePassword -> /api/user/{username}
Oct  8 05:46:13 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeatureTogglesEndpoint -> /api/feature_toggles
Oct  8 05:46:13 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MessageOfTheDay -> /ui-api/motd
Oct  8 05:46:13 np0005475493 systemd[1]: Created slice User Slice of UID 42477.
Oct  8 05:46:13 np0005475493 systemd[1]: Starting User Runtime Directory /run/user/42477...
Oct  8 05:46:13 np0005475493 systemd-logind[798]: New session 36 of user ceph-admin.
Oct  8 05:46:13 np0005475493 systemd[1]: Finished User Runtime Directory /run/user/42477.
Oct  8 05:46:13 np0005475493 systemd[1]: Starting User Manager for UID 42477...
Oct  8 05:46:13 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.module] Engine started.
Oct  8 05:46:13 np0005475493 systemd[92032]: Queued start job for default target Main User Target.
Oct  8 05:46:13 np0005475493 systemd[92032]: Created slice User Application Slice.
Oct  8 05:46:13 np0005475493 systemd[92032]: Started Mark boot as successful after the user session has run 2 minutes.
Oct  8 05:46:13 np0005475493 systemd[92032]: Started Daily Cleanup of User's Temporary Directories.
Oct  8 05:46:13 np0005475493 systemd[92032]: Reached target Paths.
Oct  8 05:46:13 np0005475493 systemd[92032]: Reached target Timers.
Oct  8 05:46:13 np0005475493 systemd[92032]: Starting D-Bus User Message Bus Socket...
Oct  8 05:46:13 np0005475493 systemd[92032]: Starting Create User's Volatile Files and Directories...
Oct  8 05:46:13 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.mtagwx restarted
Oct  8 05:46:13 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.mtagwx started
Oct  8 05:46:13 np0005475493 systemd[92032]: Listening on D-Bus User Message Bus Socket.
Oct  8 05:46:13 np0005475493 systemd[92032]: Reached target Sockets.
Oct  8 05:46:13 np0005475493 systemd[92032]: Finished Create User's Volatile Files and Directories.
Oct  8 05:46:13 np0005475493 systemd[92032]: Reached target Basic System.
Oct  8 05:46:13 np0005475493 systemd[92032]: Reached target Main User Target.
Oct  8 05:46:13 np0005475493 systemd[92032]: Startup finished in 120ms.
Oct  8 05:46:13 np0005475493 systemd[1]: Started User Manager for UID 42477.
Oct  8 05:46:13 np0005475493 systemd[1]: Started Session 36 of User ceph-admin.
Oct  8 05:46:14 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 4.10 scrub starts
Oct  8 05:46:14 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 4.10 scrub ok
Oct  8 05:46:14 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : mgrmap e23: compute-0.ixicfj(active, since 1.05266s), standbys: compute-2.mtagwx, compute-1.swlvov
Oct  8 05:46:14 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.14469 -' entity='client.admin' cmd=[{"prefix": "fs volume create", "name": "cephfs", "placement": "compute-0 compute-1 compute-2 ", "target": ["mon-mgr", ""]}]: dispatch
Oct  8 05:46:14 np0005475493 ceph-mgr[73869]: [volumes INFO volumes.module] Starting _cmd_fs_volume_create(name:cephfs, placement:compute-0 compute-1 compute-2 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Oct  8 05:46:14 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"} v 0)
Oct  8 05:46:14 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]: dispatch
Oct  8 05:46:14 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"} v 0)
Oct  8 05:46:14 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]: dispatch
Oct  8 05:46:14 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"} v 0)
Oct  8 05:46:14 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]: dispatch
Oct  8 05:46:14 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e47 do_prune osdmap full prune enabled
Oct  8 05:46:14 np0005475493 ceph-mon[73572]: log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Oct  8 05:46:14 np0005475493 ceph-mon[73572]: log_channel(cluster) log [WRN] : Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Oct  8 05:46:14 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mon-compute-0[73568]: 2025-10-08T09:46:14.190+0000 7f533f3cb640 -1 log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Oct  8 05:46:14 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v3: 197 pgs: 1 active+clean+scrubbing, 196 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Oct  8 05:46:14 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Oct  8 05:46:14 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).mds e2 new map
Oct  8 05:46:14 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).mds e2 print_map#012e2#012btime 2025-10-08T09:46:14:191872+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0112#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-10-08T09:46:14.191787+0000#012modified#0112025-10-08T09:46:14.191787+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012max_mds#0111#012in#011#012up#011{}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012qdb_cluster#011leader: 0 members: #012 #012 
Oct  8 05:46:14 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e48 e48: 3 total, 3 up, 3 in
Oct  8 05:46:14 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e48: 3 total, 3 up, 3 in
Oct  8 05:46:14 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : fsmap cephfs:0
Oct  8 05:46:14 np0005475493 ceph-mgr[73869]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Oct  8 05:46:14 np0005475493 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Oct  8 05:46:14 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Oct  8 05:46:14 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:46:14 np0005475493 ceph-mgr[73869]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_create(name:cephfs, placement:compute-0 compute-1 compute-2 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Oct  8 05:46:14 np0005475493 systemd[1]: libpod-dabe41657d799e8f67b8f9a54c7c383817634f645f27fe32a289f73e6e9a1bca.scope: Deactivated successfully.
Oct  8 05:46:14 np0005475493 podman[91829]: 2025-10-08 09:46:14.258552618 +0000 UTC m=+10.281620462 container died dabe41657d799e8f67b8f9a54c7c383817634f645f27fe32a289f73e6e9a1bca (image=quay.io/ceph/ceph:v19, name=exciting_kirch, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Oct  8 05:46:14 np0005475493 systemd[1]: var-lib-containers-storage-overlay-97a33d61aba89fc689df1b2cd2819fd6083f25731ce33ae8ce6a26a47e7a3d4a-merged.mount: Deactivated successfully.
Oct  8 05:46:14 np0005475493 podman[91829]: 2025-10-08 09:46:14.31675807 +0000 UTC m=+10.339825904 container remove dabe41657d799e8f67b8f9a54c7c383817634f645f27fe32a289f73e6e9a1bca (image=quay.io/ceph/ceph:v19, name=exciting_kirch, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  8 05:46:14 np0005475493 systemd[1]: libpod-conmon-dabe41657d799e8f67b8f9a54c7c383817634f645f27fe32a289f73e6e9a1bca.scope: Deactivated successfully.
Oct  8 05:46:14 np0005475493 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.ixicfj/trash_purge_schedule"}]: dispatch
Oct  8 05:46:14 np0005475493 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]: dispatch
Oct  8 05:46:14 np0005475493 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]: dispatch
Oct  8 05:46:14 np0005475493 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]: dispatch
Oct  8 05:46:14 np0005475493 ceph-mon[73572]: Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Oct  8 05:46:14 np0005475493 ceph-mon[73572]: Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Oct  8 05:46:14 np0005475493 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Oct  8 05:46:14 np0005475493 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:46:14 np0005475493 ceph-mgr[73869]: [cephadm INFO cherrypy.error] [08/Oct/2025:09:46:14] ENGINE Bus STARTING
Oct  8 05:46:14 np0005475493 ceph-mgr[73869]: log_channel(cephadm) log [INF] : [08/Oct/2025:09:46:14] ENGINE Bus STARTING
Oct  8 05:46:14 np0005475493 podman[92207]: 2025-10-08 09:46:14.598463438 +0000 UTC m=+0.056868613 container exec 01c666addd8584f02eb7d29040d97e45d8cb7f834fd134029c1f7cca550aeb9d (image=quay.io/ceph/ceph:v19, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid)
Oct  8 05:46:14 np0005475493 python3[92206]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 787292cc-8154-50c4-9e00-e9be3e817149 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  8 05:46:14 np0005475493 ceph-mgr[73869]: [cephadm INFO cherrypy.error] [08/Oct/2025:09:46:14] ENGINE Serving on https://192.168.122.100:7150
Oct  8 05:46:14 np0005475493 ceph-mgr[73869]: log_channel(cephadm) log [INF] : [08/Oct/2025:09:46:14] ENGINE Serving on https://192.168.122.100:7150
Oct  8 05:46:14 np0005475493 ceph-mgr[73869]: [cephadm INFO cherrypy.error] [08/Oct/2025:09:46:14] ENGINE Client ('192.168.122.100', 46310) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Oct  8 05:46:14 np0005475493 ceph-mgr[73869]: log_channel(cephadm) log [INF] : [08/Oct/2025:09:46:14] ENGINE Client ('192.168.122.100', 46310) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Oct  8 05:46:14 np0005475493 podman[92240]: 2025-10-08 09:46:14.70229179 +0000 UTC m=+0.045805256 container create c2e8bc1b2d0806c75c381c887df5cb4f0ff93d7366a6b62f936da5559412303f (image=quay.io/ceph/ceph:v19, name=nice_ramanujan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Oct  8 05:46:14 np0005475493 podman[92207]: 2025-10-08 09:46:14.733128269 +0000 UTC m=+0.191533444 container exec_died 01c666addd8584f02eb7d29040d97e45d8cb7f834fd134029c1f7cca550aeb9d (image=quay.io/ceph/ceph:v19, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-mon-compute-0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default)
Oct  8 05:46:14 np0005475493 systemd[1]: Started libpod-conmon-c2e8bc1b2d0806c75c381c887df5cb4f0ff93d7366a6b62f936da5559412303f.scope.
Oct  8 05:46:14 np0005475493 ceph-mgr[73869]: [cephadm INFO cherrypy.error] [08/Oct/2025:09:46:14] ENGINE Serving on http://192.168.122.100:8765
Oct  8 05:46:14 np0005475493 ceph-mgr[73869]: log_channel(cephadm) log [INF] : [08/Oct/2025:09:46:14] ENGINE Serving on http://192.168.122.100:8765
Oct  8 05:46:14 np0005475493 ceph-mgr[73869]: [cephadm INFO cherrypy.error] [08/Oct/2025:09:46:14] ENGINE Bus STARTED
Oct  8 05:46:14 np0005475493 ceph-mgr[73869]: log_channel(cephadm) log [INF] : [08/Oct/2025:09:46:14] ENGINE Bus STARTED
Oct  8 05:46:14 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:46:14 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fdc32ff34607e7de23007d8104f7c4bc37dad9bd2483cbe6d3cdb37a01ac3b7/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 05:46:14 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fdc32ff34607e7de23007d8104f7c4bc37dad9bd2483cbe6d3cdb37a01ac3b7/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 05:46:14 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fdc32ff34607e7de23007d8104f7c4bc37dad9bd2483cbe6d3cdb37a01ac3b7/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct  8 05:46:14 np0005475493 podman[92240]: 2025-10-08 09:46:14.68425131 +0000 UTC m=+0.027764826 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  8 05:46:14 np0005475493 podman[92240]: 2025-10-08 09:46:14.785574555 +0000 UTC m=+0.129088031 container init c2e8bc1b2d0806c75c381c887df5cb4f0ff93d7366a6b62f936da5559412303f (image=quay.io/ceph/ceph:v19, name=nice_ramanujan, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  8 05:46:14 np0005475493 podman[92240]: 2025-10-08 09:46:14.791792485 +0000 UTC m=+0.135305961 container start c2e8bc1b2d0806c75c381c887df5cb4f0ff93d7366a6b62f936da5559412303f (image=quay.io/ceph/ceph:v19, name=nice_ramanujan, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Oct  8 05:46:14 np0005475493 podman[92240]: 2025-10-08 09:46:14.79528396 +0000 UTC m=+0.138797476 container attach c2e8bc1b2d0806c75c381c887df5cb4f0ff93d7366a6b62f936da5559412303f (image=quay.io/ceph/ceph:v19, name=nice_ramanujan, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True)
Oct  8 05:46:15 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 3.18 scrub starts
Oct  8 05:46:15 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 3.18 scrub ok
Oct  8 05:46:15 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.14502 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Oct  8 05:46:15 np0005475493 ceph-mgr[73869]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Oct  8 05:46:15 np0005475493 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Oct  8 05:46:15 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Oct  8 05:46:15 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:46:15 np0005475493 nice_ramanujan[92277]: Scheduled mds.cephfs update...
Oct  8 05:46:15 np0005475493 systemd[1]: libpod-c2e8bc1b2d0806c75c381c887df5cb4f0ff93d7366a6b62f936da5559412303f.scope: Deactivated successfully.
Oct  8 05:46:15 np0005475493 podman[92240]: 2025-10-08 09:46:15.167748302 +0000 UTC m=+0.511261768 container died c2e8bc1b2d0806c75c381c887df5cb4f0ff93d7366a6b62f936da5559412303f (image=quay.io/ceph/ceph:v19, name=nice_ramanujan, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  8 05:46:15 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v5: 197 pgs: 1 active+clean+scrubbing, 196 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Oct  8 05:46:15 np0005475493 systemd[1]: var-lib-containers-storage-overlay-4fdc32ff34607e7de23007d8104f7c4bc37dad9bd2483cbe6d3cdb37a01ac3b7-merged.mount: Deactivated successfully.
Oct  8 05:46:15 np0005475493 podman[92240]: 2025-10-08 09:46:15.207851573 +0000 UTC m=+0.551365039 container remove c2e8bc1b2d0806c75c381c887df5cb4f0ff93d7366a6b62f936da5559412303f (image=quay.io/ceph/ceph:v19, name=nice_ramanujan, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  8 05:46:15 np0005475493 systemd[1]: libpod-conmon-c2e8bc1b2d0806c75c381c887df5cb4f0ff93d7366a6b62f936da5559412303f.scope: Deactivated successfully.
Oct  8 05:46:15 np0005475493 podman[92412]: 2025-10-08 09:46:15.242064935 +0000 UTC m=+0.046283871 container exec 0dbea514cc83cec397480471ade0bebb407738deaf60dfa42fe4b53fba64588f (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  8 05:46:15 np0005475493 podman[92412]: 2025-10-08 09:46:15.250278065 +0000 UTC m=+0.054496981 container exec_died 0dbea514cc83cec397480471ade0bebb407738deaf60dfa42fe4b53fba64588f (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  8 05:46:15 np0005475493 ceph-mgr[73869]: [devicehealth INFO root] Check health
Oct  8 05:46:15 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  8 05:46:15 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:46:15 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  8 05:46:15 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:46:15 np0005475493 ceph-mon[73572]: Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Oct  8 05:46:15 np0005475493 ceph-mon[73572]: [08/Oct/2025:09:46:14] ENGINE Bus STARTING
Oct  8 05:46:15 np0005475493 ceph-mon[73572]: [08/Oct/2025:09:46:14] ENGINE Serving on https://192.168.122.100:7150
Oct  8 05:46:15 np0005475493 ceph-mon[73572]: [08/Oct/2025:09:46:14] ENGINE Client ('192.168.122.100', 46310) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Oct  8 05:46:15 np0005475493 ceph-mon[73572]: [08/Oct/2025:09:46:14] ENGINE Serving on http://192.168.122.100:8765
Oct  8 05:46:15 np0005475493 ceph-mon[73572]: [08/Oct/2025:09:46:14] ENGINE Bus STARTED
Oct  8 05:46:15 np0005475493 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:46:15 np0005475493 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:46:15 np0005475493 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:46:15 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct  8 05:46:15 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : mgrmap e24: compute-0.ixicfj(active, since 2s), standbys: compute-2.mtagwx, compute-1.swlvov
Oct  8 05:46:15 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:46:15 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct  8 05:46:15 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:46:15 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct  8 05:46:15 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:46:15 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct  8 05:46:15 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:46:15 np0005475493 python3[92514]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 787292cc-8154-50c4-9e00-e9be3e817149 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   nfs cluster create cephfs --ingress --virtual-ip=192.168.122.2/24 --ingress-mode=haproxy-protocol '--placement=compute-0 compute-1 compute-2 '#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  8 05:46:15 np0005475493 podman[92541]: 2025-10-08 09:46:15.587685329 +0000 UTC m=+0.061137683 container create 351f425ba01bd7f6ba22abaa69d5b66c000b02436de67e4c0db528c1a09fedb0 (image=quay.io/ceph/ceph:v19, name=laughing_elbakyan, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  8 05:46:15 np0005475493 systemd[1]: Started libpod-conmon-351f425ba01bd7f6ba22abaa69d5b66c000b02436de67e4c0db528c1a09fedb0.scope.
Oct  8 05:46:15 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:46:15 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/702428090e6532e49ec22bf0dde632ad0302dd9e75332683e974d41e17512554/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 05:46:15 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/702428090e6532e49ec22bf0dde632ad0302dd9e75332683e974d41e17512554/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 05:46:15 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/702428090e6532e49ec22bf0dde632ad0302dd9e75332683e974d41e17512554/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct  8 05:46:15 np0005475493 podman[92541]: 2025-10-08 09:46:15.563441881 +0000 UTC m=+0.036894255 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  8 05:46:15 np0005475493 podman[92541]: 2025-10-08 09:46:15.690943324 +0000 UTC m=+0.164395678 container init 351f425ba01bd7f6ba22abaa69d5b66c000b02436de67e4c0db528c1a09fedb0 (image=quay.io/ceph/ceph:v19, name=laughing_elbakyan, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct  8 05:46:15 np0005475493 podman[92541]: 2025-10-08 09:46:15.697655847 +0000 UTC m=+0.171108201 container start 351f425ba01bd7f6ba22abaa69d5b66c000b02436de67e4c0db528c1a09fedb0 (image=quay.io/ceph/ceph:v19, name=laughing_elbakyan, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  8 05:46:15 np0005475493 podman[92541]: 2025-10-08 09:46:15.70791926 +0000 UTC m=+0.181371644 container attach 351f425ba01bd7f6ba22abaa69d5b66c000b02436de67e4c0db528c1a09fedb0 (image=quay.io/ceph/ceph:v19, name=laughing_elbakyan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Oct  8 05:46:16 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.14514 -' entity='client.admin' cmd=[{"prefix": "nfs cluster create", "cluster_id": "cephfs", "ingress": true, "virtual_ip": "192.168.122.2/24", "ingress_mode": "haproxy-protocol", "placement": "compute-0 compute-1 compute-2 ", "target": ["mon-mgr", ""]}]: dispatch
Oct  8 05:46:16 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool create", "pool": ".nfs", "yes_i_really_mean_it": true} v 0)
Oct  8 05:46:16 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool create", "pool": ".nfs", "yes_i_really_mean_it": true}]: dispatch
Oct  8 05:46:16 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 5.1e scrub starts
Oct  8 05:46:16 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 5.1e scrub ok
Oct  8 05:46:16 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  8 05:46:16 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:46:16 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  8 05:46:16 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:46:16 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Oct  8 05:46:16 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct  8 05:46:16 np0005475493 ceph-mon[73572]: Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Oct  8 05:46:16 np0005475493 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:46:16 np0005475493 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:46:16 np0005475493 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:46:16 np0005475493 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:46:16 np0005475493 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool create", "pool": ".nfs", "yes_i_really_mean_it": true}]: dispatch
Oct  8 05:46:16 np0005475493 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:46:16 np0005475493 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:46:16 np0005475493 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct  8 05:46:16 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e48 do_prune osdmap full prune enabled
Oct  8 05:46:16 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool create", "pool": ".nfs", "yes_i_really_mean_it": true}]': finished
Oct  8 05:46:16 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e49 e49: 3 total, 3 up, 3 in
Oct  8 05:46:16 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e49: 3 total, 3 up, 3 in
Oct  8 05:46:16 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": ".nfs", "app": "nfs"} v 0)
Oct  8 05:46:16 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool application enable", "pool": ".nfs", "app": "nfs"}]: dispatch
Oct  8 05:46:16 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct  8 05:46:16 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:46:16 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct  8 05:46:16 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:46:16 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0)
Oct  8 05:46:16 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Oct  8 05:46:16 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct  8 05:46:16 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:46:16 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct  8 05:46:16 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:46:16 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0)
Oct  8 05:46:16 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct  8 05:46:16 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  8 05:46:16 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  8 05:46:16 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct  8 05:46:16 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  8 05:46:16 np0005475493 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Oct  8 05:46:16 np0005475493 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Oct  8 05:46:16 np0005475493 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.conf
Oct  8 05:46:16 np0005475493 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.conf
Oct  8 05:46:16 np0005475493 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.conf
Oct  8 05:46:16 np0005475493 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.conf
Oct  8 05:46:17 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 3.19 deep-scrub starts
Oct  8 05:46:17 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 3.19 deep-scrub ok
Oct  8 05:46:17 np0005475493 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.conf
Oct  8 05:46:17 np0005475493 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.conf
Oct  8 05:46:17 np0005475493 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.conf
Oct  8 05:46:17 np0005475493 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.conf
Oct  8 05:46:17 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v7: 198 pgs: 1 unknown, 1 active+clean+scrubbing, 196 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Oct  8 05:46:17 np0005475493 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.conf
Oct  8 05:46:17 np0005475493 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.conf
Oct  8 05:46:17 np0005475493 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool create", "pool": ".nfs", "yes_i_really_mean_it": true}]': finished
Oct  8 05:46:17 np0005475493 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool application enable", "pool": ".nfs", "app": "nfs"}]: dispatch
Oct  8 05:46:17 np0005475493 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:46:17 np0005475493 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:46:17 np0005475493 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Oct  8 05:46:17 np0005475493 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:46:17 np0005475493 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:46:17 np0005475493 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct  8 05:46:17 np0005475493 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  8 05:46:17 np0005475493 ceph-mon[73572]: Updating compute-0:/etc/ceph/ceph.conf
Oct  8 05:46:17 np0005475493 ceph-mon[73572]: Updating compute-1:/etc/ceph/ceph.conf
Oct  8 05:46:17 np0005475493 ceph-mon[73572]: Updating compute-2:/etc/ceph/ceph.conf
Oct  8 05:46:17 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e49 do_prune osdmap full prune enabled
Oct  8 05:46:17 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool application enable", "pool": ".nfs", "app": "nfs"}]': finished
Oct  8 05:46:17 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e50 e50: 3 total, 3 up, 3 in
Oct  8 05:46:17 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e50: 3 total, 3 up, 3 in
Oct  8 05:46:17 np0005475493 ceph-mgr[73869]: [nfs INFO nfs.cluster] Created empty object:conf-nfs.cephfs
Oct  8 05:46:17 np0005475493 ceph-mgr[73869]: [cephadm INFO root] Saving service nfs.cephfs spec with placement compute-0;compute-1;compute-2
Oct  8 05:46:17 np0005475493 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Saving service nfs.cephfs spec with placement compute-0;compute-1;compute-2
Oct  8 05:46:17 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct  8 05:46:17 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:46:17 np0005475493 ceph-mgr[73869]: [cephadm INFO root] Saving service ingress.nfs.cephfs spec with placement compute-0;compute-1;compute-2
Oct  8 05:46:17 np0005475493 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Saving service ingress.nfs.cephfs spec with placement compute-0;compute-1;compute-2
Oct  8 05:46:17 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Oct  8 05:46:17 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:46:17 np0005475493 systemd[1]: libpod-351f425ba01bd7f6ba22abaa69d5b66c000b02436de67e4c0db528c1a09fedb0.scope: Deactivated successfully.
Oct  8 05:46:17 np0005475493 podman[92541]: 2025-10-08 09:46:17.55359755 +0000 UTC m=+2.027049914 container died 351f425ba01bd7f6ba22abaa69d5b66c000b02436de67e4c0db528c1a09fedb0 (image=quay.io/ceph/ceph:v19, name=laughing_elbakyan, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  8 05:46:17 np0005475493 systemd[1]: var-lib-containers-storage-overlay-702428090e6532e49ec22bf0dde632ad0302dd9e75332683e974d41e17512554-merged.mount: Deactivated successfully.
Oct  8 05:46:17 np0005475493 podman[92541]: 2025-10-08 09:46:17.610639047 +0000 UTC m=+2.084091391 container remove 351f425ba01bd7f6ba22abaa69d5b66c000b02436de67e4c0db528c1a09fedb0 (image=quay.io/ceph/ceph:v19, name=laughing_elbakyan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  8 05:46:17 np0005475493 systemd[1]: libpod-conmon-351f425ba01bd7f6ba22abaa69d5b66c000b02436de67e4c0db528c1a09fedb0.scope: Deactivated successfully.
Oct  8 05:46:17 np0005475493 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Oct  8 05:46:17 np0005475493 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Oct  8 05:46:17 np0005475493 ceph-mon[73572]: log_channel(cluster) log [WRN] : Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct  8 05:46:17 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : mgrmap e25: compute-0.ixicfj(active, since 4s), standbys: compute-2.mtagwx, compute-1.swlvov
Oct  8 05:46:17 np0005475493 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Oct  8 05:46:17 np0005475493 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Oct  8 05:46:17 np0005475493 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Oct  8 05:46:17 np0005475493 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Oct  8 05:46:18 np0005475493 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.client.admin.keyring
Oct  8 05:46:18 np0005475493 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.client.admin.keyring
Oct  8 05:46:18 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 4.1e scrub starts
Oct  8 05:46:18 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 4.1e scrub ok
Oct  8 05:46:18 np0005475493 python3[93470]: ansible-ansible.legacy.stat Invoked with path=/etc/ceph/ceph.client.openstack.keyring follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  8 05:46:18 np0005475493 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.client.admin.keyring
Oct  8 05:46:18 np0005475493 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.client.admin.keyring
Oct  8 05:46:18 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e50 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  8 05:46:18 np0005475493 ceph-mon[73572]: Updating compute-0:/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.conf
Oct  8 05:46:18 np0005475493 ceph-mon[73572]: Updating compute-1:/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.conf
Oct  8 05:46:18 np0005475493 ceph-mon[73572]: Updating compute-2:/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.conf
Oct  8 05:46:18 np0005475493 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool application enable", "pool": ".nfs", "app": "nfs"}]': finished
Oct  8 05:46:18 np0005475493 ceph-mon[73572]: Saving service nfs.cephfs spec with placement compute-0;compute-1;compute-2
Oct  8 05:46:18 np0005475493 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:46:18 np0005475493 ceph-mon[73572]: Saving service ingress.nfs.cephfs spec with placement compute-0;compute-1;compute-2
Oct  8 05:46:18 np0005475493 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:46:18 np0005475493 ceph-mon[73572]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Oct  8 05:46:18 np0005475493 ceph-mon[73572]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct  8 05:46:18 np0005475493 ceph-mon[73572]: Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Oct  8 05:46:18 np0005475493 ceph-mon[73572]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Oct  8 05:46:18 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e50 do_prune osdmap full prune enabled
Oct  8 05:46:18 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e51 e51: 3 total, 3 up, 3 in
Oct  8 05:46:18 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e51: 3 total, 3 up, 3 in
Oct  8 05:46:18 np0005475493 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.client.admin.keyring
Oct  8 05:46:18 np0005475493 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.client.admin.keyring
Oct  8 05:46:18 np0005475493 python3[93668]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759916778.0132196-33877-171923265403929/source dest=/etc/ceph/ceph.client.openstack.keyring mode=0644 force=True owner=167 group=167 follow=False _original_basename=ceph_key.j2 checksum=fbda66f5b6d5a9cd8683861e87e5a427d546a56c backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:46:18 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  8 05:46:18 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:46:18 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  8 05:46:18 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:46:18 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct  8 05:46:18 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:46:18 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct  8 05:46:18 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:46:19 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 2.1 scrub starts
Oct  8 05:46:19 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 2.1 scrub ok
Oct  8 05:46:19 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v10: 198 pgs: 1 unknown, 1 active+clean+scrubbing, 196 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Oct  8 05:46:19 np0005475493 python3[93795]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 787292cc-8154-50c4-9e00-e9be3e817149 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth import -i /etc/ceph/ceph.client.openstack.keyring _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  8 05:46:19 np0005475493 podman[93796]: 2025-10-08 09:46:19.233375788 +0000 UTC m=+0.035820772 container create a1253b7c4649742bb57e64eb0bf6f3f971978cc00e50c9a5971a53257a867853 (image=quay.io/ceph/ceph:v19, name=cool_mendeleev, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  8 05:46:19 np0005475493 systemd[1]: Started libpod-conmon-a1253b7c4649742bb57e64eb0bf6f3f971978cc00e50c9a5971a53257a867853.scope.
Oct  8 05:46:19 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:46:19 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26ad636ab329994b1c4c7907ad9cdc3a74f2596845ac49c969a93033d9e37914/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 05:46:19 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26ad636ab329994b1c4c7907ad9cdc3a74f2596845ac49c969a93033d9e37914/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 05:46:19 np0005475493 podman[93796]: 2025-10-08 09:46:19.297265863 +0000 UTC m=+0.099710867 container init a1253b7c4649742bb57e64eb0bf6f3f971978cc00e50c9a5971a53257a867853 (image=quay.io/ceph/ceph:v19, name=cool_mendeleev, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct  8 05:46:19 np0005475493 podman[93796]: 2025-10-08 09:46:19.303089801 +0000 UTC m=+0.105534795 container start a1253b7c4649742bb57e64eb0bf6f3f971978cc00e50c9a5971a53257a867853 (image=quay.io/ceph/ceph:v19, name=cool_mendeleev, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct  8 05:46:19 np0005475493 podman[93796]: 2025-10-08 09:46:19.305978428 +0000 UTC m=+0.108423432 container attach a1253b7c4649742bb57e64eb0bf6f3f971978cc00e50c9a5971a53257a867853 (image=quay.io/ceph/ceph:v19, name=cool_mendeleev, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct  8 05:46:19 np0005475493 podman[93796]: 2025-10-08 09:46:19.217822595 +0000 UTC m=+0.020267599 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  8 05:46:19 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct  8 05:46:19 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:46:19 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct  8 05:46:19 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:46:19 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct  8 05:46:19 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:46:19 np0005475493 ceph-mgr[73869]: [progress INFO root] update: starting ev fa47b4b8-0b6b-448e-9fbe-e0e5cc5c6311 (Updating node-exporter deployment (+1 -> 3))
Oct  8 05:46:19 np0005475493 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Deploying daemon node-exporter.compute-2 on compute-2
Oct  8 05:46:19 np0005475493 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Deploying daemon node-exporter.compute-2 on compute-2
Oct  8 05:46:19 np0005475493 ceph-mon[73572]: Updating compute-0:/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.client.admin.keyring
Oct  8 05:46:19 np0005475493 ceph-mon[73572]: Updating compute-1:/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.client.admin.keyring
Oct  8 05:46:19 np0005475493 ceph-mon[73572]: Updating compute-2:/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.client.admin.keyring
Oct  8 05:46:19 np0005475493 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:46:19 np0005475493 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:46:19 np0005475493 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:46:19 np0005475493 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:46:19 np0005475493 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:46:19 np0005475493 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:46:19 np0005475493 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:46:19 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth import"} v 0)
Oct  8 05:46:19 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2482379184' entity='client.admin' cmd=[{"prefix": "auth import"}]: dispatch
Oct  8 05:46:19 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2482379184' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Oct  8 05:46:19 np0005475493 systemd[1]: libpod-a1253b7c4649742bb57e64eb0bf6f3f971978cc00e50c9a5971a53257a867853.scope: Deactivated successfully.
Oct  8 05:46:19 np0005475493 podman[93796]: 2025-10-08 09:46:19.737986733 +0000 UTC m=+0.540431717 container died a1253b7c4649742bb57e64eb0bf6f3f971978cc00e50c9a5971a53257a867853 (image=quay.io/ceph/ceph:v19, name=cool_mendeleev, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  8 05:46:19 np0005475493 systemd[1]: var-lib-containers-storage-overlay-26ad636ab329994b1c4c7907ad9cdc3a74f2596845ac49c969a93033d9e37914-merged.mount: Deactivated successfully.
Oct  8 05:46:19 np0005475493 podman[93796]: 2025-10-08 09:46:19.769897465 +0000 UTC m=+0.572342449 container remove a1253b7c4649742bb57e64eb0bf6f3f971978cc00e50c9a5971a53257a867853 (image=quay.io/ceph/ceph:v19, name=cool_mendeleev, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  8 05:46:19 np0005475493 systemd[1]: libpod-conmon-a1253b7c4649742bb57e64eb0bf6f3f971978cc00e50c9a5971a53257a867853.scope: Deactivated successfully.
Oct  8 05:46:19 np0005475493 ceph-mon[73572]: log_channel(cluster) log [INF] : Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Oct  8 05:46:20 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : mgrmap e26: compute-0.ixicfj(active, since 6s), standbys: compute-2.mtagwx, compute-1.swlvov
Oct  8 05:46:20 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 2.e scrub starts
Oct  8 05:46:20 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 2.e scrub ok
Oct  8 05:46:20 np0005475493 ceph-mon[73572]: Deploying daemon node-exporter.compute-2 on compute-2
Oct  8 05:46:20 np0005475493 ceph-mon[73572]: from='client.? 192.168.122.100:0/2482379184' entity='client.admin' cmd=[{"prefix": "auth import"}]: dispatch
Oct  8 05:46:20 np0005475493 ceph-mon[73572]: from='client.? 192.168.122.100:0/2482379184' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Oct  8 05:46:20 np0005475493 ceph-mon[73572]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Oct  8 05:46:20 np0005475493 python3[93873]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 787292cc-8154-50c4-9e00-e9be3e817149 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .monmap.num_mons _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  8 05:46:20 np0005475493 podman[93875]: 2025-10-08 09:46:20.653204731 +0000 UTC m=+0.035637826 container create 61a06bc8a1dbf0a3687f0a052e491dfba688e1cd9573e0b646abcc94f6b0ee22 (image=quay.io/ceph/ceph:v19, name=exciting_villani, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Oct  8 05:46:20 np0005475493 systemd[1]: Started libpod-conmon-61a06bc8a1dbf0a3687f0a052e491dfba688e1cd9573e0b646abcc94f6b0ee22.scope.
Oct  8 05:46:20 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:46:20 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ab3bf26f101868acfe57a61080a4308d5b933b4ebebc6f84ed90d7e50597080/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 05:46:20 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ab3bf26f101868acfe57a61080a4308d5b933b4ebebc6f84ed90d7e50597080/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 05:46:20 np0005475493 podman[93875]: 2025-10-08 09:46:20.73099296 +0000 UTC m=+0.113426065 container init 61a06bc8a1dbf0a3687f0a052e491dfba688e1cd9573e0b646abcc94f6b0ee22 (image=quay.io/ceph/ceph:v19, name=exciting_villani, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Oct  8 05:46:20 np0005475493 podman[93875]: 2025-10-08 09:46:20.639162274 +0000 UTC m=+0.021595389 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  8 05:46:20 np0005475493 podman[93875]: 2025-10-08 09:46:20.735642922 +0000 UTC m=+0.118076017 container start 61a06bc8a1dbf0a3687f0a052e491dfba688e1cd9573e0b646abcc94f6b0ee22 (image=quay.io/ceph/ceph:v19, name=exciting_villani, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct  8 05:46:20 np0005475493 podman[93875]: 2025-10-08 09:46:20.739014424 +0000 UTC m=+0.121447539 container attach 61a06bc8a1dbf0a3687f0a052e491dfba688e1cd9573e0b646abcc94f6b0ee22 (image=quay.io/ceph/ceph:v19, name=exciting_villani, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  8 05:46:21 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Oct  8 05:46:21 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3095387835' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Oct  8 05:46:21 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 2.19 deep-scrub starts
Oct  8 05:46:21 np0005475493 exciting_villani[93891]: 
Oct  8 05:46:21 np0005475493 exciting_villani[93891]: {"fsid":"787292cc-8154-50c4-9e00-e9be3e817149","health":{"status":"HEALTH_ERR","checks":{"MDS_ALL_DOWN":{"severity":"HEALTH_ERR","summary":{"message":"1 filesystem is offline","count":1},"muted":false},"MDS_UP_LESS_THAN_MAX":{"severity":"HEALTH_WARN","summary":{"message":"1 filesystem is online with fewer MDS than max_mds","count":1},"muted":false}},"mutes":[]},"election_epoch":14,"quorum":[0,1,2],"quorum_names":["compute-0","compute-2","compute-1"],"quorum_age":69,"monmap":{"epoch":3,"min_mon_release_name":"squid","num_mons":3},"osdmap":{"epoch":51,"num_osds":3,"num_up_osds":3,"osd_up_since":1759916737,"num_in_osds":3,"osd_in_since":1759916717,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":196},{"state_name":"active+clean+scrubbing","count":1},{"state_name":"unknown","count":1}],"num_pgs":198,"num_pools":12,"num_objects":194,"data_bytes":464595,"bytes_used":88862720,"bytes_avail":64323063808,"bytes_total":64411926528,"unknown_pgs_ratio":0.0050505050458014011},"fsmap":{"epoch":2,"btime":"2025-10-08T09:46:14:191872+0000","id":1,"up":0,"in":0,"max":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":2,"modules":["cephadm","dashboard","iostat","nfs","restful"],"services":{"dashboard":"http://192.168.122.100:8443/"}},"servicemap":{"epoch":5,"modified":"2025-10-08T09:45:54.969307+0000","services":{"mgr":{"daemons":{"summary":"","compute-0.ixicfj":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"compute-1.swlvov":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"compute-2.mtagwx":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 
0)/0","metadata":{},"task_status":{}}}},"mon":{"daemons":{"summary":"","compute-0":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"compute-1":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"compute-2":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}},"osd":{"daemons":{"summary":"","0":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"1":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"2":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}},"rgw":{"daemons":{"summary":"","14382":{"start_epoch":5,"start_stamp":"2025-10-08T09:45:54.959975+0000","gid":14382,"addr":"192.168.122.100:0/4157537618","metadata":{"arch":"x86_64","ceph_release":"squid","ceph_version":"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)","ceph_version_short":"19.2.3","container_hostname":"compute-0","container_image":"quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec","cpu":"AMD EPYC-Rome Processor","distro":"centos","distro_description":"CentOS Stream 9","distro_version":"9","frontend_config#0":"beast endpoint=192.168.122.100:8082","frontend_type#0":"beast","hostname":"compute-0","id":"rgw.compute-0.wdkdxi","kernel_description":"#1 SMP PREEMPT_DYNAMIC Fri Sep 26 01:13:23 UTC 
2025","kernel_version":"5.14.0-620.el9.x86_64","mem_swap_kb":"1048572","mem_total_kb":"7864104","num_handles":"1","os":"Linux","pid":"2","realm_id":"","realm_name":"","zone_id":"246b4a69-3c1d-47ce-b182-d12a3d96d3e3","zone_name":"default","zonegroup_id":"3218c688-50d3-4b3d-9517-1c08371b4e2e","zonegroup_name":"default"},"task_status":{}},"24146":{"start_epoch":5,"start_stamp":"2025-10-08T09:45:54.963319+0000","gid":24146,"addr":"192.168.122.101:0/1900470648","metadata":{"arch":"x86_64","ceph_release":"squid","ceph_version":"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)","ceph_version_short":"19.2.3","container_hostname":"compute-1","container_image":"quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec","cpu":"AMD EPYC-Rome Processor","distro":"centos","distro_description":"CentOS Stream 9","distro_version":"9","frontend_config#0":"beast endpoint=192.168.122.101:8082","frontend_type#0":"beast","hostname":"compute-1","id":"rgw.compute-1.aaugis","kernel_description":"#1 SMP PREEMPT_DYNAMIC Fri Sep 26 01:13:23 UTC 2025","kernel_version":"5.14.0-620.el9.x86_64","mem_swap_kb":"1048572","mem_total_kb":"7864104","num_handles":"1","os":"Linux","pid":"2","realm_id":"","realm_name":"","zone_id":"246b4a69-3c1d-47ce-b182-d12a3d96d3e3","zone_name":"default","zonegroup_id":"3218c688-50d3-4b3d-9517-1c08371b4e2e","zonegroup_name":"default"},"task_status":{}},"24148":{"start_epoch":5,"start_stamp":"2025-10-08T09:45:54.967024+0000","gid":24148,"addr":"192.168.122.102:0/4200026288","metadata":{"arch":"x86_64","ceph_release":"squid","ceph_version":"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)","ceph_version_short":"19.2.3","container_hostname":"compute-2","container_image":"quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec","cpu":"AMD EPYC-Rome Processor","distro":"centos","distro_description":"CentOS Stream 
9","distro_version":"9","frontend_config#0":"beast endpoint=192.168.122.102:8082","frontend_type#0":"beast","hostname":"compute-2","id":"rgw.compute-2.pgshil","kernel_description":"#1 SMP PREEMPT_DYNAMIC Fri Sep 26 01:13:23 UTC 2025","kernel_version":"5.14.0-620.el9.x86_64","mem_swap_kb":"1048572","mem_total_kb":"7864104","num_handles":"1","os":"Linux","pid":"2","realm_id":"","realm_name":"","zone_id":"246b4a69-3c1d-47ce-b182-d12a3d96d3e3","zone_name":"default","zonegroup_id":"3218c688-50d3-4b3d-9517-1c08371b4e2e","zonegroup_name":"default"},"task_status":{}}}}}},"progress_events":{}}
Oct  8 05:46:21 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 2.19 deep-scrub ok
Oct  8 05:46:21 np0005475493 systemd[1]: libpod-61a06bc8a1dbf0a3687f0a052e491dfba688e1cd9573e0b646abcc94f6b0ee22.scope: Deactivated successfully.
Oct  8 05:46:21 np0005475493 podman[93875]: 2025-10-08 09:46:21.167309575 +0000 UTC m=+0.549742670 container died 61a06bc8a1dbf0a3687f0a052e491dfba688e1cd9573e0b646abcc94f6b0ee22 (image=quay.io/ceph/ceph:v19, name=exciting_villani, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True)
Oct  8 05:46:21 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v11: 198 pgs: 198 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 0 B/s wr, 12 op/s
Oct  8 05:46:21 np0005475493 systemd[1]: var-lib-containers-storage-overlay-2ab3bf26f101868acfe57a61080a4308d5b933b4ebebc6f84ed90d7e50597080-merged.mount: Deactivated successfully.
Oct  8 05:46:21 np0005475493 podman[93875]: 2025-10-08 09:46:21.202437935 +0000 UTC m=+0.584871030 container remove 61a06bc8a1dbf0a3687f0a052e491dfba688e1cd9573e0b646abcc94f6b0ee22 (image=quay.io/ceph/ceph:v19, name=exciting_villani, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Oct  8 05:46:21 np0005475493 systemd[1]: libpod-conmon-61a06bc8a1dbf0a3687f0a052e491dfba688e1cd9573e0b646abcc94f6b0ee22.scope: Deactivated successfully.
Oct  8 05:46:21 np0005475493 python3[93954]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 787292cc-8154-50c4-9e00-e9be3e817149 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mon dump --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  8 05:46:21 np0005475493 podman[93955]: 2025-10-08 09:46:21.634071888 +0000 UTC m=+0.051738917 container create 6dccf6167f7285daa0d05406d52d8c7bb8b0d24fda9dd792dbeffd80ac58e125 (image=quay.io/ceph/ceph:v19, name=nice_mclaren, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  8 05:46:21 np0005475493 systemd[1]: Started libpod-conmon-6dccf6167f7285daa0d05406d52d8c7bb8b0d24fda9dd792dbeffd80ac58e125.scope.
Oct  8 05:46:21 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:46:21 np0005475493 podman[93955]: 2025-10-08 09:46:21.606833158 +0000 UTC m=+0.024500287 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  8 05:46:21 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb1840126ae369fb14f36086211c4a7bf670db675faf25e86afe68c987c6c5c6/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 05:46:21 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb1840126ae369fb14f36086211c4a7bf670db675faf25e86afe68c987c6c5c6/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 05:46:21 np0005475493 podman[93955]: 2025-10-08 09:46:21.710645899 +0000 UTC m=+0.128312928 container init 6dccf6167f7285daa0d05406d52d8c7bb8b0d24fda9dd792dbeffd80ac58e125 (image=quay.io/ceph/ceph:v19, name=nice_mclaren, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Oct  8 05:46:21 np0005475493 podman[93955]: 2025-10-08 09:46:21.717486548 +0000 UTC m=+0.135153577 container start 6dccf6167f7285daa0d05406d52d8c7bb8b0d24fda9dd792dbeffd80ac58e125 (image=quay.io/ceph/ceph:v19, name=nice_mclaren, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True)
Oct  8 05:46:21 np0005475493 podman[93955]: 2025-10-08 09:46:21.720448588 +0000 UTC m=+0.138115637 container attach 6dccf6167f7285daa0d05406d52d8c7bb8b0d24fda9dd792dbeffd80ac58e125 (image=quay.io/ceph/ceph:v19, name=nice_mclaren, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Oct  8 05:46:22 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Oct  8 05:46:22 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/393958427' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  8 05:46:22 np0005475493 nice_mclaren[93970]: 
Oct  8 05:46:22 np0005475493 nice_mclaren[93970]: {"epoch":3,"fsid":"787292cc-8154-50c4-9e00-e9be3e817149","modified":"2025-10-08T09:45:06.514939Z","created":"2025-10-08T09:42:59.307631Z","min_mon_release":19,"min_mon_release_name":"squid","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"],"optional":[]},"mons":[{"rank":0,"name":"compute-0","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.122.100:3300","nonce":0},{"type":"v1","addr":"192.168.122.100:6789","nonce":0}]},"addr":"192.168.122.100:6789/0","public_addr":"192.168.122.100:6789/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":1,"name":"compute-2","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.122.102:3300","nonce":0},{"type":"v1","addr":"192.168.122.102:6789","nonce":0}]},"addr":"192.168.122.102:6789/0","public_addr":"192.168.122.102:6789/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":2,"name":"compute-1","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.122.101:3300","nonce":0},{"type":"v1","addr":"192.168.122.101:6789","nonce":0}]},"addr":"192.168.122.101:6789/0","public_addr":"192.168.122.101:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0,1,2]}
Oct  8 05:46:22 np0005475493 nice_mclaren[93970]: dumped monmap epoch 3
Oct  8 05:46:22 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct  8 05:46:22 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 7.1b scrub starts
Oct  8 05:46:22 np0005475493 systemd[1]: libpod-6dccf6167f7285daa0d05406d52d8c7bb8b0d24fda9dd792dbeffd80ac58e125.scope: Deactivated successfully.
Oct  8 05:46:22 np0005475493 podman[93955]: 2025-10-08 09:46:22.166508441 +0000 UTC m=+0.584175510 container died 6dccf6167f7285daa0d05406d52d8c7bb8b0d24fda9dd792dbeffd80ac58e125 (image=quay.io/ceph/ceph:v19, name=nice_mclaren, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  8 05:46:22 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 7.1b scrub ok
Oct  8 05:46:22 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:46:22 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct  8 05:46:22 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:46:22 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.node-exporter}] v 0)
Oct  8 05:46:22 np0005475493 systemd[1]: var-lib-containers-storage-overlay-cb1840126ae369fb14f36086211c4a7bf670db675faf25e86afe68c987c6c5c6-merged.mount: Deactivated successfully.
Oct  8 05:46:22 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:46:22 np0005475493 ceph-mgr[73869]: [progress INFO root] complete: finished ev fa47b4b8-0b6b-448e-9fbe-e0e5cc5c6311 (Updating node-exporter deployment (+1 -> 3))
Oct  8 05:46:22 np0005475493 ceph-mgr[73869]: [progress INFO root] Completed event fa47b4b8-0b6b-448e-9fbe-e0e5cc5c6311 (Updating node-exporter deployment (+1 -> 3)) in 3 seconds
Oct  8 05:46:22 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.node-exporter}] v 0)
Oct  8 05:46:22 np0005475493 podman[93955]: 2025-10-08 09:46:22.210861691 +0000 UTC m=+0.628528720 container remove 6dccf6167f7285daa0d05406d52d8c7bb8b0d24fda9dd792dbeffd80ac58e125 (image=quay.io/ceph/ceph:v19, name=nice_mclaren, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  8 05:46:22 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:46:22 np0005475493 systemd[1]: libpod-conmon-6dccf6167f7285daa0d05406d52d8c7bb8b0d24fda9dd792dbeffd80ac58e125.scope: Deactivated successfully.
Oct  8 05:46:22 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct  8 05:46:22 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  8 05:46:22 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct  8 05:46:22 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  8 05:46:22 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  8 05:46:22 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  8 05:46:22 np0005475493 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:46:22 np0005475493 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:46:22 np0005475493 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:46:22 np0005475493 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:46:22 np0005475493 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  8 05:46:22 np0005475493 podman[94102]: 2025-10-08 09:46:22.704189362 +0000 UTC m=+0.042517615 container create c53651af6f462b9915b22b71f83d84e236259766e57295b3be8107c17f4a4b8a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_tu, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default)
Oct  8 05:46:22 np0005475493 systemd[1]: Started libpod-conmon-c53651af6f462b9915b22b71f83d84e236259766e57295b3be8107c17f4a4b8a.scope.
Oct  8 05:46:22 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:46:22 np0005475493 podman[94102]: 2025-10-08 09:46:22.771960556 +0000 UTC m=+0.110288839 container init c53651af6f462b9915b22b71f83d84e236259766e57295b3be8107c17f4a4b8a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_tu, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS)
Oct  8 05:46:22 np0005475493 podman[94102]: 2025-10-08 09:46:22.777241997 +0000 UTC m=+0.115570290 container start c53651af6f462b9915b22b71f83d84e236259766e57295b3be8107c17f4a4b8a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_tu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  8 05:46:22 np0005475493 wonderful_tu[94137]: 167 167
Oct  8 05:46:22 np0005475493 podman[94102]: 2025-10-08 09:46:22.780453585 +0000 UTC m=+0.118781858 container attach c53651af6f462b9915b22b71f83d84e236259766e57295b3be8107c17f4a4b8a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_tu, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Oct  8 05:46:22 np0005475493 systemd[1]: libpod-c53651af6f462b9915b22b71f83d84e236259766e57295b3be8107c17f4a4b8a.scope: Deactivated successfully.
Oct  8 05:46:22 np0005475493 podman[94102]: 2025-10-08 09:46:22.781120035 +0000 UTC m=+0.119448308 container died c53651af6f462b9915b22b71f83d84e236259766e57295b3be8107c17f4a4b8a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_tu, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct  8 05:46:22 np0005475493 podman[94102]: 2025-10-08 09:46:22.688205466 +0000 UTC m=+0.026533749 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 05:46:22 np0005475493 systemd[1]: var-lib-containers-storage-overlay-a57387f6853435a975617d9204bc9067d3e2ccb2e60056c2b325ba9007614056-merged.mount: Deactivated successfully.
Oct  8 05:46:22 np0005475493 python3[94132]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 787292cc-8154-50c4-9e00-e9be3e817149 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth get client.openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  8 05:46:22 np0005475493 podman[94102]: 2025-10-08 09:46:22.815531833 +0000 UTC m=+0.153860086 container remove c53651af6f462b9915b22b71f83d84e236259766e57295b3be8107c17f4a4b8a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_tu, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  8 05:46:22 np0005475493 systemd[1]: libpod-conmon-c53651af6f462b9915b22b71f83d84e236259766e57295b3be8107c17f4a4b8a.scope: Deactivated successfully.
Oct  8 05:46:22 np0005475493 podman[94152]: 2025-10-08 09:46:22.862318758 +0000 UTC m=+0.034797741 container create cf05b6ed32212d8b03be6b93d58967f1fc48f1661abc911dddeed976b8507fc3 (image=quay.io/ceph/ceph:v19, name=cranky_elgamal, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid)
Oct  8 05:46:22 np0005475493 systemd[1]: Started libpod-conmon-cf05b6ed32212d8b03be6b93d58967f1fc48f1661abc911dddeed976b8507fc3.scope.
Oct  8 05:46:22 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:46:22 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25b7fce022a9552470c7f7f8c0e600debe054529c03bebf28dcd5cb7b83a2dab/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 05:46:22 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25b7fce022a9552470c7f7f8c0e600debe054529c03bebf28dcd5cb7b83a2dab/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 05:46:22 np0005475493 podman[94152]: 2025-10-08 09:46:22.912375492 +0000 UTC m=+0.084854485 container init cf05b6ed32212d8b03be6b93d58967f1fc48f1661abc911dddeed976b8507fc3 (image=quay.io/ceph/ceph:v19, name=cranky_elgamal, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  8 05:46:22 np0005475493 podman[94152]: 2025-10-08 09:46:22.917309672 +0000 UTC m=+0.089788655 container start cf05b6ed32212d8b03be6b93d58967f1fc48f1661abc911dddeed976b8507fc3 (image=quay.io/ceph/ceph:v19, name=cranky_elgamal, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  8 05:46:22 np0005475493 podman[94152]: 2025-10-08 09:46:22.920703825 +0000 UTC m=+0.093182838 container attach cf05b6ed32212d8b03be6b93d58967f1fc48f1661abc911dddeed976b8507fc3 (image=quay.io/ceph/ceph:v19, name=cranky_elgamal, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  8 05:46:22 np0005475493 podman[94152]: 2025-10-08 09:46:22.846652481 +0000 UTC m=+0.019131484 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  8 05:46:22 np0005475493 podman[94177]: 2025-10-08 09:46:22.959052543 +0000 UTC m=+0.043414943 container create ff95f0533b5987d756f53f975412f6e9e61d33571b46730b113fa567759396e5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_nash, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct  8 05:46:23 np0005475493 systemd[1]: Started libpod-conmon-ff95f0533b5987d756f53f975412f6e9e61d33571b46730b113fa567759396e5.scope.
Oct  8 05:46:23 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:46:23 np0005475493 podman[94177]: 2025-10-08 09:46:22.939803137 +0000 UTC m=+0.024165617 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 05:46:23 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6051f8f2e9df3927bd438811ea4d45db94e1db45d6e8848f5256697a69b0e46/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  8 05:46:23 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6051f8f2e9df3927bd438811ea4d45db94e1db45d6e8848f5256697a69b0e46/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 05:46:23 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6051f8f2e9df3927bd438811ea4d45db94e1db45d6e8848f5256697a69b0e46/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 05:46:23 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6051f8f2e9df3927bd438811ea4d45db94e1db45d6e8848f5256697a69b0e46/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  8 05:46:23 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6051f8f2e9df3927bd438811ea4d45db94e1db45d6e8848f5256697a69b0e46/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  8 05:46:23 np0005475493 podman[94177]: 2025-10-08 09:46:23.050526819 +0000 UTC m=+0.134889239 container init ff95f0533b5987d756f53f975412f6e9e61d33571b46730b113fa567759396e5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_nash, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Oct  8 05:46:23 np0005475493 podman[94177]: 2025-10-08 09:46:23.059364518 +0000 UTC m=+0.143726928 container start ff95f0533b5987d756f53f975412f6e9e61d33571b46730b113fa567759396e5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_nash, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  8 05:46:23 np0005475493 podman[94177]: 2025-10-08 09:46:23.06337718 +0000 UTC m=+0.147739620 container attach ff95f0533b5987d756f53f975412f6e9e61d33571b46730b113fa567759396e5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_nash, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Oct  8 05:46:23 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 7.18 scrub starts
Oct  8 05:46:23 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 7.18 scrub ok
Oct  8 05:46:23 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v12: 198 pgs: 198 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 0 B/s wr, 10 op/s
Oct  8 05:46:23 np0005475493 ceph-mgr[73869]: [progress INFO root] Writing back 13 completed events
Oct  8 05:46:23 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct  8 05:46:23 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:46:23 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.openstack"} v 0)
Oct  8 05:46:23 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2282328507' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "client.openstack"}]: dispatch
Oct  8 05:46:23 np0005475493 cranky_elgamal[94170]: [client.openstack]
Oct  8 05:46:23 np0005475493 cranky_elgamal[94170]: #011key = AQADMuZoAAAAABAAatv7Ix+93M4zPKi4UUkwMw==
Oct  8 05:46:23 np0005475493 cranky_elgamal[94170]: #011caps mgr = "allow *"
Oct  8 05:46:23 np0005475493 cranky_elgamal[94170]: #011caps mon = "profile rbd"
Oct  8 05:46:23 np0005475493 cranky_elgamal[94170]: #011caps osd = "profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=images, profile rbd pool=cephfs.cephfs.meta, profile rbd pool=cephfs.cephfs.data"
Oct  8 05:46:23 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e51 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  8 05:46:23 np0005475493 systemd[1]: libpod-cf05b6ed32212d8b03be6b93d58967f1fc48f1661abc911dddeed976b8507fc3.scope: Deactivated successfully.
Oct  8 05:46:23 np0005475493 podman[94152]: 2025-10-08 09:46:23.349619206 +0000 UTC m=+0.522098239 container died cf05b6ed32212d8b03be6b93d58967f1fc48f1661abc911dddeed976b8507fc3 (image=quay.io/ceph/ceph:v19, name=cranky_elgamal, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  8 05:46:23 np0005475493 systemd[1]: var-lib-containers-storage-overlay-25b7fce022a9552470c7f7f8c0e600debe054529c03bebf28dcd5cb7b83a2dab-merged.mount: Deactivated successfully.
Oct  8 05:46:23 np0005475493 clever_nash[94195]: --> passed data devices: 0 physical, 1 LVM
Oct  8 05:46:23 np0005475493 clever_nash[94195]: --> All data devices are unavailable
Oct  8 05:46:23 np0005475493 podman[94152]: 2025-10-08 09:46:23.398513394 +0000 UTC m=+0.570992387 container remove cf05b6ed32212d8b03be6b93d58967f1fc48f1661abc911dddeed976b8507fc3 (image=quay.io/ceph/ceph:v19, name=cranky_elgamal, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct  8 05:46:23 np0005475493 systemd[1]: libpod-conmon-cf05b6ed32212d8b03be6b93d58967f1fc48f1661abc911dddeed976b8507fc3.scope: Deactivated successfully.
Oct  8 05:46:23 np0005475493 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  8 05:46:23 np0005475493 systemd[1]: libpod-ff95f0533b5987d756f53f975412f6e9e61d33571b46730b113fa567759396e5.scope: Deactivated successfully.
Oct  8 05:46:23 np0005475493 conmon[94195]: conmon ff95f0533b5987d756f5 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ff95f0533b5987d756f53f975412f6e9e61d33571b46730b113fa567759396e5.scope/container/memory.events
Oct  8 05:46:23 np0005475493 podman[94177]: 2025-10-08 09:46:23.413369267 +0000 UTC m=+0.497731687 container died ff95f0533b5987d756f53f975412f6e9e61d33571b46730b113fa567759396e5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_nash, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Oct  8 05:46:23 np0005475493 systemd[1]: var-lib-containers-storage-overlay-d6051f8f2e9df3927bd438811ea4d45db94e1db45d6e8848f5256697a69b0e46-merged.mount: Deactivated successfully.
Oct  8 05:46:23 np0005475493 podman[94177]: 2025-10-08 09:46:23.457105949 +0000 UTC m=+0.541468349 container remove ff95f0533b5987d756f53f975412f6e9e61d33571b46730b113fa567759396e5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_nash, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Oct  8 05:46:23 np0005475493 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:46:23 np0005475493 ceph-mon[73572]: from='client.? 192.168.122.100:0/2282328507' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "client.openstack"}]: dispatch
Oct  8 05:46:23 np0005475493 systemd[1]: libpod-conmon-ff95f0533b5987d756f53f975412f6e9e61d33571b46730b113fa567759396e5.scope: Deactivated successfully.
Oct  8 05:46:23 np0005475493 podman[94343]: 2025-10-08 09:46:23.972191783 +0000 UTC m=+0.051906802 container create eed6338f80faf4bc6903ab1042a8aa54db1c4a9f139d097f3f5fea990cb78eac (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_wu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  8 05:46:24 np0005475493 systemd[1]: Started libpod-conmon-eed6338f80faf4bc6903ab1042a8aa54db1c4a9f139d097f3f5fea990cb78eac.scope.
Oct  8 05:46:24 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:46:24 np0005475493 podman[94343]: 2025-10-08 09:46:24.036323776 +0000 UTC m=+0.116038785 container init eed6338f80faf4bc6903ab1042a8aa54db1c4a9f139d097f3f5fea990cb78eac (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_wu, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325)
Oct  8 05:46:24 np0005475493 podman[94343]: 2025-10-08 09:46:24.041633057 +0000 UTC m=+0.121348046 container start eed6338f80faf4bc6903ab1042a8aa54db1c4a9f139d097f3f5fea990cb78eac (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_wu, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Oct  8 05:46:24 np0005475493 podman[94343]: 2025-10-08 09:46:23.947313055 +0000 UTC m=+0.027028104 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 05:46:24 np0005475493 exciting_wu[94359]: 167 167
Oct  8 05:46:24 np0005475493 podman[94343]: 2025-10-08 09:46:24.045227646 +0000 UTC m=+0.124942635 container attach eed6338f80faf4bc6903ab1042a8aa54db1c4a9f139d097f3f5fea990cb78eac (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_wu, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  8 05:46:24 np0005475493 systemd[1]: libpod-eed6338f80faf4bc6903ab1042a8aa54db1c4a9f139d097f3f5fea990cb78eac.scope: Deactivated successfully.
Oct  8 05:46:24 np0005475493 podman[94343]: 2025-10-08 09:46:24.04631954 +0000 UTC m=+0.126034539 container died eed6338f80faf4bc6903ab1042a8aa54db1c4a9f139d097f3f5fea990cb78eac (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_wu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  8 05:46:24 np0005475493 systemd[1]: var-lib-containers-storage-overlay-3e128e43ed808ced2832ce9d36899627b856d179936ab17b5d67c566bf295bbd-merged.mount: Deactivated successfully.
Oct  8 05:46:24 np0005475493 podman[94343]: 2025-10-08 09:46:24.077482969 +0000 UTC m=+0.157197968 container remove eed6338f80faf4bc6903ab1042a8aa54db1c4a9f139d097f3f5fea990cb78eac (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_wu, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct  8 05:46:24 np0005475493 systemd[1]: libpod-conmon-eed6338f80faf4bc6903ab1042a8aa54db1c4a9f139d097f3f5fea990cb78eac.scope: Deactivated successfully.
Oct  8 05:46:24 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 7.1e scrub starts
Oct  8 05:46:24 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 7.1e scrub ok
Oct  8 05:46:24 np0005475493 podman[94383]: 2025-10-08 09:46:24.222392941 +0000 UTC m=+0.043753903 container create 4ddd93bb97a19ee95ed16ce443750fc78a6de957e8d2ae78aafdc92822556af5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_babbage, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct  8 05:46:24 np0005475493 systemd[1]: Started libpod-conmon-4ddd93bb97a19ee95ed16ce443750fc78a6de957e8d2ae78aafdc92822556af5.scope.
Oct  8 05:46:24 np0005475493 podman[94383]: 2025-10-08 09:46:24.201066892 +0000 UTC m=+0.022427884 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 05:46:24 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:46:24 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a2e991ba19d5eda17944928ab745179957dc405c6df32903b23bd1f7ba6e7be/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  8 05:46:24 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a2e991ba19d5eda17944928ab745179957dc405c6df32903b23bd1f7ba6e7be/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 05:46:24 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a2e991ba19d5eda17944928ab745179957dc405c6df32903b23bd1f7ba6e7be/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 05:46:24 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a2e991ba19d5eda17944928ab745179957dc405c6df32903b23bd1f7ba6e7be/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  8 05:46:24 np0005475493 podman[94383]: 2025-10-08 09:46:24.32349683 +0000 UTC m=+0.144857782 container init 4ddd93bb97a19ee95ed16ce443750fc78a6de957e8d2ae78aafdc92822556af5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_babbage, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct  8 05:46:24 np0005475493 podman[94383]: 2025-10-08 09:46:24.329360018 +0000 UTC m=+0.150720960 container start 4ddd93bb97a19ee95ed16ce443750fc78a6de957e8d2ae78aafdc92822556af5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_babbage, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct  8 05:46:24 np0005475493 podman[94383]: 2025-10-08 09:46:24.33235498 +0000 UTC m=+0.153715962 container attach 4ddd93bb97a19ee95ed16ce443750fc78a6de957e8d2ae78aafdc92822556af5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_babbage, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  8 05:46:24 np0005475493 inspiring_babbage[94399]: {
Oct  8 05:46:24 np0005475493 inspiring_babbage[94399]:    "1": [
Oct  8 05:46:24 np0005475493 inspiring_babbage[94399]:        {
Oct  8 05:46:24 np0005475493 inspiring_babbage[94399]:            "devices": [
Oct  8 05:46:24 np0005475493 inspiring_babbage[94399]:                "/dev/loop3"
Oct  8 05:46:24 np0005475493 inspiring_babbage[94399]:            ],
Oct  8 05:46:24 np0005475493 inspiring_babbage[94399]:            "lv_name": "ceph_lv0",
Oct  8 05:46:24 np0005475493 inspiring_babbage[94399]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  8 05:46:24 np0005475493 inspiring_babbage[94399]:            "lv_size": "21470642176",
Oct  8 05:46:24 np0005475493 inspiring_babbage[94399]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=787292cc-8154-50c4-9e00-e9be3e817149,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=85fe3e7b-5e0f-4a19-934c-310215b2e933,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct  8 05:46:24 np0005475493 inspiring_babbage[94399]:            "lv_uuid": "znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj",
Oct  8 05:46:24 np0005475493 inspiring_babbage[94399]:            "name": "ceph_lv0",
Oct  8 05:46:24 np0005475493 inspiring_babbage[94399]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  8 05:46:24 np0005475493 inspiring_babbage[94399]:            "tags": {
Oct  8 05:46:24 np0005475493 inspiring_babbage[94399]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  8 05:46:24 np0005475493 inspiring_babbage[94399]:                "ceph.block_uuid": "znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj",
Oct  8 05:46:24 np0005475493 inspiring_babbage[94399]:                "ceph.cephx_lockbox_secret": "",
Oct  8 05:46:24 np0005475493 inspiring_babbage[94399]:                "ceph.cluster_fsid": "787292cc-8154-50c4-9e00-e9be3e817149",
Oct  8 05:46:24 np0005475493 inspiring_babbage[94399]:                "ceph.cluster_name": "ceph",
Oct  8 05:46:24 np0005475493 inspiring_babbage[94399]:                "ceph.crush_device_class": "",
Oct  8 05:46:24 np0005475493 inspiring_babbage[94399]:                "ceph.encrypted": "0",
Oct  8 05:46:24 np0005475493 inspiring_babbage[94399]:                "ceph.osd_fsid": "85fe3e7b-5e0f-4a19-934c-310215b2e933",
Oct  8 05:46:24 np0005475493 inspiring_babbage[94399]:                "ceph.osd_id": "1",
Oct  8 05:46:24 np0005475493 inspiring_babbage[94399]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  8 05:46:24 np0005475493 inspiring_babbage[94399]:                "ceph.type": "block",
Oct  8 05:46:24 np0005475493 inspiring_babbage[94399]:                "ceph.vdo": "0",
Oct  8 05:46:24 np0005475493 inspiring_babbage[94399]:                "ceph.with_tpm": "0"
Oct  8 05:46:24 np0005475493 inspiring_babbage[94399]:            },
Oct  8 05:46:24 np0005475493 inspiring_babbage[94399]:            "type": "block",
Oct  8 05:46:24 np0005475493 inspiring_babbage[94399]:            "vg_name": "ceph_vg0"
Oct  8 05:46:24 np0005475493 inspiring_babbage[94399]:        }
Oct  8 05:46:24 np0005475493 inspiring_babbage[94399]:    ]
Oct  8 05:46:24 np0005475493 inspiring_babbage[94399]: }
Oct  8 05:46:24 np0005475493 systemd[1]: libpod-4ddd93bb97a19ee95ed16ce443750fc78a6de957e8d2ae78aafdc92822556af5.scope: Deactivated successfully.
Oct  8 05:46:24 np0005475493 podman[94383]: 2025-10-08 09:46:24.652175358 +0000 UTC m=+0.473536300 container died 4ddd93bb97a19ee95ed16ce443750fc78a6de957e8d2ae78aafdc92822556af5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_babbage, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  8 05:46:24 np0005475493 systemd[1]: var-lib-containers-storage-overlay-0a2e991ba19d5eda17944928ab745179957dc405c6df32903b23bd1f7ba6e7be-merged.mount: Deactivated successfully.
Oct  8 05:46:24 np0005475493 podman[94383]: 2025-10-08 09:46:24.698172708 +0000 UTC m=+0.519533650 container remove 4ddd93bb97a19ee95ed16ce443750fc78a6de957e8d2ae78aafdc92822556af5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_babbage, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  8 05:46:24 np0005475493 systemd[1]: libpod-conmon-4ddd93bb97a19ee95ed16ce443750fc78a6de957e8d2ae78aafdc92822556af5.scope: Deactivated successfully.
Oct  8 05:46:25 np0005475493 ansible-async_wrapper.py[94619]: Invoked with j189820904953 30 /home/zuul/.ansible/tmp/ansible-tmp-1759916784.5890815-33949-161336562295642/AnsiballZ_command.py _
Oct  8 05:46:25 np0005475493 ansible-async_wrapper.py[94636]: Starting module and watcher
Oct  8 05:46:25 np0005475493 ansible-async_wrapper.py[94636]: Start watching 94637 (30)
Oct  8 05:46:25 np0005475493 ansible-async_wrapper.py[94637]: Start module (94637)
Oct  8 05:46:25 np0005475493 ansible-async_wrapper.py[94619]: Return async_wrapper task started.
Oct  8 05:46:25 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v13: 198 pgs: 1 active+clean+scrubbing, 197 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 0 B/s wr, 9 op/s
Oct  8 05:46:25 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 7.6 deep-scrub starts
Oct  8 05:46:25 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 7.6 deep-scrub ok
Oct  8 05:46:25 np0005475493 podman[94665]: 2025-10-08 09:46:25.199611467 +0000 UTC m=+0.033817721 container create 491f4e91f5b64f2f77d72508d92fe79e6f56cf69c2095e699ac054e0a9d818e9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_dewdney, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Oct  8 05:46:25 np0005475493 python3[94638]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 787292cc-8154-50c4-9e00-e9be3e817149 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  8 05:46:25 np0005475493 systemd[1]: Started libpod-conmon-491f4e91f5b64f2f77d72508d92fe79e6f56cf69c2095e699ac054e0a9d818e9.scope.
Oct  8 05:46:25 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:46:25 np0005475493 podman[94680]: 2025-10-08 09:46:25.263809582 +0000 UTC m=+0.038313128 container create da35dd713e333e3b7befb35570e3f7c6ae1f7858402921f1c6dfef48b91b4eed (image=quay.io/ceph/ceph:v19, name=happy_engelbart, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  8 05:46:25 np0005475493 podman[94665]: 2025-10-08 09:46:25.276845188 +0000 UTC m=+0.111051462 container init 491f4e91f5b64f2f77d72508d92fe79e6f56cf69c2095e699ac054e0a9d818e9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_dewdney, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Oct  8 05:46:25 np0005475493 podman[94665]: 2025-10-08 09:46:25.185525548 +0000 UTC m=+0.019731812 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 05:46:25 np0005475493 podman[94665]: 2025-10-08 09:46:25.283063648 +0000 UTC m=+0.117269902 container start 491f4e91f5b64f2f77d72508d92fe79e6f56cf69c2095e699ac054e0a9d818e9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_dewdney, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Oct  8 05:46:25 np0005475493 podman[94665]: 2025-10-08 09:46:25.286006098 +0000 UTC m=+0.120212352 container attach 491f4e91f5b64f2f77d72508d92fe79e6f56cf69c2095e699ac054e0a9d818e9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_dewdney, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  8 05:46:25 np0005475493 stupefied_dewdney[94688]: 167 167
Oct  8 05:46:25 np0005475493 podman[94665]: 2025-10-08 09:46:25.287984188 +0000 UTC m=+0.122190442 container died 491f4e91f5b64f2f77d72508d92fe79e6f56cf69c2095e699ac054e0a9d818e9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_dewdney, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  8 05:46:25 np0005475493 systemd[1]: Started libpod-conmon-da35dd713e333e3b7befb35570e3f7c6ae1f7858402921f1c6dfef48b91b4eed.scope.
Oct  8 05:46:25 np0005475493 systemd[1]: libpod-491f4e91f5b64f2f77d72508d92fe79e6f56cf69c2095e699ac054e0a9d818e9.scope: Deactivated successfully.
Oct  8 05:46:25 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:46:25 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b08e02c731950535b5f167d68c9e4809a283983c7a24f02e671f07cce43f97c/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 05:46:25 np0005475493 systemd[1]: var-lib-containers-storage-overlay-4cde19863ecff7ac6a73587e4a65b1cb602807ccd626fae4348e0dafb24c9848-merged.mount: Deactivated successfully.
Oct  8 05:46:25 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b08e02c731950535b5f167d68c9e4809a283983c7a24f02e671f07cce43f97c/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 05:46:25 np0005475493 podman[94665]: 2025-10-08 09:46:25.334188545 +0000 UTC m=+0.168394799 container remove 491f4e91f5b64f2f77d72508d92fe79e6f56cf69c2095e699ac054e0a9d818e9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_dewdney, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct  8 05:46:25 np0005475493 systemd[1]: libpod-conmon-491f4e91f5b64f2f77d72508d92fe79e6f56cf69c2095e699ac054e0a9d818e9.scope: Deactivated successfully.
Oct  8 05:46:25 np0005475493 podman[94680]: 2025-10-08 09:46:25.247455754 +0000 UTC m=+0.021959330 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  8 05:46:25 np0005475493 podman[94680]: 2025-10-08 09:46:25.348022946 +0000 UTC m=+0.122526492 container init da35dd713e333e3b7befb35570e3f7c6ae1f7858402921f1c6dfef48b91b4eed (image=quay.io/ceph/ceph:v19, name=happy_engelbart, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  8 05:46:25 np0005475493 podman[94680]: 2025-10-08 09:46:25.352682318 +0000 UTC m=+0.127185864 container start da35dd713e333e3b7befb35570e3f7c6ae1f7858402921f1c6dfef48b91b4eed (image=quay.io/ceph/ceph:v19, name=happy_engelbart, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Oct  8 05:46:25 np0005475493 podman[94680]: 2025-10-08 09:46:25.355583246 +0000 UTC m=+0.130086792 container attach da35dd713e333e3b7befb35570e3f7c6ae1f7858402921f1c6dfef48b91b4eed (image=quay.io/ceph/ceph:v19, name=happy_engelbart, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  8 05:46:25 np0005475493 podman[94729]: 2025-10-08 09:46:25.50907555 +0000 UTC m=+0.052677965 container create 1a5a78850054abfdcca30b1eaa484415da28672df8031bf9e9ebd40e0ed33703 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_archimedes, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  8 05:46:25 np0005475493 systemd[1]: Started libpod-conmon-1a5a78850054abfdcca30b1eaa484415da28672df8031bf9e9ebd40e0ed33703.scope.
Oct  8 05:46:25 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:46:25 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36feeff8abfca48db6d5287438fa72eb892e47e91815fc11f029fe649a5bb95b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  8 05:46:25 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36feeff8abfca48db6d5287438fa72eb892e47e91815fc11f029fe649a5bb95b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 05:46:25 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36feeff8abfca48db6d5287438fa72eb892e47e91815fc11f029fe649a5bb95b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 05:46:25 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36feeff8abfca48db6d5287438fa72eb892e47e91815fc11f029fe649a5bb95b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  8 05:46:25 np0005475493 podman[94729]: 2025-10-08 09:46:25.48936194 +0000 UTC m=+0.032964395 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 05:46:25 np0005475493 podman[94729]: 2025-10-08 09:46:25.591879501 +0000 UTC m=+0.135481936 container init 1a5a78850054abfdcca30b1eaa484415da28672df8031bf9e9ebd40e0ed33703 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_archimedes, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  8 05:46:25 np0005475493 podman[94729]: 2025-10-08 09:46:25.598843773 +0000 UTC m=+0.142446198 container start 1a5a78850054abfdcca30b1eaa484415da28672df8031bf9e9ebd40e0ed33703 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_archimedes, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  8 05:46:25 np0005475493 podman[94729]: 2025-10-08 09:46:25.602052981 +0000 UTC m=+0.145655406 container attach 1a5a78850054abfdcca30b1eaa484415da28672df8031bf9e9ebd40e0ed33703 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_archimedes, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  8 05:46:25 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.14550 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct  8 05:46:25 np0005475493 happy_engelbart[94703]: 
Oct  8 05:46:25 np0005475493 happy_engelbart[94703]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Oct  8 05:46:25 np0005475493 systemd[1]: libpod-da35dd713e333e3b7befb35570e3f7c6ae1f7858402921f1c6dfef48b91b4eed.scope: Deactivated successfully.
Oct  8 05:46:25 np0005475493 podman[94680]: 2025-10-08 09:46:25.787361414 +0000 UTC m=+0.561864980 container died da35dd713e333e3b7befb35570e3f7c6ae1f7858402921f1c6dfef48b91b4eed (image=quay.io/ceph/ceph:v19, name=happy_engelbart, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  8 05:46:25 np0005475493 systemd[1]: var-lib-containers-storage-overlay-1b08e02c731950535b5f167d68c9e4809a283983c7a24f02e671f07cce43f97c-merged.mount: Deactivated successfully.
Oct  8 05:46:25 np0005475493 podman[94680]: 2025-10-08 09:46:25.827717893 +0000 UTC m=+0.602221439 container remove da35dd713e333e3b7befb35570e3f7c6ae1f7858402921f1c6dfef48b91b4eed (image=quay.io/ceph/ceph:v19, name=happy_engelbart, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct  8 05:46:25 np0005475493 systemd[1]: libpod-conmon-da35dd713e333e3b7befb35570e3f7c6ae1f7858402921f1c6dfef48b91b4eed.scope: Deactivated successfully.
Oct  8 05:46:25 np0005475493 ansible-async_wrapper.py[94637]: Module complete (94637)
Oct  8 05:46:26 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 7.2 deep-scrub starts
Oct  8 05:46:26 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 7.2 deep-scrub ok
Oct  8 05:46:26 np0005475493 lvm[94895]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct  8 05:46:26 np0005475493 lvm[94895]: VG ceph_vg0 finished
Oct  8 05:46:26 np0005475493 wizardly_archimedes[94763]: {}
Oct  8 05:46:26 np0005475493 systemd[1]: libpod-1a5a78850054abfdcca30b1eaa484415da28672df8031bf9e9ebd40e0ed33703.scope: Deactivated successfully.
Oct  8 05:46:26 np0005475493 podman[94729]: 2025-10-08 09:46:26.340081914 +0000 UTC m=+0.883684339 container died 1a5a78850054abfdcca30b1eaa484415da28672df8031bf9e9ebd40e0ed33703 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_archimedes, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  8 05:46:26 np0005475493 systemd[1]: libpod-1a5a78850054abfdcca30b1eaa484415da28672df8031bf9e9ebd40e0ed33703.scope: Consumed 1.131s CPU time.
Oct  8 05:46:26 np0005475493 systemd[1]: var-lib-containers-storage-overlay-36feeff8abfca48db6d5287438fa72eb892e47e91815fc11f029fe649a5bb95b-merged.mount: Deactivated successfully.
Oct  8 05:46:26 np0005475493 podman[94729]: 2025-10-08 09:46:26.380424562 +0000 UTC m=+0.924026987 container remove 1a5a78850054abfdcca30b1eaa484415da28672df8031bf9e9ebd40e0ed33703 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_archimedes, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  8 05:46:26 np0005475493 systemd[1]: libpod-conmon-1a5a78850054abfdcca30b1eaa484415da28672df8031bf9e9ebd40e0ed33703.scope: Deactivated successfully.
Oct  8 05:46:26 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  8 05:46:26 np0005475493 python3[94899]: ansible-ansible.legacy.async_status Invoked with jid=j189820904953.94619 mode=status _async_dir=/root/.ansible_async
Oct  8 05:46:26 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:46:26 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  8 05:46:26 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:46:26 np0005475493 ceph-mgr[73869]: [progress INFO root] update: starting ev affb329f-dae8-4723-a1e4-2bc80680611b (Updating mds.cephfs deployment (+3 -> 3))
Oct  8 05:46:26 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.wfaozr", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0)
Oct  8 05:46:26 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.wfaozr", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Oct  8 05:46:26 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.wfaozr", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Oct  8 05:46:26 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  8 05:46:26 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  8 05:46:26 np0005475493 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Deploying daemon mds.cephfs.compute-2.wfaozr on compute-2
Oct  8 05:46:26 np0005475493 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Deploying daemon mds.cephfs.compute-2.wfaozr on compute-2
Oct  8 05:46:26 np0005475493 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:46:26 np0005475493 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:46:26 np0005475493 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.wfaozr", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Oct  8 05:46:26 np0005475493 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.wfaozr", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Oct  8 05:46:26 np0005475493 python3[94961]: ansible-ansible.legacy.async_status Invoked with jid=j189820904953.94619 mode=cleanup _async_dir=/root/.ansible_async
Oct  8 05:46:27 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v14: 198 pgs: 1 active+clean+scrubbing, 197 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 0 B/s wr, 7 op/s
Oct  8 05:46:27 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 7.3 scrub starts
Oct  8 05:46:27 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 7.3 scrub ok
Oct  8 05:46:27 np0005475493 python3[94987]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 787292cc-8154-50c4-9e00-e9be3e817149 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  8 05:46:27 np0005475493 podman[94988]: 2025-10-08 09:46:27.350704757 +0000 UTC m=+0.046015362 container create b7ec3ed55735ef6b7b299de30db9c20ae70d0ad20c7aec8dfe163a0fa0fecb13 (image=quay.io/ceph/ceph:v19, name=gifted_feynman, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  8 05:46:27 np0005475493 systemd[1]: Started libpod-conmon-b7ec3ed55735ef6b7b299de30db9c20ae70d0ad20c7aec8dfe163a0fa0fecb13.scope.
Oct  8 05:46:27 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:46:27 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a144b7ec43109ff5ba632ff48c0e7e024c04b31f59d975fe33011fe35455d318/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 05:46:27 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a144b7ec43109ff5ba632ff48c0e7e024c04b31f59d975fe33011fe35455d318/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 05:46:27 np0005475493 podman[94988]: 2025-10-08 09:46:27.331717799 +0000 UTC m=+0.027028424 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  8 05:46:27 np0005475493 podman[94988]: 2025-10-08 09:46:27.440284355 +0000 UTC m=+0.135594990 container init b7ec3ed55735ef6b7b299de30db9c20ae70d0ad20c7aec8dfe163a0fa0fecb13 (image=quay.io/ceph/ceph:v19, name=gifted_feynman, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct  8 05:46:27 np0005475493 podman[94988]: 2025-10-08 09:46:27.446540975 +0000 UTC m=+0.141851590 container start b7ec3ed55735ef6b7b299de30db9c20ae70d0ad20c7aec8dfe163a0fa0fecb13 (image=quay.io/ceph/ceph:v19, name=gifted_feynman, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  8 05:46:27 np0005475493 podman[94988]: 2025-10-08 09:46:27.449323709 +0000 UTC m=+0.144634304 container attach b7ec3ed55735ef6b7b299de30db9c20ae70d0ad20c7aec8dfe163a0fa0fecb13 (image=quay.io/ceph/ceph:v19, name=gifted_feynman, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  8 05:46:27 np0005475493 ceph-mon[73572]: Deploying daemon mds.cephfs.compute-2.wfaozr on compute-2
Oct  8 05:46:27 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.14556 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct  8 05:46:27 np0005475493 gifted_feynman[95004]: 
Oct  8 05:46:27 np0005475493 gifted_feynman[95004]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Oct  8 05:46:27 np0005475493 systemd[1]: libpod-b7ec3ed55735ef6b7b299de30db9c20ae70d0ad20c7aec8dfe163a0fa0fecb13.scope: Deactivated successfully.
Oct  8 05:46:27 np0005475493 podman[94988]: 2025-10-08 09:46:27.819799841 +0000 UTC m=+0.515110456 container died b7ec3ed55735ef6b7b299de30db9c20ae70d0ad20c7aec8dfe163a0fa0fecb13 (image=quay.io/ceph/ceph:v19, name=gifted_feynman, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  8 05:46:27 np0005475493 systemd[1]: var-lib-containers-storage-overlay-a144b7ec43109ff5ba632ff48c0e7e024c04b31f59d975fe33011fe35455d318-merged.mount: Deactivated successfully.
Oct  8 05:46:27 np0005475493 podman[94988]: 2025-10-08 09:46:27.864023137 +0000 UTC m=+0.559333752 container remove b7ec3ed55735ef6b7b299de30db9c20ae70d0ad20c7aec8dfe163a0fa0fecb13 (image=quay.io/ceph/ceph:v19, name=gifted_feynman, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  8 05:46:27 np0005475493 systemd[1]: libpod-conmon-b7ec3ed55735ef6b7b299de30db9c20ae70d0ad20c7aec8dfe163a0fa0fecb13.scope: Deactivated successfully.
Oct  8 05:46:28 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 7.4 scrub starts
Oct  8 05:46:28 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 7.4 scrub ok
Oct  8 05:46:28 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e51 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  8 05:46:28 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct  8 05:46:28 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:46:28 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct  8 05:46:28 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:46:28 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Oct  8 05:46:28 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:46:28 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.lphril", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0)
Oct  8 05:46:28 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.lphril", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Oct  8 05:46:28 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.lphril", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Oct  8 05:46:28 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  8 05:46:28 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  8 05:46:28 np0005475493 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Deploying daemon mds.cephfs.compute-0.lphril on compute-0
Oct  8 05:46:28 np0005475493 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Deploying daemon mds.cephfs.compute-0.lphril on compute-0
Oct  8 05:46:28 np0005475493 python3[95066]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 787292cc-8154-50c4-9e00-e9be3e817149 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ls --export -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  8 05:46:28 np0005475493 podman[95090]: 2025-10-08 09:46:28.755268645 +0000 UTC m=+0.035476691 container create 2c2cf4ec0c96446c79a7fb211222232ab9c77a3bf4da1a4527cad76d6cb7993d (image=quay.io/ceph/ceph:v19, name=tender_bassi, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct  8 05:46:28 np0005475493 systemd[1]: Started libpod-conmon-2c2cf4ec0c96446c79a7fb211222232ab9c77a3bf4da1a4527cad76d6cb7993d.scope.
Oct  8 05:46:28 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:46:28 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a63d0bf7d1e549f61de671eac8f6938109748a4ff77f2a4317118f32d79d9a9/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 05:46:28 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a63d0bf7d1e549f61de671eac8f6938109748a4ff77f2a4317118f32d79d9a9/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 05:46:28 np0005475493 podman[95090]: 2025-10-08 09:46:28.818102629 +0000 UTC m=+0.098310715 container init 2c2cf4ec0c96446c79a7fb211222232ab9c77a3bf4da1a4527cad76d6cb7993d (image=quay.io/ceph/ceph:v19, name=tender_bassi, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  8 05:46:28 np0005475493 podman[95090]: 2025-10-08 09:46:28.825402091 +0000 UTC m=+0.105610137 container start 2c2cf4ec0c96446c79a7fb211222232ab9c77a3bf4da1a4527cad76d6cb7993d (image=quay.io/ceph/ceph:v19, name=tender_bassi, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  8 05:46:28 np0005475493 podman[95090]: 2025-10-08 09:46:28.82833929 +0000 UTC m=+0.108547366 container attach 2c2cf4ec0c96446c79a7fb211222232ab9c77a3bf4da1a4527cad76d6cb7993d (image=quay.io/ceph/ceph:v19, name=tender_bassi, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  8 05:46:28 np0005475493 podman[95090]: 2025-10-08 09:46:28.742245779 +0000 UTC m=+0.022453845 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  8 05:46:29 np0005475493 podman[95196]: 2025-10-08 09:46:29.139728702 +0000 UTC m=+0.043124644 container create 0ab24dd2d237ec8353beca4cf4a31739016a318660dd38c2125a53c181c7e66d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_payne, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325)
Oct  8 05:46:29 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 7.f deep-scrub starts
Oct  8 05:46:29 np0005475493 systemd[1]: Started libpod-conmon-0ab24dd2d237ec8353beca4cf4a31739016a318660dd38c2125a53c181c7e66d.scope.
Oct  8 05:46:29 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v15: 198 pgs: 1 active+clean+scrubbing, 197 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 0 B/s wr, 6 op/s
Oct  8 05:46:29 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 7.f deep-scrub ok
Oct  8 05:46:29 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.14562 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct  8 05:46:29 np0005475493 tender_bassi[95132]: 
Oct  8 05:46:29 np0005475493 tender_bassi[95132]: [{"networks": ["192.168.122.0/24"], "placement": {"count": 1, "hosts": ["compute-0"]}, "service_name": "alertmanager", "service_type": "alertmanager"}, {"placement": {"host_pattern": "*"}, "service_name": "crash", "service_type": "crash"}, {"networks": ["192.168.122.0/24"], "placement": {"count": 1, "hosts": ["compute-0"]}, "service_name": "grafana", "service_type": "grafana", "spec": {"anonymous_access": true, "protocol": "https"}}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_id": "nfs.cephfs", "service_name": "ingress.nfs.cephfs", "service_type": "ingress", "spec": {"backend_service": "nfs.cephfs", "enable_haproxy_protocol": true, "first_virtual_router_id": 50, "frontend_port": 2049, "monitor_port": 9049, "virtual_ip": "192.168.122.2/24"}}, {"placement": {"count": 2}, "service_id": "rgw.default", "service_name": "ingress.rgw.default", "service_type": "ingress", "spec": {"backend_service": "rgw.rgw", "first_virtual_router_id": 50, "frontend_port": 8080, "monitor_port": 8999, "virtual_interface_networks": ["192.168.122.0/24"], "virtual_ip": "192.168.122.2/24"}}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_id": "cephfs", "service_name": "mds.cephfs", "service_type": "mds"}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_name": "mgr", "service_type": "mgr"}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_name": "mon", "service_type": "mon"}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_id": "cephfs", "service_name": "nfs.cephfs", "service_type": "nfs", "spec": {"enable_haproxy_protocol": true, "port": 12049}}, {"placement": {"host_pattern": "*"}, "service_name": "node-exporter", "service_type": "node-exporter"}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_id": "default_drive_group", "service_name": "osd.default_drive_group", "service_type": "osd", "spec": {"data_devices": {"paths": ["/dev/ceph_vg0/ceph_lv0"]}, "filter_logic": "AND", "objectstore": "bluestore"}}, {"networks": ["192.168.122.0/24"], "placement": {"count": 1, "hosts": ["compute-0"]}, "service_name": "prometheus", "service_type": "prometheus"}, {"networks": ["192.168.122.0/24"], "placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_id": "rgw", "service_name": "rgw.rgw", "service_type": "rgw", "spec": {"rgw_frontend_port": 8082}}]
Oct  8 05:46:29 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:46:29 np0005475493 systemd[1]: libpod-2c2cf4ec0c96446c79a7fb211222232ab9c77a3bf4da1a4527cad76d6cb7993d.scope: Deactivated successfully.
Oct  8 05:46:29 np0005475493 podman[95090]: 2025-10-08 09:46:29.206044931 +0000 UTC m=+0.486252987 container died 2c2cf4ec0c96446c79a7fb211222232ab9c77a3bf4da1a4527cad76d6cb7993d (image=quay.io/ceph/ceph:v19, name=tender_bassi, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True)
Oct  8 05:46:29 np0005475493 podman[95196]: 2025-10-08 09:46:29.215905841 +0000 UTC m=+0.119301793 container init 0ab24dd2d237ec8353beca4cf4a31739016a318660dd38c2125a53c181c7e66d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_payne, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  8 05:46:29 np0005475493 podman[95196]: 2025-10-08 09:46:29.121503467 +0000 UTC m=+0.024899439 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 05:46:29 np0005475493 podman[95196]: 2025-10-08 09:46:29.221661096 +0000 UTC m=+0.125057038 container start 0ab24dd2d237ec8353beca4cf4a31739016a318660dd38c2125a53c181c7e66d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_payne, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Oct  8 05:46:29 np0005475493 naughty_payne[95212]: 167 167
Oct  8 05:46:29 np0005475493 systemd[1]: libpod-0ab24dd2d237ec8353beca4cf4a31739016a318660dd38c2125a53c181c7e66d.scope: Deactivated successfully.
Oct  8 05:46:29 np0005475493 podman[95196]: 2025-10-08 09:46:29.225485623 +0000 UTC m=+0.128881565 container attach 0ab24dd2d237ec8353beca4cf4a31739016a318660dd38c2125a53c181c7e66d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_payne, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Oct  8 05:46:29 np0005475493 podman[95196]: 2025-10-08 09:46:29.22768213 +0000 UTC m=+0.131078072 container died 0ab24dd2d237ec8353beca4cf4a31739016a318660dd38c2125a53c181c7e66d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_payne, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  8 05:46:29 np0005475493 systemd[1]: var-lib-containers-storage-overlay-3660d7d06911837baf8b2e85783911cedd80aed8baa6ab0ceb3e8aee4cd70508-merged.mount: Deactivated successfully.
Oct  8 05:46:29 np0005475493 podman[95196]: 2025-10-08 09:46:29.260770437 +0000 UTC m=+0.164166379 container remove 0ab24dd2d237ec8353beca4cf4a31739016a318660dd38c2125a53c181c7e66d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_payne, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  8 05:46:29 np0005475493 systemd[1]: libpod-conmon-0ab24dd2d237ec8353beca4cf4a31739016a318660dd38c2125a53c181c7e66d.scope: Deactivated successfully.
Oct  8 05:46:29 np0005475493 systemd[1]: var-lib-containers-storage-overlay-8a63d0bf7d1e549f61de671eac8f6938109748a4ff77f2a4317118f32d79d9a9-merged.mount: Deactivated successfully.
Oct  8 05:46:29 np0005475493 podman[95090]: 2025-10-08 09:46:29.295537086 +0000 UTC m=+0.575745132 container remove 2c2cf4ec0c96446c79a7fb211222232ab9c77a3bf4da1a4527cad76d6cb7993d (image=quay.io/ceph/ceph:v19, name=tender_bassi, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct  8 05:46:29 np0005475493 systemd[1]: Reloading.
Oct  8 05:46:29 np0005475493 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  8 05:46:29 np0005475493 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  8 05:46:29 np0005475493 systemd[1]: libpod-conmon-2c2cf4ec0c96446c79a7fb211222232ab9c77a3bf4da1a4527cad76d6cb7993d.scope: Deactivated successfully.
Oct  8 05:46:29 np0005475493 systemd[1]: Reloading.
Oct  8 05:46:29 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).mds e3 new map
Oct  8 05:46:29 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).mds e3 print_map#012e3#012btime 2025-10-08T09:46:29:578022+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0112#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-10-08T09:46:14.191787+0000#012modified#0112025-10-08T09:46:14.191787+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012max_mds#0111#012in#011#012up#011{}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012qdb_cluster#011leader: 0 members: #012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-2.wfaozr{-1:24190} state up:standby seq 1 addr [v2:192.168.122.102:6804/8100770,v1:192.168.122.102:6805/8100770] compat {c=[1],r=[1],i=[1fff]}]
Oct  8 05:46:29 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.102:6804/8100770,v1:192.168.122.102:6805/8100770] up:boot
Oct  8 05:46:29 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).mds e3 assigned standby [v2:192.168.122.102:6804/8100770,v1:192.168.122.102:6805/8100770] as mds.0
Oct  8 05:46:29 np0005475493 ceph-mon[73572]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-2.wfaozr assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Oct  8 05:46:29 np0005475493 ceph-mon[73572]: log_channel(cluster) log [INF] : Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Oct  8 05:46:29 np0005475493 ceph-mon[73572]: log_channel(cluster) log [INF] : Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Oct  8 05:46:29 np0005475493 ceph-mon[73572]: log_channel(cluster) log [INF] : Cluster is now healthy
Oct  8 05:46:29 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : fsmap cephfs:0 1 up:standby
Oct  8 05:46:29 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-2.wfaozr"} v 0)
Oct  8 05:46:29 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-2.wfaozr"}]: dispatch
Oct  8 05:46:29 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).mds e3 all = 0
Oct  8 05:46:29 np0005475493 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  8 05:46:29 np0005475493 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  8 05:46:29 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).mds e4 new map
Oct  8 05:46:29 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).mds e4 print_map#012e4#012btime 2025-10-08T09:46:29:619207+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0114#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-10-08T09:46:14.191787+0000#012modified#0112025-10-08T09:46:29.619201+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012max_mds#0111#012in#0110#012up#011{0=24190}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012qdb_cluster#011leader: 0 members: #012[mds.cephfs.compute-2.wfaozr{0:24190} state up:creating seq 1 addr [v2:192.168.122.102:6804/8100770,v1:192.168.122.102:6805/8100770] compat {c=[1],r=[1],i=[1fff]}]#012 #012 
Oct  8 05:46:29 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.wfaozr=up:creating}
Oct  8 05:46:29 np0005475493 systemd[1]: Starting Ceph mds.cephfs.compute-0.lphril for 787292cc-8154-50c4-9e00-e9be3e817149...
Oct  8 05:46:29 np0005475493 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:46:29 np0005475493 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:46:29 np0005475493 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:46:29 np0005475493 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.lphril", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Oct  8 05:46:29 np0005475493 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.lphril", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Oct  8 05:46:29 np0005475493 ceph-mon[73572]: Deploying daemon mds.cephfs.compute-0.lphril on compute-0
Oct  8 05:46:29 np0005475493 ceph-mon[73572]: daemon mds.cephfs.compute-2.wfaozr assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Oct  8 05:46:29 np0005475493 ceph-mon[73572]: Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Oct  8 05:46:29 np0005475493 ceph-mon[73572]: Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Oct  8 05:46:29 np0005475493 ceph-mon[73572]: Cluster is now healthy
Oct  8 05:46:30 np0005475493 ceph-mon[73572]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-2.wfaozr is now active in filesystem cephfs as rank 0
Oct  8 05:46:30 np0005475493 podman[95365]: 2025-10-08 09:46:30.047546574 +0000 UTC m=+0.035267904 container create bfe144fae903e2a681026d8a7a90cabe9d3350b5cab2de2a7f7ba7544ed11e76 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-mds-cephfs-compute-0-lphril, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  8 05:46:30 np0005475493 ansible-async_wrapper.py[94636]: Done in kid B.
Oct  8 05:46:30 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da9a539e8e9276f5c905c2ad90382c7acdeecd98bc4075c5381c73c44fb51ba5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 05:46:30 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da9a539e8e9276f5c905c2ad90382c7acdeecd98bc4075c5381c73c44fb51ba5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 05:46:30 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da9a539e8e9276f5c905c2ad90382c7acdeecd98bc4075c5381c73c44fb51ba5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  8 05:46:30 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da9a539e8e9276f5c905c2ad90382c7acdeecd98bc4075c5381c73c44fb51ba5/merged/var/lib/ceph/mds/ceph-cephfs.compute-0.lphril supports timestamps until 2038 (0x7fffffff)
Oct  8 05:46:30 np0005475493 podman[95365]: 2025-10-08 09:46:30.10879671 +0000 UTC m=+0.096518060 container init bfe144fae903e2a681026d8a7a90cabe9d3350b5cab2de2a7f7ba7544ed11e76 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-mds-cephfs-compute-0-lphril, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  8 05:46:30 np0005475493 podman[95365]: 2025-10-08 09:46:30.113000188 +0000 UTC m=+0.100721518 container start bfe144fae903e2a681026d8a7a90cabe9d3350b5cab2de2a7f7ba7544ed11e76 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-mds-cephfs-compute-0-lphril, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  8 05:46:30 np0005475493 bash[95365]: bfe144fae903e2a681026d8a7a90cabe9d3350b5cab2de2a7f7ba7544ed11e76
Oct  8 05:46:30 np0005475493 podman[95365]: 2025-10-08 09:46:30.031599699 +0000 UTC m=+0.019321049 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 05:46:30 np0005475493 systemd[1]: Started Ceph mds.cephfs.compute-0.lphril for 787292cc-8154-50c4-9e00-e9be3e817149.
Oct  8 05:46:30 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 7.8 scrub starts
Oct  8 05:46:30 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 7.8 scrub ok
Oct  8 05:46:30 np0005475493 ceph-mds[95385]: set uid:gid to 167:167 (ceph:ceph)
Oct  8 05:46:30 np0005475493 ceph-mds[95385]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mds, pid 2
Oct  8 05:46:30 np0005475493 ceph-mds[95385]: main not setting numa affinity
Oct  8 05:46:30 np0005475493 ceph-mds[95385]: pidfile_write: ignore empty --pid-file
Oct  8 05:46:30 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mds-cephfs-compute-0-lphril[95381]: starting mds.cephfs.compute-0.lphril at 
Oct  8 05:46:30 np0005475493 ceph-mds[95385]: mds.cephfs.compute-0.lphril Updating MDS map to version 4 from mon.2
Oct  8 05:46:30 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  8 05:46:30 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:46:30 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  8 05:46:30 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:46:30 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Oct  8 05:46:30 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:46:30 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.bumazt", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0)
Oct  8 05:46:30 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.bumazt", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Oct  8 05:46:30 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.bumazt", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Oct  8 05:46:30 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  8 05:46:30 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  8 05:46:30 np0005475493 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Deploying daemon mds.cephfs.compute-1.bumazt on compute-1
Oct  8 05:46:30 np0005475493 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Deploying daemon mds.cephfs.compute-1.bumazt on compute-1
Oct  8 05:46:30 np0005475493 python3[95429]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 787292cc-8154-50c4-9e00-e9be3e817149 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ps -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  8 05:46:30 np0005475493 podman[95430]: 2025-10-08 09:46:30.409687332 +0000 UTC m=+0.037136613 container create ab7c557f1c302ba662112dbb057803724cf5e163a44422b1fe7098fb130a8b16 (image=quay.io/ceph/ceph:v19, name=youthful_lederberg, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct  8 05:46:30 np0005475493 systemd[1]: Started libpod-conmon-ab7c557f1c302ba662112dbb057803724cf5e163a44422b1fe7098fb130a8b16.scope.
Oct  8 05:46:30 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:46:30 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6757cb89e17999b9f3eafc01a9705ed849b0c111a2248395d8df7004baebb89c/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 05:46:30 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6757cb89e17999b9f3eafc01a9705ed849b0c111a2248395d8df7004baebb89c/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 05:46:30 np0005475493 podman[95430]: 2025-10-08 09:46:30.482507499 +0000 UTC m=+0.109956810 container init ab7c557f1c302ba662112dbb057803724cf5e163a44422b1fe7098fb130a8b16 (image=quay.io/ceph/ceph:v19, name=youthful_lederberg, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Oct  8 05:46:30 np0005475493 podman[95430]: 2025-10-08 09:46:30.489354488 +0000 UTC m=+0.116803769 container start ab7c557f1c302ba662112dbb057803724cf5e163a44422b1fe7098fb130a8b16 (image=quay.io/ceph/ceph:v19, name=youthful_lederberg, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Oct  8 05:46:30 np0005475493 podman[95430]: 2025-10-08 09:46:30.394102097 +0000 UTC m=+0.021551388 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  8 05:46:30 np0005475493 podman[95430]: 2025-10-08 09:46:30.493000168 +0000 UTC m=+0.120449459 container attach ab7c557f1c302ba662112dbb057803724cf5e163a44422b1fe7098fb130a8b16 (image=quay.io/ceph/ceph:v19, name=youthful_lederberg, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  8 05:46:30 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.14574 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct  8 05:46:30 np0005475493 youthful_lederberg[95445]: 
Oct  8 05:46:30 np0005475493 youthful_lederberg[95445]: [{"container_id": "f2b90c859a73", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "quay.io/ceph/ceph@sha256:af0c5903e901e329adabe219dfc8d0c3efc1f05102a753902f33ee16c26b6cee"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "0.13%", "created": "2025-10-08T09:43:42.269755Z", "daemon_id": "compute-0", "daemon_name": "crash.compute-0", "daemon_type": "crash", "hostname": "compute-0", "is_active": false, "last_refresh": "2025-10-08T09:46:15.296115Z", "memory_usage": 7795113, "ports": [], "service_name": "crash", "started": "2025-10-08T09:43:42.156989Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-787292cc-8154-50c4-9e00-e9be3e817149@crash.compute-0", "version": "19.2.3"}, {"container_id": "53f09fa290e6", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "0.38%", "created": "2025-10-08T09:44:18.547121Z", "daemon_id": "compute-1", "daemon_name": "crash.compute-1", "daemon_type": "crash", "hostname": "compute-1", "is_active": false, "last_refresh": "2025-10-08T09:46:15.405671Z", "memory_usage": 7821328, "ports": [], "service_name": "crash", "started": "2025-10-08T09:44:18.459704Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-787292cc-8154-50c4-9e00-e9be3e817149@crash.compute-1", "version": "19.2.3"}, {"container_id": "0965ec386585", "container_image_digests": 
["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "0.30%", "created": "2025-10-08T09:45:15.118362Z", "daemon_id": "compute-2", "daemon_name": "crash.compute-2", "daemon_type": "crash", "hostname": "compute-2", "is_active": false, "last_refresh": "2025-10-08T09:46:15.348367Z", "memory_usage": 7821328, "ports": [], "service_name": "crash", "started": "2025-10-08T09:45:15.003335Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-787292cc-8154-50c4-9e00-e9be3e817149@crash.compute-2", "version": "19.2.3"}, {"daemon_id": "cephfs.compute-0.lphril", "daemon_name": "mds.cephfs.compute-0.lphril", "daemon_type": "mds", "events": ["2025-10-08T09:46:30.188417Z daemon:mds.cephfs.compute-0.lphril [INFO] \"Deployed mds.cephfs.compute-0.lphril on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "ports": [], "service_name": "mds.cephfs", "status": 2, "status_desc": "starting"}, {"daemon_id": "cephfs.compute-2.wfaozr", "daemon_name": "mds.cephfs.compute-2.wfaozr", "daemon_type": "mds", "events": ["2025-10-08T09:46:28.648019Z daemon:mds.cephfs.compute-2.wfaozr [INFO] \"Deployed mds.cephfs.compute-2.wfaozr on host 'compute-2'\""], "hostname": "compute-2", "is_active": false, "ports": [], "service_name": "mds.cephfs", "status": 2, "status_desc": "starting"}, {"container_id": "507427ceb179", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "quay.io/ceph/ceph@sha256:af0c5903e901e329adabe219dfc8d0c3efc1f05102a753902f33ee16c26b6cee"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph:v19", "cpu_percentage": "27.63%", "created": 
"2025-10-08T09:43:06.346964Z", "daemon_id": "compute-0.ixicfj", "daemon_name": "mgr.compute-0.ixicfj", "daemon_type": "mgr", "hostname": "compute-0", "is_active": false, "last_refresh": "2025-10-08T09:46:15.296017Z", "memory_usage": 541484646, "ports": [9283, 8765], "service_name": "mgr", "started": "2025-10-08T09:43:04.895223Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-787292cc-8154-50c4-9e00-e9be3e817149@mgr.compute-0.ixicfj", "version": "19.2.3"}, {"container_id": "0003a3387a2b", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "41.13%", "created": "2025-10-08T09:45:13.213552Z", "daemon_id": "compute-1.swlvov", "daemon_name": "mgr.compute-1.swlvov", "daemon_type": "mgr", "hostname": "compute-1", "is_active": false, "last_refresh": "2025-10-08T09:46:15.405937Z", "memory_usage": 504260198, "ports": [8765], "service_name": "mgr", "started": "2025-10-08T09:45:13.123094Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-787292cc-8154-50c4-9e00-e9be3e817149@mgr.compute-1.swlvov", "version": "19.2.3"}, {"container_id": "e85811784b26", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "39.33%", "created": "2025-10-08T09:45:07.513166Z", "daemon_id": "compute-2.mtagwx", "daemon_name": "mgr.compute-2.mtagwx", "daemon_type": "mgr", "hostname": "compute-2", "is_active": false, "last_refresh": "2025-10-08T09:46:15.348193Z", "memory_usage": 504469913, "ports": 
[8765], "service_name": "mgr", "started": "2025-10-08T09:45:07.403917Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-787292cc-8154-50c4-9e00-e9be3e817149@mgr.compute-2.mtagwx", "version": "19.2.3"}, {"container_id": "01c666addd85", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "quay.io/ceph/ceph@sha256:af0c5903e901e329adabe219dfc8d0c3efc1f05102a753902f33ee16c26b6cee"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph:v19", "cpu_percentage": "2.65%", "created": "2025-10-08T09:43:01.297917Z", "daemon_id": "compute-0", "daemon_name": "mon.compute-0", "daemon_type": "mon", "hostname": "compute-0", "is_active": false, "last_refresh": "2025-10-08T09:46:15.295901Z", "memory_request": 2147483648, "memory_usage": 60597207, "ports": [], "service_name": "mon", "started": "2025-10-08T09:43:03.162430Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-787292cc-8154-50c4-9e00-e9be3e817149@mon.compute-0", "version": "19.2.3"}, {"container_id": "1b83aab6dc82", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "2.10%", "created": "2025-10-08T09:45:02.392269Z", "daemon_id": "compute-1", "daemon_name": "mon.compute-1", "daemon_type": "mon", "hostname": "compute-1", "is_active": false, "last_refresh": "2025-10-08T09:46:15.405865Z", "memory_request": 2147483648, "memory_usage": 49744445, "ports": [], "service_name": "mon", "started": "2025-10-08T09:45:02.298589Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-787292cc-8154-50c4-9e00-e9be3e817149@mon.compute-1", "version": "19.2.3"}, 
{"container_id": "0af6b66ef837", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "qu
Oct  8 05:46:30 np0005475493 youthful_lederberg[95445]: s_desc": "running", "systemd_unit": "ceph-787292cc-8154-50c4-9e00-e9be3e817149@rgw.rgw.compute-2.pgshil", "version": "19.2.3"}]
Oct  8 05:46:30 np0005475493 systemd[1]: libpod-ab7c557f1c302ba662112dbb057803724cf5e163a44422b1fe7098fb130a8b16.scope: Deactivated successfully.
Oct  8 05:46:30 np0005475493 podman[95430]: 2025-10-08 09:46:30.854263028 +0000 UTC m=+0.481712309 container died ab7c557f1c302ba662112dbb057803724cf5e163a44422b1fe7098fb130a8b16 (image=quay.io/ceph/ceph:v19, name=youthful_lederberg, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325)
Oct  8 05:46:30 np0005475493 systemd[1]: var-lib-containers-storage-overlay-6757cb89e17999b9f3eafc01a9705ed849b0c111a2248395d8df7004baebb89c-merged.mount: Deactivated successfully.
Oct  8 05:46:30 np0005475493 podman[95430]: 2025-10-08 09:46:30.894615448 +0000 UTC m=+0.522064729 container remove ab7c557f1c302ba662112dbb057803724cf5e163a44422b1fe7098fb130a8b16 (image=quay.io/ceph/ceph:v19, name=youthful_lederberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  8 05:46:30 np0005475493 systemd[1]: libpod-conmon-ab7c557f1c302ba662112dbb057803724cf5e163a44422b1fe7098fb130a8b16.scope: Deactivated successfully.
Oct  8 05:46:30 np0005475493 ceph-mon[73572]: daemon mds.cephfs.compute-2.wfaozr is now active in filesystem cephfs as rank 0
Oct  8 05:46:30 np0005475493 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:46:30 np0005475493 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:46:30 np0005475493 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:46:30 np0005475493 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.bumazt", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Oct  8 05:46:30 np0005475493 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.bumazt", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Oct  8 05:46:30 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).mds e5 new map
Oct  8 05:46:30 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).mds e5 print_map#012e5#012btime 2025-10-08T09:46:30:899414+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0115#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-10-08T09:46:14.191787+0000#012modified#0112025-10-08T09:46:30.899412+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012max_mds#0111#012in#0110#012up#011{0=24190}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012qdb_cluster#011leader: 24190 members: 24190#012[mds.cephfs.compute-2.wfaozr{0:24190} state up:active seq 2 addr [v2:192.168.122.102:6804/8100770,v1:192.168.122.102:6805/8100770] compat {c=[1],r=[1],i=[1fff]}]#012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-0.lphril{-1:24197} state up:standby seq 1 addr [v2:192.168.122.100:6806/2465393949,v1:192.168.122.100:6807/2465393949] compat {c=[1],r=[1],i=[1fff]}]
Oct  8 05:46:30 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.102:6804/8100770,v1:192.168.122.102:6805/8100770] up:active
Oct  8 05:46:30 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6806/2465393949,v1:192.168.122.100:6807/2465393949] up:boot
Oct  8 05:46:30 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.wfaozr=up:active} 1 up:standby
Oct  8 05:46:30 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-0.lphril"} v 0)
Oct  8 05:46:30 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-0.lphril"}]: dispatch
Oct  8 05:46:30 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).mds e5 all = 0
Oct  8 05:46:30 np0005475493 ceph-mds[95385]: mds.cephfs.compute-0.lphril Updating MDS map to version 5 from mon.2
Oct  8 05:46:30 np0005475493 ceph-mds[95385]: mds.cephfs.compute-0.lphril Monitors have assigned me to become a standby
Oct  8 05:46:30 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).mds e6 new map
Oct  8 05:46:30 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).mds e6 print_map#012e6#012btime 2025-10-08T09:46:30:924934+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0115#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-10-08T09:46:14.191787+0000#012modified#0112025-10-08T09:46:30.899412+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012max_mds#0111#012in#0110#012up#011{0=24190}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0111#012qdb_cluster#011leader: 24190 members: 24190#012[mds.cephfs.compute-2.wfaozr{0:24190} state up:active seq 2 addr [v2:192.168.122.102:6804/8100770,v1:192.168.122.102:6805/8100770] compat {c=[1],r=[1],i=[1fff]}]#012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-0.lphril{-1:24197} state up:standby seq 1 addr [v2:192.168.122.100:6806/2465393949,v1:192.168.122.100:6807/2465393949] compat {c=[1],r=[1],i=[1fff]}]
Oct  8 05:46:30 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.wfaozr=up:active} 1 up:standby
Oct  8 05:46:31 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 7.9 scrub starts
Oct  8 05:46:31 np0005475493 rsyslogd[1005]: message too long (16383) with configured size 8096, begin of message is: [{"container_id": "f2b90c859a73", "container_image_digests": ["quay.io/ceph/ceph [v8.2506.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Oct  8 05:46:31 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 7.9 scrub ok
Oct  8 05:46:31 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v16: 198 pgs: 198 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 0 B/s wr, 6 op/s
Oct  8 05:46:31 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct  8 05:46:31 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:46:31 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct  8 05:46:31 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:46:31 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Oct  8 05:46:31 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:46:31 np0005475493 ceph-mgr[73869]: [progress INFO root] complete: finished ev affb329f-dae8-4723-a1e4-2bc80680611b (Updating mds.cephfs deployment (+3 -> 3))
Oct  8 05:46:31 np0005475493 ceph-mgr[73869]: [progress INFO root] Completed event affb329f-dae8-4723-a1e4-2bc80680611b (Updating mds.cephfs deployment (+3 -> 3)) in 5 seconds
Oct  8 05:46:31 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mds_join_fs}] v 0)
Oct  8 05:46:31 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:46:31 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Oct  8 05:46:31 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:46:31 np0005475493 ceph-mgr[73869]: [progress INFO root] update: starting ev 14600416-a126-4524-a7b9-d20314f3302e (Updating nfs.cephfs deployment (+3 -> 3))
Oct  8 05:46:31 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct  8 05:46:31 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:46:31 np0005475493 ceph-mgr[73869]: [cephadm INFO cephadm.services.nfs] Creating key for client.nfs.cephfs.0.0.compute-1.lgtqnn
Oct  8 05:46:31 np0005475493 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Creating key for client.nfs.cephfs.0.0.compute-1.lgtqnn
Oct  8 05:46:31 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.lgtqnn", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]} v 0)
Oct  8 05:46:31 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.lgtqnn", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Oct  8 05:46:31 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.lgtqnn", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Oct  8 05:46:31 np0005475493 ceph-mgr[73869]: [cephadm INFO root] Ensuring nfs.cephfs.0 is in the ganesha grace table
Oct  8 05:46:31 np0005475493 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Ensuring nfs.cephfs.0 is in the ganesha grace table
Oct  8 05:46:31 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]} v 0)
Oct  8 05:46:31 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Oct  8 05:46:31 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Oct  8 05:46:31 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  8 05:46:31 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  8 05:46:31 np0005475493 ceph-mon[73572]: Deploying daemon mds.cephfs.compute-1.bumazt on compute-1
Oct  8 05:46:31 np0005475493 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:46:31 np0005475493 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:46:31 np0005475493 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:46:31 np0005475493 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:46:31 np0005475493 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:46:31 np0005475493 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:46:31 np0005475493 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.lgtqnn", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Oct  8 05:46:31 np0005475493 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.lgtqnn", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Oct  8 05:46:31 np0005475493 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Oct  8 05:46:31 np0005475493 python3[95507]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 787292cc-8154-50c4-9e00-e9be3e817149 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   -s -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  8 05:46:32 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"} v 0)
Oct  8 05:46:32 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Oct  8 05:46:32 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Oct  8 05:46:32 np0005475493 podman[95509]: 2025-10-08 09:46:32.036375753 +0000 UTC m=+0.042061622 container create f29aada63d6b5107c6071756bb3981daa8cfa0cf6307df4ba2efaaa1097631b5 (image=quay.io/ceph/ceph:v19, name=angry_cray, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  8 05:46:32 np0005475493 systemd[1]: Started libpod-conmon-f29aada63d6b5107c6071756bb3981daa8cfa0cf6307df4ba2efaaa1097631b5.scope.
Oct  8 05:46:32 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:46:32 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/405028fddd60dd945498a838f0b1de70782f90b15ec83e52a16a3af8d2700a59/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 05:46:32 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/405028fddd60dd945498a838f0b1de70782f90b15ec83e52a16a3af8d2700a59/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 05:46:32 np0005475493 podman[95509]: 2025-10-08 09:46:32.02017245 +0000 UTC m=+0.025858269 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  8 05:46:32 np0005475493 podman[95509]: 2025-10-08 09:46:32.131129898 +0000 UTC m=+0.136815747 container init f29aada63d6b5107c6071756bb3981daa8cfa0cf6307df4ba2efaaa1097631b5 (image=quay.io/ceph/ceph:v19, name=angry_cray, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  8 05:46:32 np0005475493 podman[95509]: 2025-10-08 09:46:32.139241575 +0000 UTC m=+0.144927374 container start f29aada63d6b5107c6071756bb3981daa8cfa0cf6307df4ba2efaaa1097631b5 (image=quay.io/ceph/ceph:v19, name=angry_cray, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  8 05:46:32 np0005475493 podman[95509]: 2025-10-08 09:46:32.142595877 +0000 UTC m=+0.148281726 container attach f29aada63d6b5107c6071756bb3981daa8cfa0cf6307df4ba2efaaa1097631b5 (image=quay.io/ceph/ceph:v19, name=angry_cray, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct  8 05:46:32 np0005475493 ceph-mgr[73869]: [cephadm INFO cephadm.services.nfs] Rados config object exists: conf-nfs.cephfs
Oct  8 05:46:32 np0005475493 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Rados config object exists: conf-nfs.cephfs
Oct  8 05:46:32 np0005475493 ceph-mgr[73869]: [cephadm INFO cephadm.services.nfs] Creating key for client.nfs.cephfs.0.0.compute-1.lgtqnn-rgw
Oct  8 05:46:32 np0005475493 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Creating key for client.nfs.cephfs.0.0.compute-1.lgtqnn-rgw
Oct  8 05:46:32 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.lgtqnn-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]} v 0)
Oct  8 05:46:32 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.lgtqnn-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Oct  8 05:46:32 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.lgtqnn-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Oct  8 05:46:32 np0005475493 ceph-mgr[73869]: [cephadm WARNING cephadm.services.nfs] Bind address in nfs.cephfs.0.0.compute-1.lgtqnn's ganesha conf is defaulting to empty
Oct  8 05:46:32 np0005475493 ceph-mgr[73869]: log_channel(cephadm) log [WRN] : Bind address in nfs.cephfs.0.0.compute-1.lgtqnn's ganesha conf is defaulting to empty
Oct  8 05:46:32 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  8 05:46:32 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  8 05:46:32 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 7.b scrub starts
Oct  8 05:46:32 np0005475493 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Deploying daemon nfs.cephfs.0.0.compute-1.lgtqnn on compute-1
Oct  8 05:46:32 np0005475493 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Deploying daemon nfs.cephfs.0.0.compute-1.lgtqnn on compute-1
Oct  8 05:46:32 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 7.b scrub ok
Oct  8 05:46:32 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Oct  8 05:46:32 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2732719460' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Oct  8 05:46:32 np0005475493 angry_cray[95540]: 
Oct  8 05:46:32 np0005475493 angry_cray[95540]: {"fsid":"787292cc-8154-50c4-9e00-e9be3e817149","health":{"status":"HEALTH_OK","checks":{},"mutes":[]},"election_epoch":14,"quorum":[0,1,2],"quorum_names":["compute-0","compute-2","compute-1"],"quorum_age":81,"monmap":{"epoch":3,"min_mon_release_name":"squid","num_mons":3},"osdmap":{"epoch":51,"num_osds":3,"num_up_osds":3,"osd_up_since":1759916737,"num_in_osds":3,"osd_in_since":1759916717,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":198}],"num_pgs":198,"num_pools":12,"num_objects":195,"data_bytes":464595,"bytes_used":88997888,"bytes_avail":64322928640,"bytes_total":64411926528,"read_bytes_sec":15014,"write_bytes_sec":0,"read_op_per_sec":4,"write_op_per_sec":1},"fsmap":{"epoch":6,"btime":"2025-10-08T09:46:30:924934+0000","id":1,"up":1,"in":1,"max":1,"by_rank":[{"filesystem_id":1,"rank":0,"name":"cephfs.compute-2.wfaozr","status":"up:active","gid":24190}],"up:standby":1},"mgrmap":{"available":true,"num_standbys":2,"modules":["cephadm","dashboard","iostat","nfs","restful"],"services":{"dashboard":"http://192.168.122.100:8443/"}},"servicemap":{"epoch":5,"modified":"2025-10-08T09:45:54.969307+0000","services":{"mgr":{"daemons":{"summary":"","compute-0.ixicfj":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"compute-1.swlvov":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"compute-2.mtagwx":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}},"mon":{"daemons":{"summary":"","compute-0":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"compute-1":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"compute-2":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}},"osd":{"daemons":{"summary":"","0":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"1":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"2":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}},"rgw":{"daemons":{"summary":"","14382":{"start_epoch":5,"start_stamp":"2025-10-08T09:45:54.959975+0000","gid":14382,"addr":"192.168.122.100:0/4157537618","metadata":{"arch":"x86_64","ceph_release":"squid","ceph_version":"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)","ceph_version_short":"19.2.3","container_hostname":"compute-0","container_image":"quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec","cpu":"AMD EPYC-Rome Processor","distro":"centos","distro_description":"CentOS Stream 9","distro_version":"9","frontend_config#0":"beast endpoint=192.168.122.100:8082","frontend_type#0":"beast","hostname":"compute-0","id":"rgw.compute-0.wdkdxi","kernel_description":"#1 SMP PREEMPT_DYNAMIC Fri Sep 26 01:13:23 UTC 2025","kernel_version":"5.14.0-620.el9.x86_64","mem_swap_kb":"1048572","mem_total_kb":"7864104","num_handles":"1","os":"Linux","pid":"2","realm_id":"","realm_name":"","zone_id":"246b4a69-3c1d-47ce-b182-d12a3d96d3e3","zone_name":"default","zonegroup_id":"3218c688-50d3-4b3d-9517-1c08371b4e2e","zonegroup_name":"default"},"task_status":{}},"24146":{"start_epoch":5,"start_stamp":"2025-10-08T09:45:54.963319+0000","gid":24146,"addr":"192.168.122.101:0/1900470648","metadata":{"arch":"x86_64","ceph_release":"squid","ceph_version":"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)","ceph_version_short":"19.2.3","container_hostname":"compute-1","container_image":"quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec","cpu":"AMD EPYC-Rome Processor","distro":"centos","distro_description":"CentOS Stream 9","distro_version":"9","frontend_config#0":"beast endpoint=192.168.122.101:8082","frontend_type#0":"beast","hostname":"compute-1","id":"rgw.compute-1.aaugis","kernel_description":"#1 SMP PREEMPT_DYNAMIC Fri Sep 26 01:13:23 UTC 2025","kernel_version":"5.14.0-620.el9.x86_64","mem_swap_kb":"1048572","mem_total_kb":"7864104","num_handles":"1","os":"Linux","pid":"2","realm_id":"","realm_name":"","zone_id":"246b4a69-3c1d-47ce-b182-d12a3d96d3e3","zone_name":"default","zonegroup_id":"3218c688-50d3-4b3d-9517-1c08371b4e2e","zonegroup_name":"default"},"task_status":{}},"24148":{"start_epoch":5,"start_stamp":"2025-10-08T09:45:54.967024+0000","gid":24148,"addr":"192.168.122.102:0/4200026288","metadata":{"arch":"x86_64","ceph_release":"squid","ceph_version":"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)","ceph_version_short":"19.2.3","container_hostname":"compute-2","container_image":"quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec","cpu":"AMD EPYC-Rome Processor","distro":"centos","distro_description":"CentOS Stream 9","distro_version":"9","frontend_config#0":"beast endpoint=192.168.122.102:8082","frontend_type#0":"beast","hostname":"compute-2","id":"rgw.compute-2.pgshil","kernel_description":"#1 SMP PREEMPT_DYNAMIC Fri Sep 26 01:13:23 UTC 2025","kernel_version":"5.14.0-620.el9.x86_64","mem_swap_kb":"1048572","mem_total_kb":"7864104","num_handles":"1","os":"Linux","pid":"2","realm_id":"","realm_name":"","zone_id":"246b4a69-3c1d-47ce-b182-d12a3d96d3e3","zone_name":"default","zonegroup_id":"3218c688-50d3-4b3d-9517-1c08371b4e2e","zonegroup_name":"default"},"task_status":{}}}}}},"progress_events":{"affb329f-dae8-4723-a1e4-2bc80680611b":{"message":"Updating mds.cephfs deployment (+3 -> 3) (3s)\n      [==================..........] (remaining: 1s)","progress":0.66666668653488159,"add_to_ceph_s":true}}}
Oct  8 05:46:32 np0005475493 systemd[1]: libpod-f29aada63d6b5107c6071756bb3981daa8cfa0cf6307df4ba2efaaa1097631b5.scope: Deactivated successfully.
Oct  8 05:46:32 np0005475493 podman[95584]: 2025-10-08 09:46:32.606707549 +0000 UTC m=+0.021291939 container died f29aada63d6b5107c6071756bb3981daa8cfa0cf6307df4ba2efaaa1097631b5 (image=quay.io/ceph/ceph:v19, name=angry_cray, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  8 05:46:32 np0005475493 systemd[1]: var-lib-containers-storage-overlay-405028fddd60dd945498a838f0b1de70782f90b15ec83e52a16a3af8d2700a59-merged.mount: Deactivated successfully.
Oct  8 05:46:32 np0005475493 podman[95584]: 2025-10-08 09:46:32.638537639 +0000 UTC m=+0.053122009 container remove f29aada63d6b5107c6071756bb3981daa8cfa0cf6307df4ba2efaaa1097631b5 (image=quay.io/ceph/ceph:v19, name=angry_cray, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Oct  8 05:46:32 np0005475493 systemd[1]: libpod-conmon-f29aada63d6b5107c6071756bb3981daa8cfa0cf6307df4ba2efaaa1097631b5.scope: Deactivated successfully.
Oct  8 05:46:32 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).mds e7 new map
Oct  8 05:46:32 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).mds e7 print_map#012e7#012btime 2025-10-08T09:46:32:835229+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0115#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-10-08T09:46:14.191787+0000#012modified#0112025-10-08T09:46:30.899412+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012max_mds#0111#012in#0110#012up#011{0=24190}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0111#012qdb_cluster#011leader: 24190 members: 24190#012[mds.cephfs.compute-2.wfaozr{0:24190} state up:active seq 2 addr [v2:192.168.122.102:6804/8100770,v1:192.168.122.102:6805/8100770] compat {c=[1],r=[1],i=[1fff]}]#012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-0.lphril{-1:24197} state up:standby seq 1 addr [v2:192.168.122.100:6806/2465393949,v1:192.168.122.100:6807/2465393949] compat {c=[1],r=[1],i=[1fff]}]#012[mds.cephfs.compute-1.bumazt{-1:24206} state up:standby seq 1 addr [v2:192.168.122.101:6804/2344502191,v1:192.168.122.101:6805/2344502191] compat {c=[1],r=[1],i=[1fff]}]
Oct  8 05:46:32 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.101:6804/2344502191,v1:192.168.122.101:6805/2344502191] up:boot
Oct  8 05:46:32 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.wfaozr=up:active} 2 up:standby
Oct  8 05:46:32 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-1.bumazt"} v 0)
Oct  8 05:46:32 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-1.bumazt"}]: dispatch
Oct  8 05:46:32 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).mds e7 all = 0
Oct  8 05:46:32 np0005475493 ceph-mon[73572]: Creating key for client.nfs.cephfs.0.0.compute-1.lgtqnn
Oct  8 05:46:32 np0005475493 ceph-mon[73572]: Ensuring nfs.cephfs.0 is in the ganesha grace table
Oct  8 05:46:32 np0005475493 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Oct  8 05:46:32 np0005475493 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Oct  8 05:46:32 np0005475493 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Oct  8 05:46:32 np0005475493 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.lgtqnn-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Oct  8 05:46:32 np0005475493 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.lgtqnn-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Oct  8 05:46:33 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v17: 198 pgs: 198 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Oct  8 05:46:33 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 7.10 scrub starts
Oct  8 05:46:33 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 7.10 scrub ok
Oct  8 05:46:33 np0005475493 ceph-mgr[73869]: [progress INFO root] Writing back 14 completed events
Oct  8 05:46:33 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct  8 05:46:33 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:46:33 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e51 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  8 05:46:33 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct  8 05:46:33 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:46:33 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct  8 05:46:33 np0005475493 python3[95624]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 787292cc-8154-50c4-9e00-e9be3e817149 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config dump -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  8 05:46:33 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:46:33 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct  8 05:46:33 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:46:33 np0005475493 ceph-mgr[73869]: [cephadm INFO cephadm.services.nfs] Creating key for client.nfs.cephfs.1.0.compute-2.ettfma
Oct  8 05:46:33 np0005475493 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Creating key for client.nfs.cephfs.1.0.compute-2.ettfma
Oct  8 05:46:33 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.ettfma", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]} v 0)
Oct  8 05:46:33 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.ettfma", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Oct  8 05:46:33 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.ettfma", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Oct  8 05:46:33 np0005475493 ceph-mgr[73869]: [cephadm INFO root] Ensuring nfs.cephfs.1 is in the ganesha grace table
Oct  8 05:46:33 np0005475493 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Ensuring nfs.cephfs.1 is in the ganesha grace table
Oct  8 05:46:33 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]} v 0)
Oct  8 05:46:33 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Oct  8 05:46:33 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Oct  8 05:46:33 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  8 05:46:33 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  8 05:46:33 np0005475493 podman[95625]: 2025-10-08 09:46:33.748538837 +0000 UTC m=+0.037132571 container create c8e6387019056b13a2e71f4292cda0b4c7ce7e5f1aded7843cabbcfa25a37086 (image=quay.io/ceph/ceph:v19, name=crazy_blackburn, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  8 05:46:33 np0005475493 systemd[1]: Started libpod-conmon-c8e6387019056b13a2e71f4292cda0b4c7ce7e5f1aded7843cabbcfa25a37086.scope.
Oct  8 05:46:33 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:46:33 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dee532264497ed1302464d3b0f684ee155a3bb24fcfdebd2176ace256ab3bd67/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 05:46:33 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dee532264497ed1302464d3b0f684ee155a3bb24fcfdebd2176ace256ab3bd67/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 05:46:33 np0005475493 podman[95625]: 2025-10-08 09:46:33.821474318 +0000 UTC m=+0.110068092 container init c8e6387019056b13a2e71f4292cda0b4c7ce7e5f1aded7843cabbcfa25a37086 (image=quay.io/ceph/ceph:v19, name=crazy_blackburn, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325)
Oct  8 05:46:33 np0005475493 podman[95625]: 2025-10-08 09:46:33.732673654 +0000 UTC m=+0.021267408 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  8 05:46:33 np0005475493 podman[95625]: 2025-10-08 09:46:33.828243814 +0000 UTC m=+0.116837558 container start c8e6387019056b13a2e71f4292cda0b4c7ce7e5f1aded7843cabbcfa25a37086 (image=quay.io/ceph/ceph:v19, name=crazy_blackburn, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  8 05:46:33 np0005475493 podman[95625]: 2025-10-08 09:46:33.831172033 +0000 UTC m=+0.119765777 container attach c8e6387019056b13a2e71f4292cda0b4c7ce7e5f1aded7843cabbcfa25a37086 (image=quay.io/ceph/ceph:v19, name=crazy_blackburn, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  8 05:46:33 np0005475493 ceph-mon[73572]: Rados config object exists: conf-nfs.cephfs
Oct  8 05:46:33 np0005475493 ceph-mon[73572]: Creating key for client.nfs.cephfs.0.0.compute-1.lgtqnn-rgw
Oct  8 05:46:33 np0005475493 ceph-mon[73572]: Bind address in nfs.cephfs.0.0.compute-1.lgtqnn's ganesha conf is defaulting to empty
Oct  8 05:46:33 np0005475493 ceph-mon[73572]: Deploying daemon nfs.cephfs.0.0.compute-1.lgtqnn on compute-1
Oct  8 05:46:33 np0005475493 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:46:33 np0005475493 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:46:33 np0005475493 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:46:33 np0005475493 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:46:33 np0005475493 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.ettfma", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Oct  8 05:46:33 np0005475493 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.ettfma", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Oct  8 05:46:33 np0005475493 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Oct  8 05:46:33 np0005475493 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Oct  8 05:46:34 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Oct  8 05:46:34 np0005475493 crazy_blackburn[95641]: 
Oct  8 05:46:34 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3055594015' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Oct  8 05:46:34 np0005475493 crazy_blackburn[95641]: [{"section":"global","name":"cluster_network","value":"172.20.0.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"container_image","value":"quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"log_to_file","value":"true","level":"basic","can_update_at_runtime":true,"mask":""},{"section":"global","name":"mon_cluster_log_to_file","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv4","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv6","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"osd_pool_default_size","value":"1","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"public_network","value":"192.168.122.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_admin_roles","value":"ResellerAdmin, swiftoperator","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_roles","value":"member, Member, 
admin","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_domain","value":"default","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_password","value":"12345678","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_project","value":"service","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_user","value":"swift","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_api_version","value":"3","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_keystone_implicit_tenants","value":"true","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_url","value":"https://keystone-internal.openstack.svc:5000","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_verify_ssl","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_name_len","value":"128","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_size","value":"1024","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attrs_num_in_req","value":"90","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_s3_auth_use_keystone","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_account_in_url","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_enforce_content_length","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_versioning_enabled","value":"true","level":"advanced","
can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_trust_forwarded_https","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"auth_allow_insecure_global_id_reclaim","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"mon_warn_on_pool_no_redundancy","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mgr","name":"mgr/cephadm/container_init","value":"True","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/migration_current","value":"7","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/use_repo_digest","value":"false","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/ALERTMANAGER_API_HOST","value":"http://192.168.122.100:9093","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/GRAFANA_API_PASSWORD","value":"/home/grafana_password.yml","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/GRAFANA_API_URL","value":"http://192.168.122.100:3100","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/GRAFANA_API_USERNAME","value":"admin","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/PROMETHEUS_API_HOST","value":"http://192.168.122.100:9092","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/compute-0.ixicfj/server_addr","value":"192.168.122.100","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/compute-1.swlvov/server_addr","value":"192.168.122.101","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/compute-2.mtagwx/server_addr","value":"192.168.122.102","le
vel":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/server_port","value":"8443","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/ssl","value":"false","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/ssl_server_port","value":"8443","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/orchestrator/orchestrator","value":"cephadm","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"osd","name":"osd_memory_target_autotune","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mds.cephfs","name":"mds_join_fs","value":"cephfs","level":"basic","can_update_at_runtime":true,"mask":""},{"section":"client.rgw.rgw.compute-0.wdkdxi","name":"rgw_frontends","value":"beast endpoint=192.168.122.100:8082","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"client.rgw.rgw.compute-1.aaugis","name":"rgw_frontends","value":"beast endpoint=192.168.122.101:8082","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"client.rgw.rgw.compute-2.pgshil","name":"rgw_frontends","value":"beast endpoint=192.168.122.102:8082","level":"basic","can_update_at_runtime":false,"mask":""}]
Oct  8 05:46:34 np0005475493 systemd[1]: libpod-c8e6387019056b13a2e71f4292cda0b4c7ce7e5f1aded7843cabbcfa25a37086.scope: Deactivated successfully.
Oct  8 05:46:34 np0005475493 podman[95625]: 2025-10-08 09:46:34.194425874 +0000 UTC m=+0.483019608 container died c8e6387019056b13a2e71f4292cda0b4c7ce7e5f1aded7843cabbcfa25a37086 (image=quay.io/ceph/ceph:v19, name=crazy_blackburn, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Oct  8 05:46:34 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 7.13 scrub starts
Oct  8 05:46:34 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 7.13 scrub ok
Oct  8 05:46:34 np0005475493 systemd[1]: var-lib-containers-storage-overlay-dee532264497ed1302464d3b0f684ee155a3bb24fcfdebd2176ace256ab3bd67-merged.mount: Deactivated successfully.
Oct  8 05:46:34 np0005475493 podman[95625]: 2025-10-08 09:46:34.516832731 +0000 UTC m=+0.805426465 container remove c8e6387019056b13a2e71f4292cda0b4c7ce7e5f1aded7843cabbcfa25a37086 (image=quay.io/ceph/ceph:v19, name=crazy_blackburn, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Oct  8 05:46:34 np0005475493 systemd[1]: libpod-conmon-c8e6387019056b13a2e71f4292cda0b4c7ce7e5f1aded7843cabbcfa25a37086.scope: Deactivated successfully.
Oct  8 05:46:34 np0005475493 ceph-mon[73572]: Creating key for client.nfs.cephfs.1.0.compute-2.ettfma
Oct  8 05:46:34 np0005475493 ceph-mon[73572]: Ensuring nfs.cephfs.1 is in the ganesha grace table
Oct  8 05:46:34 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).mds e8 new map
Oct  8 05:46:34 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).mds e8 print_map#012e8#012btime 2025-10-08T09:46:34:982221+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0118#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-10-08T09:46:14.191787+0000#012modified#0112025-10-08T09:46:34.011128+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012max_mds#0111#012in#0110#012up#011{0=24190}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0111#012qdb_cluster#011leader: 24190 members: 24190#012[mds.cephfs.compute-2.wfaozr{0:24190} state up:active seq 3 join_fscid=1 addr [v2:192.168.122.102:6804/8100770,v1:192.168.122.102:6805/8100770] compat {c=[1],r=[1],i=[1fff]}]#012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-0.lphril{-1:24197} state up:standby seq 2 join_fscid=1 addr [v2:192.168.122.100:6806/2465393949,v1:192.168.122.100:6807/2465393949] compat 
{c=[1],r=[1],i=[1fff]}]#012[mds.cephfs.compute-1.bumazt{-1:24206} state up:standby seq 1 addr [v2:192.168.122.101:6804/2344502191,v1:192.168.122.101:6805/2344502191] compat {c=[1],r=[1],i=[1fff]}]
Oct  8 05:46:34 np0005475493 ceph-mds[95385]: mds.cephfs.compute-0.lphril Updating MDS map to version 8 from mon.2
Oct  8 05:46:34 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.102:6804/8100770,v1:192.168.122.102:6805/8100770] up:active
Oct  8 05:46:34 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6806/2465393949,v1:192.168.122.100:6807/2465393949] up:standby
Oct  8 05:46:34 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.wfaozr=up:active} 2 up:standby
Oct  8 05:46:35 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v18: 198 pgs: 198 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 1.8 KiB/s wr, 5 op/s
Oct  8 05:46:35 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 6.1d scrub starts
Oct  8 05:46:35 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 6.1d scrub ok
Oct  8 05:46:35 np0005475493 python3[95718]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 787292cc-8154-50c4-9e00-e9be3e817149 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd get-require-min-compat-client _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  8 05:46:35 np0005475493 podman[95719]: 2025-10-08 09:46:35.714703056 +0000 UTC m=+0.043067593 container create 48dbdc81eb3f06b5c101de6a0cf0b7c49c31b855f7570adfbd6584cd293d9516 (image=quay.io/ceph/ceph:v19, name=charming_dhawan, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Oct  8 05:46:35 np0005475493 systemd[1]: Started libpod-conmon-48dbdc81eb3f06b5c101de6a0cf0b7c49c31b855f7570adfbd6584cd293d9516.scope.
Oct  8 05:46:35 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:46:35 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e52d31dee7817cf12675cf4d4aea194511bb430398f9d0e8288f40296ce8cb85/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 05:46:35 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e52d31dee7817cf12675cf4d4aea194511bb430398f9d0e8288f40296ce8cb85/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 05:46:35 np0005475493 podman[95719]: 2025-10-08 09:46:35.776848538 +0000 UTC m=+0.105213085 container init 48dbdc81eb3f06b5c101de6a0cf0b7c49c31b855f7570adfbd6584cd293d9516 (image=quay.io/ceph/ceph:v19, name=charming_dhawan, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  8 05:46:35 np0005475493 podman[95719]: 2025-10-08 09:46:35.782620504 +0000 UTC m=+0.110985041 container start 48dbdc81eb3f06b5c101de6a0cf0b7c49c31b855f7570adfbd6584cd293d9516 (image=quay.io/ceph/ceph:v19, name=charming_dhawan, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  8 05:46:35 np0005475493 podman[95719]: 2025-10-08 09:46:35.785691288 +0000 UTC m=+0.114055825 container attach 48dbdc81eb3f06b5c101de6a0cf0b7c49c31b855f7570adfbd6584cd293d9516 (image=quay.io/ceph/ceph:v19, name=charming_dhawan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  8 05:46:35 np0005475493 podman[95719]: 2025-10-08 09:46:35.696203113 +0000 UTC m=+0.024567670 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  8 05:46:35 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).mds e9 new map
Oct  8 05:46:36 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).mds e9 print_map#012e9#012btime 2025-10-08T09:46:35:988720+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0118#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-10-08T09:46:14.191787+0000#012modified#0112025-10-08T09:46:34.011128+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012max_mds#0111#012in#0110#012up#011{0=24190}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0111#012qdb_cluster#011leader: 24190 members: 24190#012[mds.cephfs.compute-2.wfaozr{0:24190} state up:active seq 3 join_fscid=1 addr [v2:192.168.122.102:6804/8100770,v1:192.168.122.102:6805/8100770] compat {c=[1],r=[1],i=[1fff]}]#012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-0.lphril{-1:24197} state up:standby seq 2 join_fscid=1 addr [v2:192.168.122.100:6806/2465393949,v1:192.168.122.100:6807/2465393949] compat 
{c=[1],r=[1],i=[1fff]}]#012[mds.cephfs.compute-1.bumazt{-1:24206} state up:standby seq 2 join_fscid=1 addr [v2:192.168.122.101:6804/2344502191,v1:192.168.122.101:6805/2344502191] compat {c=[1],r=[1],i=[1fff]}]
Oct  8 05:46:36 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.101:6804/2344502191,v1:192.168.122.101:6805/2344502191] up:standby
Oct  8 05:46:36 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.wfaozr=up:active} 2 up:standby
Oct  8 05:46:36 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd get-require-min-compat-client"} v 0)
Oct  8 05:46:36 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3522787505' entity='client.admin' cmd=[{"prefix": "osd get-require-min-compat-client"}]: dispatch
Oct  8 05:46:36 np0005475493 charming_dhawan[95734]: mimic
Oct  8 05:46:36 np0005475493 systemd[1]: libpod-48dbdc81eb3f06b5c101de6a0cf0b7c49c31b855f7570adfbd6584cd293d9516.scope: Deactivated successfully.
Oct  8 05:46:36 np0005475493 podman[95719]: 2025-10-08 09:46:36.173829966 +0000 UTC m=+0.502194503 container died 48dbdc81eb3f06b5c101de6a0cf0b7c49c31b855f7570adfbd6584cd293d9516 (image=quay.io/ceph/ceph:v19, name=charming_dhawan, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  8 05:46:36 np0005475493 systemd[1]: var-lib-containers-storage-overlay-e52d31dee7817cf12675cf4d4aea194511bb430398f9d0e8288f40296ce8cb85-merged.mount: Deactivated successfully.
Oct  8 05:46:36 np0005475493 podman[95719]: 2025-10-08 09:46:36.205891813 +0000 UTC m=+0.534256350 container remove 48dbdc81eb3f06b5c101de6a0cf0b7c49c31b855f7570adfbd6584cd293d9516 (image=quay.io/ceph/ceph:v19, name=charming_dhawan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  8 05:46:36 np0005475493 systemd[1]: libpod-conmon-48dbdc81eb3f06b5c101de6a0cf0b7c49c31b855f7570adfbd6584cd293d9516.scope: Deactivated successfully.
Oct  8 05:46:36 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 7.e scrub starts
Oct  8 05:46:36 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 7.e scrub ok
Oct  8 05:46:36 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"} v 0)
Oct  8 05:46:36 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Oct  8 05:46:36 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Oct  8 05:46:36 np0005475493 ceph-mgr[73869]: [cephadm INFO cephadm.services.nfs] Rados config object exists: conf-nfs.cephfs
Oct  8 05:46:36 np0005475493 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Rados config object exists: conf-nfs.cephfs
Oct  8 05:46:36 np0005475493 ceph-mgr[73869]: [cephadm INFO cephadm.services.nfs] Creating key for client.nfs.cephfs.1.0.compute-2.ettfma-rgw
Oct  8 05:46:36 np0005475493 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Creating key for client.nfs.cephfs.1.0.compute-2.ettfma-rgw
Oct  8 05:46:36 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.ettfma-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]} v 0)
Oct  8 05:46:36 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.ettfma-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Oct  8 05:46:37 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.ettfma-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Oct  8 05:46:37 np0005475493 ceph-mgr[73869]: [cephadm WARNING cephadm.services.nfs] Bind address in nfs.cephfs.1.0.compute-2.ettfma's ganesha conf is defaulting to empty
Oct  8 05:46:37 np0005475493 ceph-mgr[73869]: log_channel(cephadm) log [WRN] : Bind address in nfs.cephfs.1.0.compute-2.ettfma's ganesha conf is defaulting to empty
Oct  8 05:46:37 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  8 05:46:37 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  8 05:46:37 np0005475493 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Deploying daemon nfs.cephfs.1.0.compute-2.ettfma on compute-2
Oct  8 05:46:37 np0005475493 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Deploying daemon nfs.cephfs.1.0.compute-2.ettfma on compute-2
Oct  8 05:46:37 np0005475493 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Oct  8 05:46:37 np0005475493 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Oct  8 05:46:37 np0005475493 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.ettfma-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Oct  8 05:46:37 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v19: 198 pgs: 198 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 1.8 KiB/s wr, 5 op/s
Oct  8 05:46:37 np0005475493 python3[95814]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 787292cc-8154-50c4-9e00-e9be3e817149 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   versions -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  8 05:46:37 np0005475493 podman[95815]: 2025-10-08 09:46:37.356381985 +0000 UTC m=+0.055772459 container create d410df99f33c1beca74fed458576db8fb4ff41491a02ed72b08d3230a2ed46a0 (image=quay.io/ceph/ceph:v19, name=naughty_diffie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  8 05:46:37 np0005475493 systemd[1]: Started libpod-conmon-d410df99f33c1beca74fed458576db8fb4ff41491a02ed72b08d3230a2ed46a0.scope.
Oct  8 05:46:37 np0005475493 podman[95815]: 2025-10-08 09:46:37.325899987 +0000 UTC m=+0.025290481 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  8 05:46:37 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:46:37 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/669e0ba714431ae0590734bb9575a5e0686a3fa89e15a558f918b626ed2ef2ea/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 05:46:37 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/669e0ba714431ae0590734bb9575a5e0686a3fa89e15a558f918b626ed2ef2ea/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 05:46:37 np0005475493 podman[95815]: 2025-10-08 09:46:37.453251484 +0000 UTC m=+0.152641978 container init d410df99f33c1beca74fed458576db8fb4ff41491a02ed72b08d3230a2ed46a0 (image=quay.io/ceph/ceph:v19, name=naughty_diffie, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  8 05:46:37 np0005475493 podman[95815]: 2025-10-08 09:46:37.458260487 +0000 UTC m=+0.157650961 container start d410df99f33c1beca74fed458576db8fb4ff41491a02ed72b08d3230a2ed46a0 (image=quay.io/ceph/ceph:v19, name=naughty_diffie, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  8 05:46:37 np0005475493 podman[95815]: 2025-10-08 09:46:37.473016766 +0000 UTC m=+0.172407290 container attach d410df99f33c1beca74fed458576db8fb4ff41491a02ed72b08d3230a2ed46a0 (image=quay.io/ceph/ceph:v19, name=naughty_diffie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Oct  8 05:46:37 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "versions", "format": "json"} v 0)
Oct  8 05:46:37 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1901290417' entity='client.admin' cmd=[{"prefix": "versions", "format": "json"}]: dispatch
Oct  8 05:46:37 np0005475493 naughty_diffie[95830]: 
Oct  8 05:46:37 np0005475493 systemd[1]: libpod-d410df99f33c1beca74fed458576db8fb4ff41491a02ed72b08d3230a2ed46a0.scope: Deactivated successfully.
Oct  8 05:46:37 np0005475493 naughty_diffie[95830]: {"mon":{"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)":3},"mgr":{"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)":3},"osd":{"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)":3},"mds":{"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)":3},"rgw":{"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)":3},"overall":{"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)":15}}
Oct  8 05:46:37 np0005475493 podman[95815]: 2025-10-08 09:46:37.90663963 +0000 UTC m=+0.606030104 container died d410df99f33c1beca74fed458576db8fb4ff41491a02ed72b08d3230a2ed46a0 (image=quay.io/ceph/ceph:v19, name=naughty_diffie, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Oct  8 05:46:37 np0005475493 systemd[1]: var-lib-containers-storage-overlay-669e0ba714431ae0590734bb9575a5e0686a3fa89e15a558f918b626ed2ef2ea-merged.mount: Deactivated successfully.
Oct  8 05:46:37 np0005475493 podman[95815]: 2025-10-08 09:46:37.986361618 +0000 UTC m=+0.685752092 container remove d410df99f33c1beca74fed458576db8fb4ff41491a02ed72b08d3230a2ed46a0 (image=quay.io/ceph/ceph:v19, name=naughty_diffie, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  8 05:46:37 np0005475493 systemd[1]: libpod-conmon-d410df99f33c1beca74fed458576db8fb4ff41491a02ed72b08d3230a2ed46a0.scope: Deactivated successfully.
Oct  8 05:46:38 np0005475493 ceph-mon[73572]: Rados config object exists: conf-nfs.cephfs
Oct  8 05:46:38 np0005475493 ceph-mon[73572]: Creating key for client.nfs.cephfs.1.0.compute-2.ettfma-rgw
Oct  8 05:46:38 np0005475493 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.ettfma-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Oct  8 05:46:38 np0005475493 ceph-mon[73572]: Bind address in nfs.cephfs.1.0.compute-2.ettfma's ganesha conf is defaulting to empty
Oct  8 05:46:38 np0005475493 ceph-mon[73572]: Deploying daemon nfs.cephfs.1.0.compute-2.ettfma on compute-2
Oct  8 05:46:38 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e51 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  8 05:46:38 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct  8 05:46:38 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:46:38 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct  8 05:46:38 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:46:38 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct  8 05:46:38 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:46:38 np0005475493 ceph-mgr[73869]: [cephadm INFO cephadm.services.nfs] Creating key for client.nfs.cephfs.2.0.compute-0.uynkmx
Oct  8 05:46:38 np0005475493 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Creating key for client.nfs.cephfs.2.0.compute-0.uynkmx
Oct  8 05:46:38 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.uynkmx", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]} v 0)
Oct  8 05:46:38 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.uynkmx", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Oct  8 05:46:38 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.uynkmx", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Oct  8 05:46:38 np0005475493 ceph-mgr[73869]: [cephadm INFO root] Ensuring nfs.cephfs.2 is in the ganesha grace table
Oct  8 05:46:38 np0005475493 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Ensuring nfs.cephfs.2 is in the ganesha grace table
Oct  8 05:46:38 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]} v 0)
Oct  8 05:46:38 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Oct  8 05:46:38 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Oct  8 05:46:39 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  8 05:46:39 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  8 05:46:39 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v20: 198 pgs: 198 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 1.8 KiB/s wr, 5 op/s
Oct  8 05:46:39 np0005475493 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:46:39 np0005475493 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:46:39 np0005475493 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:46:39 np0005475493 ceph-mon[73572]: Creating key for client.nfs.cephfs.2.0.compute-0.uynkmx
Oct  8 05:46:39 np0005475493 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.uynkmx", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Oct  8 05:46:39 np0005475493 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.uynkmx", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Oct  8 05:46:39 np0005475493 ceph-mon[73572]: Ensuring nfs.cephfs.2 is in the ganesha grace table
Oct  8 05:46:39 np0005475493 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Oct  8 05:46:39 np0005475493 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Oct  8 05:46:41 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v21: 198 pgs: 198 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 2.6 KiB/s wr, 7 op/s
Oct  8 05:46:42 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"} v 0)
Oct  8 05:46:42 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Oct  8 05:46:42 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Oct  8 05:46:42 np0005475493 ceph-mgr[73869]: [cephadm INFO cephadm.services.nfs] Rados config object exists: conf-nfs.cephfs
Oct  8 05:46:42 np0005475493 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Rados config object exists: conf-nfs.cephfs
Oct  8 05:46:42 np0005475493 ceph-mgr[73869]: [cephadm INFO cephadm.services.nfs] Creating key for client.nfs.cephfs.2.0.compute-0.uynkmx-rgw
Oct  8 05:46:42 np0005475493 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Creating key for client.nfs.cephfs.2.0.compute-0.uynkmx-rgw
Oct  8 05:46:42 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.uynkmx-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]} v 0)
Oct  8 05:46:42 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.uynkmx-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Oct  8 05:46:42 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.uynkmx-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Oct  8 05:46:42 np0005475493 ceph-mgr[73869]: [cephadm WARNING cephadm.services.nfs] Bind address in nfs.cephfs.2.0.compute-0.uynkmx's ganesha conf is defaulting to empty
Oct  8 05:46:42 np0005475493 ceph-mgr[73869]: log_channel(cephadm) log [WRN] : Bind address in nfs.cephfs.2.0.compute-0.uynkmx's ganesha conf is defaulting to empty
Oct  8 05:46:42 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  8 05:46:42 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  8 05:46:42 np0005475493 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Deploying daemon nfs.cephfs.2.0.compute-0.uynkmx on compute-0
Oct  8 05:46:42 np0005475493 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Deploying daemon nfs.cephfs.2.0.compute-0.uynkmx on compute-0
Oct  8 05:46:42 np0005475493 podman[95995]: 2025-10-08 09:46:42.976558377 +0000 UTC m=+0.042664730 container create 7747f35209568b363e9b36dfd9fe1745105c66ff3c683c2b49686912cf457dbd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_wiles, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  8 05:46:43 np0005475493 systemd[1]: Started libpod-conmon-7747f35209568b363e9b36dfd9fe1745105c66ff3c683c2b49686912cf457dbd.scope.
Oct  8 05:46:43 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:46:43 np0005475493 podman[95995]: 2025-10-08 09:46:43.032939224 +0000 UTC m=+0.099045587 container init 7747f35209568b363e9b36dfd9fe1745105c66ff3c683c2b49686912cf457dbd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_wiles, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Oct  8 05:46:43 np0005475493 podman[95995]: 2025-10-08 09:46:43.039383861 +0000 UTC m=+0.105490224 container start 7747f35209568b363e9b36dfd9fe1745105c66ff3c683c2b49686912cf457dbd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_wiles, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Oct  8 05:46:43 np0005475493 podman[95995]: 2025-10-08 09:46:43.042944819 +0000 UTC m=+0.109051182 container attach 7747f35209568b363e9b36dfd9fe1745105c66ff3c683c2b49686912cf457dbd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_wiles, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True)
Oct  8 05:46:43 np0005475493 sweet_wiles[96011]: 167 167
Oct  8 05:46:43 np0005475493 systemd[1]: libpod-7747f35209568b363e9b36dfd9fe1745105c66ff3c683c2b49686912cf457dbd.scope: Deactivated successfully.
Oct  8 05:46:43 np0005475493 podman[95995]: 2025-10-08 09:46:43.044160786 +0000 UTC m=+0.110267139 container died 7747f35209568b363e9b36dfd9fe1745105c66ff3c683c2b49686912cf457dbd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_wiles, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  8 05:46:43 np0005475493 podman[95995]: 2025-10-08 09:46:42.953721442 +0000 UTC m=+0.019827885 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 05:46:43 np0005475493 systemd[1]: var-lib-containers-storage-overlay-6bde36b4e8d9c05736ee9178d6e5f42caf96d46a0bd18b8bfaacd0c30668d9f3-merged.mount: Deactivated successfully.
Oct  8 05:46:43 np0005475493 podman[95995]: 2025-10-08 09:46:43.080394779 +0000 UTC m=+0.146501152 container remove 7747f35209568b363e9b36dfd9fe1745105c66ff3c683c2b49686912cf457dbd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_wiles, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Oct  8 05:46:43 np0005475493 systemd[1]: libpod-conmon-7747f35209568b363e9b36dfd9fe1745105c66ff3c683c2b49686912cf457dbd.scope: Deactivated successfully.
Oct  8 05:46:43 np0005475493 systemd[1]: Reloading.
Oct  8 05:46:43 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v22: 198 pgs: 198 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 2.6 KiB/s wr, 7 op/s
Oct  8 05:46:43 np0005475493 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  8 05:46:43 np0005475493 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  8 05:46:43 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 05:46:43 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 05:46:43 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 05:46:43 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 05:46:43 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 05:46:43 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 05:46:43 np0005475493 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Oct  8 05:46:43 np0005475493 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Oct  8 05:46:43 np0005475493 ceph-mon[73572]: Rados config object exists: conf-nfs.cephfs
Oct  8 05:46:43 np0005475493 ceph-mon[73572]: Creating key for client.nfs.cephfs.2.0.compute-0.uynkmx-rgw
Oct  8 05:46:43 np0005475493 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.uynkmx-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Oct  8 05:46:43 np0005475493 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.uynkmx-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Oct  8 05:46:43 np0005475493 ceph-mon[73572]: Bind address in nfs.cephfs.2.0.compute-0.uynkmx's ganesha conf is defaulting to empty
Oct  8 05:46:43 np0005475493 ceph-mon[73572]: Deploying daemon nfs.cephfs.2.0.compute-0.uynkmx on compute-0
Oct  8 05:46:43 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e51 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  8 05:46:43 np0005475493 systemd[1]: Reloading.
Oct  8 05:46:43 np0005475493 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  8 05:46:43 np0005475493 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  8 05:46:43 np0005475493 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.uynkmx for 787292cc-8154-50c4-9e00-e9be3e817149...
Oct  8 05:46:43 np0005475493 podman[96152]: 2025-10-08 09:46:43.889779075 +0000 UTC m=+0.052667565 container create c5f79ead609d5155b918e60a1470581982785e4aac0e1c185a19093b2bdf84dc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  8 05:46:43 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f542fbc76345914e50b0a692320404ddade2bba14cf57cdbb4a6cefc867b9d7e/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Oct  8 05:46:43 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f542fbc76345914e50b0a692320404ddade2bba14cf57cdbb4a6cefc867b9d7e/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Oct  8 05:46:43 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f542fbc76345914e50b0a692320404ddade2bba14cf57cdbb4a6cefc867b9d7e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 05:46:43 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f542fbc76345914e50b0a692320404ddade2bba14cf57cdbb4a6cefc867b9d7e/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.uynkmx-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Oct  8 05:46:43 np0005475493 podman[96152]: 2025-10-08 09:46:43.939765497 +0000 UTC m=+0.102653977 container init c5f79ead609d5155b918e60a1470581982785e4aac0e1c185a19093b2bdf84dc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  8 05:46:43 np0005475493 podman[96152]: 2025-10-08 09:46:43.861809453 +0000 UTC m=+0.024698043 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 05:46:43 np0005475493 podman[96152]: 2025-10-08 09:46:43.956352342 +0000 UTC m=+0.119240812 container start c5f79ead609d5155b918e60a1470581982785e4aac0e1c185a19093b2bdf84dc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  8 05:46:43 np0005475493 bash[96152]: c5f79ead609d5155b918e60a1470581982785e4aac0e1c185a19093b2bdf84dc
Oct  8 05:46:43 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:46:43 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Oct  8 05:46:43 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:46:43 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Oct  8 05:46:43 np0005475493 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.uynkmx for 787292cc-8154-50c4-9e00-e9be3e817149.
Oct  8 05:46:44 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:46:44 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Oct  8 05:46:44 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:46:44 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Oct  8 05:46:44 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:46:44 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Oct  8 05:46:44 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:46:44 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Oct  8 05:46:44 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:46:44 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Oct  8 05:46:44 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  8 05:46:44 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:46:44 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  8 05:46:44 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:46:44 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  8 05:46:44 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:46:44 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct  8 05:46:44 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:46:44 np0005475493 ceph-mgr[73869]: [progress INFO root] complete: finished ev 14600416-a126-4524-a7b9-d20314f3302e (Updating nfs.cephfs deployment (+3 -> 3))
Oct  8 05:46:44 np0005475493 ceph-mgr[73869]: [progress INFO root] Completed event 14600416-a126-4524-a7b9-d20314f3302e (Updating nfs.cephfs deployment (+3 -> 3)) in 12 seconds
Oct  8 05:46:44 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct  8 05:46:44 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:46:44 np0005475493 ceph-mgr[73869]: [progress INFO root] update: starting ev 7ba10d6d-35d7-417a-acf8-1cda7124e4f2 (Updating ingress.nfs.cephfs deployment (+6 -> 6))
Oct  8 05:46:44 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ingress.nfs.cephfs/monitor_password}] v 0)
Oct  8 05:46:44 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:46:44 np0005475493 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Deploying daemon haproxy.nfs.cephfs.compute-1.mmphxo on compute-1
Oct  8 05:46:44 np0005475493 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Deploying daemon haproxy.nfs.cephfs.compute-1.mmphxo on compute-1
Oct  8 05:46:44 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:46:44 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[main] rados_kv_traverse :CLIENT ID :EVENT :Failed to lst kv ret=-2
Oct  8 05:46:44 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:46:44 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[main] rados_cluster_read_clids :CLIENT ID :EVENT :Failed to traverse recovery db: -2
Oct  8 05:46:44 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:46:44 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  8 05:46:44 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:46:44 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 05:46:44 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:46:44 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  8 05:46:44 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:46:44 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  8 05:46:44 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:46:44 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  8 05:46:44 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:46:44 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 05:46:44 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:46:44 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  8 05:46:45 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:46:45 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  8 05:46:45 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:46:45 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  8 05:46:45 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:46:45 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 05:46:45 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:46:45 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[main] rados_cluster_end_grace :CLIENT ID :EVENT :Failed to remove rec-0000000000000003:nfs.cephfs.2: -2
Oct  8 05:46:45 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:46:45 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct  8 05:46:45 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:46:45 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Oct  8 05:46:45 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:46:45 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Oct  8 05:46:45 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:46:45 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Oct  8 05:46:45 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:46:45 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Oct  8 05:46:45 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:46:45 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Oct  8 05:46:45 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:46:45 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Oct  8 05:46:45 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:46:45 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct  8 05:46:45 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:46:45 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct  8 05:46:45 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:46:45 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct  8 05:46:45 np0005475493 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:46:45 np0005475493 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:46:45 np0005475493 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:46:45 np0005475493 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:46:45 np0005475493 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:46:45 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:46:45 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Oct  8 05:46:45 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:46:45 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct  8 05:46:45 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:46:45 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Oct  8 05:46:45 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:46:45 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Oct  8 05:46:45 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:46:45 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Oct  8 05:46:45 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:46:45 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Oct  8 05:46:45 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:46:45 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Oct  8 05:46:45 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:46:45 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Oct  8 05:46:45 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:46:45 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Oct  8 05:46:45 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:46:45 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Oct  8 05:46:45 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:46:45 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Oct  8 05:46:45 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:46:45 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Oct  8 05:46:45 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:46:45 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Oct  8 05:46:45 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:46:45 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Oct  8 05:46:45 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:46:45 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Oct  8 05:46:45 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:46:45 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Oct  8 05:46:45 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:46:45 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Oct  8 05:46:45 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v23: 198 pgs: 198 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 2.8 KiB/s rd, 2.7 KiB/s wr, 7 op/s
Oct  8 05:46:46 np0005475493 ceph-mon[73572]: Deploying daemon haproxy.nfs.cephfs.compute-1.mmphxo on compute-1
Oct  8 05:46:47 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v24: 198 pgs: 198 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 938 B/s wr, 2 op/s
Oct  8 05:46:48 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct  8 05:46:48 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:46:48 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct  8 05:46:48 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:46:48 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Oct  8 05:46:48 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:46:48 np0005475493 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Deploying daemon haproxy.nfs.cephfs.compute-0.cwhopp on compute-0
Oct  8 05:46:48 np0005475493 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Deploying daemon haproxy.nfs.cephfs.compute-0.cwhopp on compute-0
Oct  8 05:46:48 np0005475493 ceph-mgr[73869]: [progress INFO root] Writing back 15 completed events
Oct  8 05:46:48 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct  8 05:46:48 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:46:48 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e51 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  8 05:46:49 np0005475493 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:46:49 np0005475493 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:46:49 np0005475493 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:46:49 np0005475493 ceph-mon[73572]: Deploying daemon haproxy.nfs.cephfs.compute-0.cwhopp on compute-0
Oct  8 05:46:49 np0005475493 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:46:49 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v25: 198 pgs: 198 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 938 B/s wr, 2 op/s
Oct  8 05:46:49 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:46:49 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6630000df0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:46:50 np0005475493 podman[96312]: 2025-10-08 09:46:50.819916373 +0000 UTC m=+2.157150295 container create f5918308668a45e3fa22452b7c04f539c9749d85acbc4cf563aaa484bc35bfac (image=quay.io/ceph/haproxy:2.3, name=cranky_black)
Oct  8 05:46:50 np0005475493 systemd[1]: Started libpod-conmon-f5918308668a45e3fa22452b7c04f539c9749d85acbc4cf563aaa484bc35bfac.scope.
Oct  8 05:46:50 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:46:50 np0005475493 podman[96312]: 2025-10-08 09:46:50.805411145 +0000 UTC m=+2.142645087 image pull e85424b0d443f37ddd2dd8a3bb2ef6f18dd352b987723a921b64289023af2914 quay.io/ceph/haproxy:2.3
Oct  8 05:46:50 np0005475493 podman[96312]: 2025-10-08 09:46:50.889944172 +0000 UTC m=+2.227178164 container init f5918308668a45e3fa22452b7c04f539c9749d85acbc4cf563aaa484bc35bfac (image=quay.io/ceph/haproxy:2.3, name=cranky_black)
Oct  8 05:46:50 np0005475493 podman[96312]: 2025-10-08 09:46:50.896909551 +0000 UTC m=+2.234143473 container start f5918308668a45e3fa22452b7c04f539c9749d85acbc4cf563aaa484bc35bfac (image=quay.io/ceph/haproxy:2.3, name=cranky_black)
Oct  8 05:46:50 np0005475493 podman[96312]: 2025-10-08 09:46:50.899818892 +0000 UTC m=+2.237052875 container attach f5918308668a45e3fa22452b7c04f539c9749d85acbc4cf563aaa484bc35bfac (image=quay.io/ceph/haproxy:2.3, name=cranky_black)
Oct  8 05:46:50 np0005475493 cranky_black[96429]: 0 0
Oct  8 05:46:50 np0005475493 systemd[1]: libpod-f5918308668a45e3fa22452b7c04f539c9749d85acbc4cf563aaa484bc35bfac.scope: Deactivated successfully.
Oct  8 05:46:50 np0005475493 conmon[96429]: conmon f5918308668a45e3fa22 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f5918308668a45e3fa22452b7c04f539c9749d85acbc4cf563aaa484bc35bfac.scope/container/memory.events
Oct  8 05:46:50 np0005475493 podman[96312]: 2025-10-08 09:46:50.90353047 +0000 UTC m=+2.240764422 container died f5918308668a45e3fa22452b7c04f539c9749d85acbc4cf563aaa484bc35bfac (image=quay.io/ceph/haproxy:2.3, name=cranky_black)
Oct  8 05:46:50 np0005475493 systemd[1]: var-lib-containers-storage-overlay-b09d3b0c91e05dd0434d2289ae81e908858322c91af69cd400ecb3ef743548fc-merged.mount: Deactivated successfully.
Oct  8 05:46:50 np0005475493 podman[96312]: 2025-10-08 09:46:50.947513647 +0000 UTC m=+2.284747569 container remove f5918308668a45e3fa22452b7c04f539c9749d85acbc4cf563aaa484bc35bfac (image=quay.io/ceph/haproxy:2.3, name=cranky_black)
Oct  8 05:46:50 np0005475493 systemd[1]: libpod-conmon-f5918308668a45e3fa22452b7c04f539c9749d85acbc4cf563aaa484bc35bfac.scope: Deactivated successfully.
Oct  8 05:46:51 np0005475493 systemd[1]: Reloading.
Oct  8 05:46:51 np0005475493 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  8 05:46:51 np0005475493 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  8 05:46:51 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v26: 198 pgs: 198 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 4.8 KiB/s rd, 1.8 KiB/s wr, 6 op/s
Oct  8 05:46:51 np0005475493 systemd[1]: Reloading.
Oct  8 05:46:51 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:46:51 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6628001ac0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:46:51 np0005475493 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  8 05:46:51 np0005475493 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  8 05:46:51 np0005475493 systemd[1]: Starting Ceph haproxy.nfs.cephfs.compute-0.cwhopp for 787292cc-8154-50c4-9e00-e9be3e817149...
Oct  8 05:46:51 np0005475493 podman[96573]: 2025-10-08 09:46:51.813141102 +0000 UTC m=+0.040536630 container create 1c113ffcb0d41c6337c256a4892f13e82bdea6ca8062b10324516aa631301ea5 (image=quay.io/ceph/haproxy:2.3, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-haproxy-nfs-cephfs-compute-0-cwhopp)
Oct  8 05:46:51 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd9206325a51650bd28386027213364efa621af0c7b19bb1a2c2c16eac6fec86/merged/var/lib/haproxy supports timestamps until 2038 (0x7fffffff)
Oct  8 05:46:51 np0005475493 podman[96573]: 2025-10-08 09:46:51.865189744 +0000 UTC m=+0.092585272 container init 1c113ffcb0d41c6337c256a4892f13e82bdea6ca8062b10324516aa631301ea5 (image=quay.io/ceph/haproxy:2.3, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-haproxy-nfs-cephfs-compute-0-cwhopp)
Oct  8 05:46:51 np0005475493 podman[96573]: 2025-10-08 09:46:51.871668298 +0000 UTC m=+0.099063816 container start 1c113ffcb0d41c6337c256a4892f13e82bdea6ca8062b10324516aa631301ea5 (image=quay.io/ceph/haproxy:2.3, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-haproxy-nfs-cephfs-compute-0-cwhopp)
Oct  8 05:46:51 np0005475493 bash[96573]: 1c113ffcb0d41c6337c256a4892f13e82bdea6ca8062b10324516aa631301ea5
Oct  8 05:46:51 np0005475493 podman[96573]: 2025-10-08 09:46:51.79151934 +0000 UTC m=+0.018914928 image pull e85424b0d443f37ddd2dd8a3bb2ef6f18dd352b987723a921b64289023af2914 quay.io/ceph/haproxy:2.3
Oct  8 05:46:51 np0005475493 systemd[1]: Started Ceph haproxy.nfs.cephfs.compute-0.cwhopp for 787292cc-8154-50c4-9e00-e9be3e817149.
Oct  8 05:46:51 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-haproxy-nfs-cephfs-compute-0-cwhopp[96588]: [NOTICE] 280/094651 (2) : New worker #1 (4) forked
Oct  8 05:46:51 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-haproxy-nfs-cephfs-compute-0-cwhopp[96588]: [WARNING] 280/094651 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct  8 05:46:51 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  8 05:46:51 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:46:51 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  8 05:46:52 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:46:52 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Oct  8 05:46:52 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:46:52 np0005475493 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Deploying daemon haproxy.nfs.cephfs.compute-2.jzsqfr on compute-2
Oct  8 05:46:52 np0005475493 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Deploying daemon haproxy.nfs.cephfs.compute-2.jzsqfr on compute-2
Oct  8 05:46:52 np0005475493 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:46:52 np0005475493 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:46:52 np0005475493 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:46:52 np0005475493 ceph-mon[73572]: Deploying daemon haproxy.nfs.cephfs.compute-2.jzsqfr on compute-2
Oct  8 05:46:53 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v27: 198 pgs: 198 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 3.4 KiB/s rd, 1.1 KiB/s wr, 4 op/s
Oct  8 05:46:53 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:46:53 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c000b60 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:46:53 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e51 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  8 05:46:53 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:46:53 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6608000b60 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:46:55 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v28: 198 pgs: 198 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 3.4 KiB/s rd, 1.1 KiB/s wr, 4 op/s
Oct  8 05:46:55 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:46:55 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6614000fa0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:46:55 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:46:55 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f66280025c0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:46:57 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v29: 198 pgs: 198 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 2.9 KiB/s rd, 938 B/s wr, 3 op/s
Oct  8 05:46:57 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:46:57 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c0016a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:46:57 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:46:57 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f66080016a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:46:57 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct  8 05:46:57 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:46:57 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct  8 05:46:57 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:46:57 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Oct  8 05:46:57 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:46:57 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ingress.nfs.cephfs/keepalived_password}] v 0)
Oct  8 05:46:58 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:46:58 np0005475493 ceph-mgr[73869]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Oct  8 05:46:58 np0005475493 ceph-mgr[73869]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Oct  8 05:46:58 np0005475493 ceph-mgr[73869]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Oct  8 05:46:58 np0005475493 ceph-mgr[73869]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Oct  8 05:46:58 np0005475493 ceph-mgr[73869]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Oct  8 05:46:58 np0005475493 ceph-mgr[73869]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Oct  8 05:46:58 np0005475493 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Deploying daemon keepalived.nfs.cephfs.compute-0.ekerbw on compute-0
Oct  8 05:46:58 np0005475493 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Deploying daemon keepalived.nfs.cephfs.compute-0.ekerbw on compute-0
Oct  8 05:46:58 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e51 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  8 05:46:58 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:46:58 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6614001ac0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:46:58 np0005475493 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:46:58 np0005475493 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:46:58 np0005475493 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:46:58 np0005475493 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:46:58 np0005475493 ceph-mon[73572]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Oct  8 05:46:58 np0005475493 ceph-mon[73572]: 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Oct  8 05:46:58 np0005475493 ceph-mon[73572]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Oct  8 05:46:58 np0005475493 ceph-mon[73572]: Deploying daemon keepalived.nfs.cephfs.compute-0.ekerbw on compute-0
Oct  8 05:46:59 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v30: 198 pgs: 198 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 2.9 KiB/s rd, 938 B/s wr, 3 op/s
Oct  8 05:46:59 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:46:59 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f66280025c0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:46:59 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:46:59 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c0016a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:47:00 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:47:00 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f66280025c0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:47:01 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v31: 198 pgs: 198 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 3.1 KiB/s rd, 938 B/s wr, 4 op/s
Oct  8 05:47:01 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:47:01 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f66080016a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:47:01 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:47:01 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f66140023e0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:47:01 np0005475493 podman[96690]: 2025-10-08 09:47:01.834269282 +0000 UTC m=+3.301915514 image pull 4a3a1ff181d97c6dcfa9138ad76eb99fa2c1e840298461d5a7a56133bc05b9a2 quay.io/ceph/keepalived:2.2.4
Oct  8 05:47:01 np0005475493 podman[96690]: 2025-10-08 09:47:01.865340872 +0000 UTC m=+3.332987054 container create c2bd802e74e67419ffaaea5616c7da5fe7dcac4b6d09fb4741e073959b03646c (image=quay.io/ceph/keepalived:2.2.4, name=objective_curran, com.redhat.component=keepalived-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides keepalived on RHEL 9 for Ceph., architecture=x86_64, version=2.2.4, io.buildah.version=1.28.2, release=1793, io.k8s.display-name=Keepalived on RHEL 9, vendor=Red Hat, Inc., io.openshift.expose-services=, name=keepalived, description=keepalived for Ceph, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, build-date=2023-02-22T09:23:20, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.openshift.tags=Ceph keepalived, distribution-scope=public)
Oct  8 05:47:01 np0005475493 systemd[1]: Started libpod-conmon-c2bd802e74e67419ffaaea5616c7da5fe7dcac4b6d09fb4741e073959b03646c.scope.
Oct  8 05:47:01 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:47:01 np0005475493 podman[96690]: 2025-10-08 09:47:01.992523294 +0000 UTC m=+3.460169526 container init c2bd802e74e67419ffaaea5616c7da5fe7dcac4b6d09fb4741e073959b03646c (image=quay.io/ceph/keepalived:2.2.4, name=objective_curran, vendor=Red Hat, Inc., vcs-type=git, distribution-scope=public, name=keepalived, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.component=keepalived-container, version=2.2.4, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.tags=Ceph keepalived, description=keepalived for Ceph, io.k8s.display-name=Keepalived on RHEL 9, summary=Provides keepalived on RHEL 9 for Ceph., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, release=1793, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.28.2, build-date=2023-02-22T09:23:20)
Oct  8 05:47:02 np0005475493 podman[96690]: 2025-10-08 09:47:02.004016707 +0000 UTC m=+3.471662859 container start c2bd802e74e67419ffaaea5616c7da5fe7dcac4b6d09fb4741e073959b03646c (image=quay.io/ceph/keepalived:2.2.4, name=objective_curran, vcs-type=git, distribution-scope=public, summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1793, vendor=Red Hat, Inc., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.display-name=Keepalived on RHEL 9, com.redhat.component=keepalived-container, build-date=2023-02-22T09:23:20, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.openshift.tags=Ceph keepalived, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=keepalived, version=2.2.4, io.buildah.version=1.28.2, description=keepalived for Ceph)
Oct  8 05:47:02 np0005475493 podman[96690]: 2025-10-08 09:47:02.008221969 +0000 UTC m=+3.475868211 container attach c2bd802e74e67419ffaaea5616c7da5fe7dcac4b6d09fb4741e073959b03646c (image=quay.io/ceph/keepalived:2.2.4, name=objective_curran, com.redhat.component=keepalived-container, name=keepalived, io.buildah.version=1.28.2, release=1793, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.openshift.tags=Ceph keepalived, vendor=Red Hat, Inc., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vcs-type=git, summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, distribution-scope=public, version=2.2.4, build-date=2023-02-22T09:23:20, description=keepalived for Ceph, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.expose-services=)
Oct  8 05:47:02 np0005475493 objective_curran[96785]: 0 0
Oct  8 05:47:02 np0005475493 systemd[1]: libpod-c2bd802e74e67419ffaaea5616c7da5fe7dcac4b6d09fb4741e073959b03646c.scope: Deactivated successfully.
Oct  8 05:47:02 np0005475493 podman[96690]: 2025-10-08 09:47:02.014924091 +0000 UTC m=+3.482570263 container died c2bd802e74e67419ffaaea5616c7da5fe7dcac4b6d09fb4741e073959b03646c (image=quay.io/ceph/keepalived:2.2.4, name=objective_curran, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides keepalived on RHEL 9 for Ceph., vendor=Red Hat, Inc., io.buildah.version=1.28.2, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=Ceph keepalived, release=1793, description=keepalived for Ceph, architecture=x86_64, vcs-type=git, distribution-scope=public, name=keepalived, version=2.2.4, build-date=2023-02-22T09:23:20, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.component=keepalived-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.k8s.display-name=Keepalived on RHEL 9)
Oct  8 05:47:02 np0005475493 systemd[1]: var-lib-containers-storage-overlay-0f7be7913524650d53c677bb2242b3058408f5bcd1bfaaea024d8d313d34c73f-merged.mount: Deactivated successfully.
Oct  8 05:47:02 np0005475493 podman[96690]: 2025-10-08 09:47:02.054941172 +0000 UTC m=+3.522587324 container remove c2bd802e74e67419ffaaea5616c7da5fe7dcac4b6d09fb4741e073959b03646c (image=quay.io/ceph/keepalived:2.2.4, name=objective_curran, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=keepalived for Ceph, version=2.2.4, distribution-scope=public, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Keepalived on RHEL 9, name=keepalived, summary=Provides keepalived on RHEL 9 for Ceph., architecture=x86_64, build-date=2023-02-22T09:23:20, io.openshift.expose-services=, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.buildah.version=1.28.2, release=1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.openshift.tags=Ceph keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.component=keepalived-container)
Oct  8 05:47:02 np0005475493 systemd[1]: libpod-conmon-c2bd802e74e67419ffaaea5616c7da5fe7dcac4b6d09fb4741e073959b03646c.scope: Deactivated successfully.
Oct  8 05:47:02 np0005475493 systemd[1]: Reloading.
Oct  8 05:47:02 np0005475493 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  8 05:47:02 np0005475493 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  8 05:47:02 np0005475493 systemd[1]: Reloading.
Oct  8 05:47:02 np0005475493 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  8 05:47:02 np0005475493 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  8 05:47:02 np0005475493 systemd[1]: Starting Ceph keepalived.nfs.cephfs.compute-0.ekerbw for 787292cc-8154-50c4-9e00-e9be3e817149...
Oct  8 05:47:02 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:47:02 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c0016a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:47:02 np0005475493 podman[96933]: 2025-10-08 09:47:02.917270603 +0000 UTC m=+0.038658780 container create 5814788fb4b63196d097e0c60082efdca22825a3ba337ddec5c99170a1bfd89d (image=quay.io/ceph/keepalived:2.2.4, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-keepalived-nfs-cephfs-compute-0-ekerbw, io.buildah.version=1.28.2, release=1793, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, version=2.2.4, name=keepalived, io.openshift.tags=Ceph keepalived, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2023-02-22T09:23:20, architecture=x86_64, summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.component=keepalived-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.k8s.display-name=Keepalived on RHEL 9, distribution-scope=public, vendor=Red Hat, Inc., description=keepalived for Ceph)
Oct  8 05:47:02 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf608840db6bd57f42ef4334d86738f1b72c5b69cdf5dc1a5e13b649cc13a302/merged/etc/keepalived/keepalived.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 05:47:02 np0005475493 podman[96933]: 2025-10-08 09:47:02.971864896 +0000 UTC m=+0.093253093 container init 5814788fb4b63196d097e0c60082efdca22825a3ba337ddec5c99170a1bfd89d (image=quay.io/ceph/keepalived:2.2.4, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-keepalived-nfs-cephfs-compute-0-ekerbw, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-type=git, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, build-date=2023-02-22T09:23:20, summary=Provides keepalived on RHEL 9 for Ceph., vendor=Red Hat, Inc., description=keepalived for Ceph, com.redhat.component=keepalived-container, release=1793, io.k8s.display-name=Keepalived on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=2.2.4, architecture=x86_64, io.openshift.tags=Ceph keepalived, distribution-scope=public, name=keepalived, io.buildah.version=1.28.2)
Oct  8 05:47:02 np0005475493 podman[96933]: 2025-10-08 09:47:02.976434769 +0000 UTC m=+0.097822946 container start 5814788fb4b63196d097e0c60082efdca22825a3ba337ddec5c99170a1bfd89d (image=quay.io/ceph/keepalived:2.2.4, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-keepalived-nfs-cephfs-compute-0-ekerbw, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2023-02-22T09:23:20, vcs-type=git, io.k8s.display-name=Keepalived on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., description=keepalived for Ceph, release=1793, version=2.2.4, architecture=x86_64, io.openshift.expose-services=, com.redhat.component=keepalived-container, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.buildah.version=1.28.2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, summary=Provides keepalived on RHEL 9 for Ceph., distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=keepalived, io.openshift.tags=Ceph keepalived)
Oct  8 05:47:02 np0005475493 bash[96933]: 5814788fb4b63196d097e0c60082efdca22825a3ba337ddec5c99170a1bfd89d
Oct  8 05:47:02 np0005475493 podman[96933]: 2025-10-08 09:47:02.901967171 +0000 UTC m=+0.023355368 image pull 4a3a1ff181d97c6dcfa9138ad76eb99fa2c1e840298461d5a7a56133bc05b9a2 quay.io/ceph/keepalived:2.2.4
Oct  8 05:47:02 np0005475493 systemd[1]: Started Ceph keepalived.nfs.cephfs.compute-0.ekerbw for 787292cc-8154-50c4-9e00-e9be3e817149.
Oct  8 05:47:02 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-keepalived-nfs-cephfs-compute-0-ekerbw[96948]: Wed Oct  8 09:47:02 2025: Starting Keepalived v2.2.4 (08/21,2021)
Oct  8 05:47:02 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-keepalived-nfs-cephfs-compute-0-ekerbw[96948]: Wed Oct  8 09:47:02 2025: Running on Linux 5.14.0-620.el9.x86_64 #1 SMP PREEMPT_DYNAMIC Fri Sep 26 01:13:23 UTC 2025 (built for Linux 5.14.0)
Oct  8 05:47:02 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-keepalived-nfs-cephfs-compute-0-ekerbw[96948]: Wed Oct  8 09:47:02 2025: Command line: '/usr/sbin/keepalived' '-n' '-l' '-f' '/etc/keepalived/keepalived.conf'
Oct  8 05:47:02 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-keepalived-nfs-cephfs-compute-0-ekerbw[96948]: Wed Oct  8 09:47:02 2025: Configuration file /etc/keepalived/keepalived.conf
Oct  8 05:47:02 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-keepalived-nfs-cephfs-compute-0-ekerbw[96948]: Wed Oct  8 09:47:02 2025: NOTICE: setting config option max_auto_priority should result in better keepalived performance
Oct  8 05:47:02 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-keepalived-nfs-cephfs-compute-0-ekerbw[96948]: Wed Oct  8 09:47:02 2025: Starting VRRP child process, pid=4
Oct  8 05:47:03 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-keepalived-nfs-cephfs-compute-0-ekerbw[96948]: Wed Oct  8 09:47:02 2025: Startup complete
Oct  8 05:47:03 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:47:03 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  8 05:47:03 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-keepalived-nfs-cephfs-compute-0-ekerbw[96948]: Wed Oct  8 09:47:03 2025: (VI_0) Entering BACKUP STATE (init)
Oct  8 05:47:03 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-keepalived-nfs-cephfs-compute-0-ekerbw[96948]: Wed Oct  8 09:47:03 2025: VRRP_Script(check_backend) succeeded
Oct  8 05:47:03 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  8 05:47:03 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:47:03 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  8 05:47:03 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:47:03 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Oct  8 05:47:03 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:47:03 np0005475493 ceph-mgr[73869]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Oct  8 05:47:03 np0005475493 ceph-mgr[73869]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Oct  8 05:47:03 np0005475493 ceph-mgr[73869]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Oct  8 05:47:03 np0005475493 ceph-mgr[73869]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Oct  8 05:47:03 np0005475493 ceph-mgr[73869]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Oct  8 05:47:03 np0005475493 ceph-mgr[73869]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Oct  8 05:47:03 np0005475493 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Deploying daemon keepalived.nfs.cephfs.compute-2.bmcbib on compute-2
Oct  8 05:47:03 np0005475493 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Deploying daemon keepalived.nfs.cephfs.compute-2.bmcbib on compute-2
Oct  8 05:47:03 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v32: 198 pgs: 198 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct  8 05:47:03 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:47:03 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f66280025c0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:47:03 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e51 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  8 05:47:03 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:47:03 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f66140023e0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:47:04 np0005475493 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:47:04 np0005475493 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:47:04 np0005475493 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:47:04 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:47:04 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f66140023e0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:47:05 np0005475493 ceph-mon[73572]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Oct  8 05:47:05 np0005475493 ceph-mon[73572]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Oct  8 05:47:05 np0005475493 ceph-mon[73572]: 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Oct  8 05:47:05 np0005475493 ceph-mon[73572]: Deploying daemon keepalived.nfs.cephfs.compute-2.bmcbib on compute-2
Oct  8 05:47:05 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v33: 198 pgs: 198 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Oct  8 05:47:05 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:47:05 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c002b10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:47:05 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:47:05 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f66280025c0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:47:06 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:47:06 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  8 05:47:06 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:47:06 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 05:47:06 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-keepalived-nfs-cephfs-compute-0-ekerbw[96948]: Wed Oct  8 09:47:06 2025: (VI_0) Entering MASTER STATE
Oct  8 05:47:06 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:47:06 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f66140023e0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:47:07 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v34: 198 pgs: 198 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Oct  8 05:47:07 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:47:07 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f66140023e0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:47:07 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:47:07 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c002b10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:47:07 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct  8 05:47:07 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:47:07 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct  8 05:47:07 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:47:07 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Oct  8 05:47:07 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:47:07 np0005475493 ceph-mgr[73869]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Oct  8 05:47:07 np0005475493 ceph-mgr[73869]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Oct  8 05:47:07 np0005475493 ceph-mgr[73869]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Oct  8 05:47:07 np0005475493 ceph-mgr[73869]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Oct  8 05:47:07 np0005475493 ceph-mgr[73869]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Oct  8 05:47:07 np0005475493 ceph-mgr[73869]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Oct  8 05:47:07 np0005475493 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Deploying daemon keepalived.nfs.cephfs.compute-1.sbjzmp on compute-1
Oct  8 05:47:07 np0005475493 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Deploying daemon keepalived.nfs.cephfs.compute-1.sbjzmp on compute-1
Oct  8 05:47:08 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e51 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  8 05:47:08 np0005475493 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:47:08 np0005475493 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:47:08 np0005475493 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:47:08 np0005475493 ceph-mon[73572]: 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Oct  8 05:47:08 np0005475493 ceph-mon[73572]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Oct  8 05:47:08 np0005475493 ceph-mon[73572]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Oct  8 05:47:08 np0005475493 ceph-mon[73572]: Deploying daemon keepalived.nfs.cephfs.compute-1.sbjzmp on compute-1
Oct  8 05:47:08 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:47:08 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f66280025c0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:47:09 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:47:09 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct  8 05:47:09 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v35: 198 pgs: 198 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Oct  8 05:47:09 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:47:09 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f66280025c0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:47:09 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:47:09 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6608002a20 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:47:10 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:47:10 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c002b10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:47:11 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v36: 198 pgs: 198 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 1023 B/s wr, 3 op/s
Oct  8 05:47:11 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:47:11 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f66140023e0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:47:11 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-keepalived-nfs-cephfs-compute-0-ekerbw[96948]: Wed Oct  8 09:47:11 2025: (VI_0) Received advert from 192.168.122.102 with lower priority 90, ours 100, forcing new election
Oct  8 05:47:11 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:47:11 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f66280025c0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:47:12 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct  8 05:47:12 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:47:12 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct  8 05:47:12 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:47:12 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Oct  8 05:47:12 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:47:12 np0005475493 ceph-mgr[73869]: [progress INFO root] complete: finished ev 7ba10d6d-35d7-417a-acf8-1cda7124e4f2 (Updating ingress.nfs.cephfs deployment (+6 -> 6))
Oct  8 05:47:12 np0005475493 ceph-mgr[73869]: [progress INFO root] Completed event 7ba10d6d-35d7-417a-acf8-1cda7124e4f2 (Updating ingress.nfs.cephfs deployment (+6 -> 6)) in 29 seconds
Oct  8 05:47:12 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Oct  8 05:47:12 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:47:12 np0005475493 ceph-mgr[73869]: [progress INFO root] update: starting ev c79faeab-2ee3-4aba-a667-4c696cb5984a (Updating alertmanager deployment (+1 -> 1))
Oct  8 05:47:12 np0005475493 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Deploying daemon alertmanager.compute-0 on compute-0
Oct  8 05:47:12 np0005475493 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Deploying daemon alertmanager.compute-0 on compute-0
Oct  8 05:47:12 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:47:12 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6608003340 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:47:13 np0005475493 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:47:13 np0005475493 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:47:13 np0005475493 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:47:13 np0005475493 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:47:13 np0005475493 ceph-mgr[73869]: [balancer INFO root] Optimize plan auto_2025-10-08_09:47:13
Oct  8 05:47:13 np0005475493 ceph-mgr[73869]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  8 05:47:13 np0005475493 ceph-mgr[73869]: [balancer INFO root] do_upmap
Oct  8 05:47:13 np0005475493 ceph-mgr[73869]: [balancer INFO root] pools ['volumes', '.mgr', 'vms', 'images', 'default.rgw.control', 'cephfs.cephfs.meta', '.nfs', 'backups', 'cephfs.cephfs.data', '.rgw.root', 'default.rgw.log', 'default.rgw.meta']
Oct  8 05:47:13 np0005475493 ceph-mgr[73869]: [balancer INFO root] prepared 0/10 upmap changes
Oct  8 05:47:13 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v37: 198 pgs: 198 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Oct  8 05:47:13 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:47:13 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c003c10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:47:13 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] _maybe_adjust
Oct  8 05:47:13 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 05:47:13 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  8 05:47:13 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 05:47:13 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 05:47:13 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 05:47:13 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 05:47:13 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 05:47:13 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 05:47:13 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 05:47:13 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 05:47:13 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 05:47:13 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  8 05:47:13 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 05:47:13 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 05:47:13 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 05:47:13 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 1)
Oct  8 05:47:13 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 05:47:13 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 1)
Oct  8 05:47:13 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 05:47:13 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Oct  8 05:47:13 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 05:47:13 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 0.0 of space, bias 4.0, pg target 0.0 quantized to 32 (current 1)
Oct  8 05:47:13 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 05:47:13 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 1)
Oct  8 05:47:13 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 05:47:13 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 05:47:13 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"} v 0)
Oct  8 05:47:13 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]: dispatch
Oct  8 05:47:13 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 05:47:13 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 05:47:13 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 05:47:13 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 05:47:13 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  8 05:47:13 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  8 05:47:13 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  8 05:47:13 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  8 05:47:13 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  8 05:47:13 np0005475493 ceph-mgr[73869]: [progress INFO root] Writing back 16 completed events
Oct  8 05:47:13 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct  8 05:47:13 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:47:13 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  8 05:47:13 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  8 05:47:13 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  8 05:47:13 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  8 05:47:13 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  8 05:47:13 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e51 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  8 05:47:13 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:47:13 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6614003f10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:47:14 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e51 do_prune osdmap full prune enabled
Oct  8 05:47:14 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Oct  8 05:47:14 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e52 e52: 3 total, 3 up, 3 in
Oct  8 05:47:14 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e52: 3 total, 3 up, 3 in
Oct  8 05:47:14 np0005475493 ceph-mgr[73869]: [progress INFO root] update: starting ev f81b6bbc-4070-4d6d-ab15-864f1e35b4da (PG autoscaler increasing pool 8 PGs from 1 to 32)
Oct  8 05:47:14 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"} v 0)
Oct  8 05:47:14 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]: dispatch
Oct  8 05:47:14 np0005475493 ceph-mon[73572]: Deploying daemon alertmanager.compute-0 on compute-0
Oct  8 05:47:14 np0005475493 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]: dispatch
Oct  8 05:47:14 np0005475493 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:47:14 np0005475493 podman[97052]: 2025-10-08 09:47:14.71350008 +0000 UTC m=+1.463676270 volume create e7fbce31307d52020c8fa218d057146ec835c7fd69c2b223d3901ba1f837055e
Oct  8 05:47:14 np0005475493 podman[97052]: 2025-10-08 09:47:14.728644447 +0000 UTC m=+1.478820627 container create ac58a3f1b2b9240bb8dba2f8ee7872dcfa9f74ae201642a7603b9964337a235e (image=quay.io/prometheus/alertmanager:v0.25.0, name=loving_elion, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  8 05:47:14 np0005475493 systemd[1]: Started libpod-conmon-ac58a3f1b2b9240bb8dba2f8ee7872dcfa9f74ae201642a7603b9964337a235e.scope.
Oct  8 05:47:14 np0005475493 podman[97052]: 2025-10-08 09:47:14.693821219 +0000 UTC m=+1.443997479 image pull c8568f914cd25b2062c44e9f79f9c18da6e3b85fe0c47a12a2191c61426c2b19 quay.io/prometheus/alertmanager:v0.25.0
Oct  8 05:47:14 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:47:14 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/248d61c1de96cfce7a7e31ac5dae9b37d220e52aa8fd494e3bf9011ae3941936/merged/alertmanager supports timestamps until 2038 (0x7fffffff)
Oct  8 05:47:14 np0005475493 podman[97052]: 2025-10-08 09:47:14.808322241 +0000 UTC m=+1.558498411 container init ac58a3f1b2b9240bb8dba2f8ee7872dcfa9f74ae201642a7603b9964337a235e (image=quay.io/prometheus/alertmanager:v0.25.0, name=loving_elion, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  8 05:47:14 np0005475493 podman[97052]: 2025-10-08 09:47:14.814490726 +0000 UTC m=+1.564666876 container start ac58a3f1b2b9240bb8dba2f8ee7872dcfa9f74ae201642a7603b9964337a235e (image=quay.io/prometheus/alertmanager:v0.25.0, name=loving_elion, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  8 05:47:14 np0005475493 loving_elion[97188]: 65534 65534
Oct  8 05:47:14 np0005475493 systemd[1]: libpod-ac58a3f1b2b9240bb8dba2f8ee7872dcfa9f74ae201642a7603b9964337a235e.scope: Deactivated successfully.
Oct  8 05:47:14 np0005475493 podman[97052]: 2025-10-08 09:47:14.823859891 +0000 UTC m=+1.574036131 container attach ac58a3f1b2b9240bb8dba2f8ee7872dcfa9f74ae201642a7603b9964337a235e (image=quay.io/prometheus/alertmanager:v0.25.0, name=loving_elion, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  8 05:47:14 np0005475493 podman[97052]: 2025-10-08 09:47:14.824604134 +0000 UTC m=+1.574780294 container died ac58a3f1b2b9240bb8dba2f8ee7872dcfa9f74ae201642a7603b9964337a235e (image=quay.io/prometheus/alertmanager:v0.25.0, name=loving_elion, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  8 05:47:14 np0005475493 systemd[1]: var-lib-containers-storage-overlay-248d61c1de96cfce7a7e31ac5dae9b37d220e52aa8fd494e3bf9011ae3941936-merged.mount: Deactivated successfully.
Oct  8 05:47:14 np0005475493 podman[97052]: 2025-10-08 09:47:14.873294631 +0000 UTC m=+1.623470781 container remove ac58a3f1b2b9240bb8dba2f8ee7872dcfa9f74ae201642a7603b9964337a235e (image=quay.io/prometheus/alertmanager:v0.25.0, name=loving_elion, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  8 05:47:14 np0005475493 podman[97052]: 2025-10-08 09:47:14.87742509 +0000 UTC m=+1.627601260 volume remove e7fbce31307d52020c8fa218d057146ec835c7fd69c2b223d3901ba1f837055e
Oct  8 05:47:14 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:47:14 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f66280025c0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:47:14 np0005475493 systemd[1]: libpod-conmon-ac58a3f1b2b9240bb8dba2f8ee7872dcfa9f74ae201642a7603b9964337a235e.scope: Deactivated successfully.
Oct  8 05:47:14 np0005475493 podman[97207]: 2025-10-08 09:47:14.939333153 +0000 UTC m=+0.037903766 volume create 7c56196c125ab8ddf6545850be572c44c3507a2ce4af7c71a9194b008fa1e728
Oct  8 05:47:14 np0005475493 podman[97207]: 2025-10-08 09:47:14.948206083 +0000 UTC m=+0.046776696 container create adad2e4012eca1a7b05f9ee512eb07823fdf41746639e0e08f99947de7b43999 (image=quay.io/prometheus/alertmanager:v0.25.0, name=intelligent_dubinsky, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  8 05:47:14 np0005475493 systemd[1]: Started libpod-conmon-adad2e4012eca1a7b05f9ee512eb07823fdf41746639e0e08f99947de7b43999.scope.
Oct  8 05:47:15 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:47:15 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6698b89fd6504ea1cbd0075637e16204bd04b9c7acaa20998acd79970229f323/merged/alertmanager supports timestamps until 2038 (0x7fffffff)
Oct  8 05:47:15 np0005475493 podman[97207]: 2025-10-08 09:47:14.922934846 +0000 UTC m=+0.021505479 image pull c8568f914cd25b2062c44e9f79f9c18da6e3b85fe0c47a12a2191c61426c2b19 quay.io/prometheus/alertmanager:v0.25.0
Oct  8 05:47:15 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e52 do_prune osdmap full prune enabled
Oct  8 05:47:15 np0005475493 podman[97207]: 2025-10-08 09:47:15.022961651 +0000 UTC m=+0.121532294 container init adad2e4012eca1a7b05f9ee512eb07823fdf41746639e0e08f99947de7b43999 (image=quay.io/prometheus/alertmanager:v0.25.0, name=intelligent_dubinsky, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  8 05:47:15 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Oct  8 05:47:15 np0005475493 podman[97207]: 2025-10-08 09:47:15.02895348 +0000 UTC m=+0.127524103 container start adad2e4012eca1a7b05f9ee512eb07823fdf41746639e0e08f99947de7b43999 (image=quay.io/prometheus/alertmanager:v0.25.0, name=intelligent_dubinsky, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  8 05:47:15 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e53 e53: 3 total, 3 up, 3 in
Oct  8 05:47:15 np0005475493 intelligent_dubinsky[97223]: 65534 65534
Oct  8 05:47:15 np0005475493 systemd[1]: libpod-adad2e4012eca1a7b05f9ee512eb07823fdf41746639e0e08f99947de7b43999.scope: Deactivated successfully.
Oct  8 05:47:15 np0005475493 podman[97207]: 2025-10-08 09:47:15.032340587 +0000 UTC m=+0.130911220 container attach adad2e4012eca1a7b05f9ee512eb07823fdf41746639e0e08f99947de7b43999 (image=quay.io/prometheus/alertmanager:v0.25.0, name=intelligent_dubinsky, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  8 05:47:15 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e53: 3 total, 3 up, 3 in
Oct  8 05:47:15 np0005475493 podman[97207]: 2025-10-08 09:47:15.033544016 +0000 UTC m=+0.132114639 container died adad2e4012eca1a7b05f9ee512eb07823fdf41746639e0e08f99947de7b43999 (image=quay.io/prometheus/alertmanager:v0.25.0, name=intelligent_dubinsky, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  8 05:47:15 np0005475493 ceph-mgr[73869]: [progress INFO root] update: starting ev e7286b65-9033-43ec-a2dd-3b3dd3094fdb (PG autoscaler increasing pool 9 PGs from 1 to 32)
Oct  8 05:47:15 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"} v 0)
Oct  8 05:47:15 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]: dispatch
Oct  8 05:47:15 np0005475493 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Oct  8 05:47:15 np0005475493 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]: dispatch
Oct  8 05:47:15 np0005475493 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Oct  8 05:47:15 np0005475493 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]: dispatch
Oct  8 05:47:15 np0005475493 systemd[1]: var-lib-containers-storage-overlay-6698b89fd6504ea1cbd0075637e16204bd04b9c7acaa20998acd79970229f323-merged.mount: Deactivated successfully.
Oct  8 05:47:15 np0005475493 podman[97207]: 2025-10-08 09:47:15.074822177 +0000 UTC m=+0.173392790 container remove adad2e4012eca1a7b05f9ee512eb07823fdf41746639e0e08f99947de7b43999 (image=quay.io/prometheus/alertmanager:v0.25.0, name=intelligent_dubinsky, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  8 05:47:15 np0005475493 podman[97207]: 2025-10-08 09:47:15.078797082 +0000 UTC m=+0.177367695 volume remove 7c56196c125ab8ddf6545850be572c44c3507a2ce4af7c71a9194b008fa1e728
Oct  8 05:47:15 np0005475493 systemd[1]: libpod-conmon-adad2e4012eca1a7b05f9ee512eb07823fdf41746639e0e08f99947de7b43999.scope: Deactivated successfully.
Oct  8 05:47:15 np0005475493 systemd[1]: Reloading.
Oct  8 05:47:15 np0005475493 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  8 05:47:15 np0005475493 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  8 05:47:15 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v40: 198 pgs: 198 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 639 B/s wr, 2 op/s
Oct  8 05:47:15 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"} v 0)
Oct  8 05:47:15 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct  8 05:47:15 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"} v 0)
Oct  8 05:47:15 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct  8 05:47:15 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:47:15 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6608003340 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:47:15 np0005475493 systemd[1]: Reloading.
Oct  8 05:47:15 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:47:15 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c003c10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:47:15 np0005475493 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  8 05:47:15 np0005475493 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  8 05:47:15 np0005475493 systemd[1]: Starting Ceph alertmanager.compute-0 for 787292cc-8154-50c4-9e00-e9be3e817149...
Oct  8 05:47:15 np0005475493 podman[97366]: 2025-10-08 09:47:15.880098898 +0000 UTC m=+0.033910030 volume create 00310bf376a0b175ca8d85fb11d168f2f95f64f3756abaadb6e57846efdbc0ea
Oct  8 05:47:15 np0005475493 podman[97366]: 2025-10-08 09:47:15.88995522 +0000 UTC m=+0.043766352 container create 8d342ba820880b4150da0520ca2c2bd15e2dae173214d906ae46d290c18c1a1e (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  8 05:47:15 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-haproxy-nfs-cephfs-compute-0-cwhopp[96588]: [WARNING] 280/094715 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct  8 05:47:15 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56ce96f5b36afca03959d3dd28785acc44bc98ac7848532a544c80c3ee2cbbf3/merged/alertmanager supports timestamps until 2038 (0x7fffffff)
Oct  8 05:47:15 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56ce96f5b36afca03959d3dd28785acc44bc98ac7848532a544c80c3ee2cbbf3/merged/etc/alertmanager supports timestamps until 2038 (0x7fffffff)
Oct  8 05:47:15 np0005475493 podman[97366]: 2025-10-08 09:47:15.948799726 +0000 UTC m=+0.102610948 container init 8d342ba820880b4150da0520ca2c2bd15e2dae173214d906ae46d290c18c1a1e (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  8 05:47:15 np0005475493 podman[97366]: 2025-10-08 09:47:15.953509874 +0000 UTC m=+0.107321046 container start 8d342ba820880b4150da0520ca2c2bd15e2dae173214d906ae46d290c18c1a1e (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  8 05:47:15 np0005475493 bash[97366]: 8d342ba820880b4150da0520ca2c2bd15e2dae173214d906ae46d290c18c1a1e
Oct  8 05:47:15 np0005475493 podman[97366]: 2025-10-08 09:47:15.868402379 +0000 UTC m=+0.022213531 image pull c8568f914cd25b2062c44e9f79f9c18da6e3b85fe0c47a12a2191c61426c2b19 quay.io/prometheus/alertmanager:v0.25.0
Oct  8 05:47:15 np0005475493 systemd[1]: Started Ceph alertmanager.compute-0 for 787292cc-8154-50c4-9e00-e9be3e817149.
Oct  8 05:47:15 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[97382]: ts=2025-10-08T09:47:15.980Z caller=main.go:240 level=info msg="Starting Alertmanager" version="(version=0.25.0, branch=HEAD, revision=258fab7cdd551f2cf251ed0348f0ad7289aee789)"
Oct  8 05:47:15 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[97382]: ts=2025-10-08T09:47:15.980Z caller=main.go:241 level=info build_context="(go=go1.19.4, user=root@abe866dd5717, date=20221222-14:51:36)"
Oct  8 05:47:15 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[97382]: ts=2025-10-08T09:47:15.993Z caller=cluster.go:185 level=info component=cluster msg="setting advertise address explicitly" addr=192.168.122.100 port=9094
Oct  8 05:47:15 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[97382]: ts=2025-10-08T09:47:15.995Z caller=cluster.go:681 level=info component=cluster msg="Waiting for gossip to settle..." interval=2s
Oct  8 05:47:16 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  8 05:47:16 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:47:16 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  8 05:47:16 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e53 do_prune osdmap full prune enabled
Oct  8 05:47:16 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[97382]: ts=2025-10-08T09:47:16.040Z caller=coordinator.go:113 level=info component=configuration msg="Loading configuration file" file=/etc/alertmanager/alertmanager.yml
Oct  8 05:47:16 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[97382]: ts=2025-10-08T09:47:16.041Z caller=coordinator.go:126 level=info component=configuration msg="Completed loading of configuration file" file=/etc/alertmanager/alertmanager.yml
Oct  8 05:47:16 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:47:16 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.alertmanager}] v 0)
Oct  8 05:47:16 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[97382]: ts=2025-10-08T09:47:16.047Z caller=tls_config.go:232 level=info msg="Listening on" address=192.168.122.100:9093
Oct  8 05:47:16 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[97382]: ts=2025-10-08T09:47:16.047Z caller=tls_config.go:235 level=info msg="TLS is disabled." http2=false address=192.168.122.100:9093
Oct  8 05:47:16 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Oct  8 05:47:16 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Oct  8 05:47:16 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Oct  8 05:47:16 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e54 e54: 3 total, 3 up, 3 in
Oct  8 05:47:16 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e54: 3 total, 3 up, 3 in
Oct  8 05:47:16 np0005475493 ceph-mgr[73869]: [progress INFO root] update: starting ev 93b10f5d-f027-4d2d-852f-db5ecd9fbce7 (PG autoscaler increasing pool 10 PGs from 1 to 32)
Oct  8 05:47:16 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"} v 0)
Oct  8 05:47:16 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]: dispatch
Oct  8 05:47:16 np0005475493 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct  8 05:47:16 np0005475493 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct  8 05:47:16 np0005475493 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:47:16 np0005475493 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:47:16 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:47:16 np0005475493 ceph-mgr[73869]: [progress INFO root] complete: finished ev c79faeab-2ee3-4aba-a667-4c696cb5984a (Updating alertmanager deployment (+1 -> 1))
Oct  8 05:47:16 np0005475493 ceph-mgr[73869]: [progress INFO root] Completed event c79faeab-2ee3-4aba-a667-4c696cb5984a (Updating alertmanager deployment (+1 -> 1)) in 3 seconds
Oct  8 05:47:16 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.alertmanager}] v 0)
Oct  8 05:47:16 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:47:16 np0005475493 ceph-mgr[73869]: [progress INFO root] update: starting ev 9550db9d-3c92-4760-9334-11f23ea86e6f (Updating grafana deployment (+1 -> 1))
Oct  8 05:47:16 np0005475493 ceph-mgr[73869]: [cephadm INFO cephadm.services.monitoring] Regenerating cephadm self-signed grafana TLS certificates
Oct  8 05:47:16 np0005475493 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Regenerating cephadm self-signed grafana TLS certificates
Oct  8 05:47:16 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cert_store.cert.grafana_cert}] v 0)
Oct  8 05:47:16 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:47:16 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cert_store.key.grafana_key}] v 0)
Oct  8 05:47:16 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:47:16 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"} v 0)
Oct  8 05:47:16 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch
Oct  8 05:47:16 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch
Oct  8 05:47:16 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/GRAFANA_API_SSL_VERIFY}] v 0)
Oct  8 05:47:16 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-keepalived-nfs-cephfs-compute-0-ekerbw[96948]: Wed Oct  8 09:47:16 2025: (VI_0) Received advert from 192.168.122.101 with lower priority 90, ours 100, forcing new election
Oct  8 05:47:16 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:47:16 np0005475493 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Deploying daemon grafana.compute-0 on compute-0
Oct  8 05:47:16 np0005475493 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Deploying daemon grafana.compute-0 on compute-0
Oct  8 05:47:16 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:47:16 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6614003f10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:47:17 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e54 do_prune osdmap full prune enabled
Oct  8 05:47:17 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v42: 260 pgs: 62 unknown, 198 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 B/s wr, 0 op/s
Oct  8 05:47:17 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"} v 0)
Oct  8 05:47:17 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct  8 05:47:17 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Oct  8 05:47:17 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e55 e55: 3 total, 3 up, 3 in
Oct  8 05:47:17 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e55: 3 total, 3 up, 3 in
Oct  8 05:47:17 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 54 pg[9.0( v 45'1018 (0'0,45'1018] local-lis/les=38/39 n=178 ec=38/38 lis/c=38/38 les/c/f=39/39/0 sis=54 pruub=14.250038147s) [1] r=0 lpr=54 pi=[38,54)/1 crt=45'1018 lcod 45'1017 mlcod 45'1017 active pruub 179.526794434s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:47:17 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 54 pg[8.0( v 37'12 (0'0,37'12] local-lis/les=36/37 n=6 ec=36/36 lis/c=36/36 les/c/f=37/37/0 sis=54 pruub=12.293242455s) [1] r=0 lpr=54 pi=[36,54)/1 crt=37'12 lcod 37'11 mlcod 37'11 active pruub 177.570617676s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:47:17 np0005475493 ceph-mgr[73869]: [progress INFO root] update: starting ev 5180560c-0a09-4c25-9066-0eb3d77771f3 (PG autoscaler increasing pool 11 PGs from 1 to 32)
Oct  8 05:47:17 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num", "val": "32"} v 0)
Oct  8 05:47:17 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num", "val": "32"}]: dispatch
Oct  8 05:47:17 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 55 pg[8.0( v 37'12 lc 0'0 (0'0,37'12] local-lis/les=36/37 n=0 ec=36/36 lis/c=36/36 les/c/f=37/37/0 sis=54 pruub=12.293242455s) [1] r=0 lpr=54 pi=[36,54)/1 crt=37'12 lcod 37'11 mlcod 0'0 unknown pruub 177.570617676s@ mbc={}] state<Start>: transitioning to Primary
Oct  8 05:47:17 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1).collection(8.0_head 0x559f2c6cefc0) operator()   moving buffer(0x559f2b2c85c8 space 0x559f2b24a1b0 0x0~1000 clean)
Oct  8 05:47:17 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1).collection(8.0_head 0x559f2c6cefc0) operator()   moving buffer(0x559f2b2e3a68 space 0x559f2b3261b0 0x0~1000 clean)
Oct  8 05:47:17 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1).collection(8.0_head 0x559f2c6cefc0) operator()   moving buffer(0x559f2b2e3388 space 0x559f2b326420 0x0~1000 clean)
Oct  8 05:47:17 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1).collection(8.0_head 0x559f2c6cefc0) operator()   moving buffer(0x559f2b2e2f28 space 0x559f2b3265c0 0x0~1000 clean)
Oct  8 05:47:17 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 55 pg[8.4( v 37'12 lc 0'0 (0'0,37'12] local-lis/les=36/37 n=1 ec=54/36 lis/c=36/36 les/c/f=37/37/0 sis=54) [1] r=0 lpr=54 pi=[36,54)/1 crt=37'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:47:17 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 55 pg[8.2( v 37'12 lc 0'0 (0'0,37'12] local-lis/les=36/37 n=1 ec=54/36 lis/c=36/36 les/c/f=37/37/0 sis=54) [1] r=0 lpr=54 pi=[36,54)/1 crt=37'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:47:17 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 55 pg[8.1a( v 37'12 lc 0'0 (0'0,37'12] local-lis/les=36/37 n=0 ec=54/36 lis/c=36/36 les/c/f=37/37/0 sis=54) [1] r=0 lpr=54 pi=[36,54)/1 crt=37'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:47:17 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 55 pg[8.15( v 37'12 lc 0'0 (0'0,37'12] local-lis/les=36/37 n=0 ec=54/36 lis/c=36/36 les/c/f=37/37/0 sis=54) [1] r=0 lpr=54 pi=[36,54)/1 crt=37'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:47:17 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 55 pg[8.b( v 37'12 lc 0'0 (0'0,37'12] local-lis/les=36/37 n=0 ec=54/36 lis/c=36/36 les/c/f=37/37/0 sis=54) [1] r=0 lpr=54 pi=[36,54)/1 crt=37'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:47:17 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 55 pg[8.e( v 37'12 lc 0'0 (0'0,37'12] local-lis/les=36/37 n=0 ec=54/36 lis/c=36/36 les/c/f=37/37/0 sis=54) [1] r=0 lpr=54 pi=[36,54)/1 crt=37'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:47:17 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 55 pg[8.14( v 37'12 lc 0'0 (0'0,37'12] local-lis/les=36/37 n=0 ec=54/36 lis/c=36/36 les/c/f=37/37/0 sis=54) [1] r=0 lpr=54 pi=[36,54)/1 crt=37'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:47:17 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 55 pg[8.8( v 37'12 lc 0'0 (0'0,37'12] local-lis/les=36/37 n=0 ec=54/36 lis/c=36/36 les/c/f=37/37/0 sis=54) [1] r=0 lpr=54 pi=[36,54)/1 crt=37'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:47:17 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 55 pg[8.9( v 37'12 lc 0'0 (0'0,37'12] local-lis/les=36/37 n=0 ec=54/36 lis/c=36/36 les/c/f=37/37/0 sis=54) [1] r=0 lpr=54 pi=[36,54)/1 crt=37'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:47:17 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 55 pg[8.7( v 37'12 lc 0'0 (0'0,37'12] local-lis/les=36/37 n=0 ec=54/36 lis/c=36/36 les/c/f=37/37/0 sis=54) [1] r=0 lpr=54 pi=[36,54)/1 crt=37'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:47:17 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 55 pg[8.c( v 37'12 lc 0'0 (0'0,37'12] local-lis/les=36/37 n=0 ec=54/36 lis/c=36/36 les/c/f=37/37/0 sis=54) [1] r=0 lpr=54 pi=[36,54)/1 crt=37'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:47:17 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 55 pg[8.18( v 37'12 lc 0'0 (0'0,37'12] local-lis/les=36/37 n=0 ec=54/36 lis/c=36/36 les/c/f=37/37/0 sis=54) [1] r=0 lpr=54 pi=[36,54)/1 crt=37'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:47:17 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 55 pg[8.1e( v 37'12 lc 0'0 (0'0,37'12] local-lis/les=36/37 n=0 ec=54/36 lis/c=36/36 les/c/f=37/37/0 sis=54) [1] r=0 lpr=54 pi=[36,54)/1 crt=37'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:47:17 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 55 pg[8.3( v 37'12 lc 0'0 (0'0,37'12] local-lis/les=36/37 n=1 ec=54/36 lis/c=36/36 les/c/f=37/37/0 sis=54) [1] r=0 lpr=54 pi=[36,54)/1 crt=37'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:47:17 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 55 pg[8.17( v 37'12 lc 0'0 (0'0,37'12] local-lis/les=36/37 n=0 ec=54/36 lis/c=36/36 les/c/f=37/37/0 sis=54) [1] r=0 lpr=54 pi=[36,54)/1 crt=37'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:47:17 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 55 pg[8.1b( v 37'12 lc 0'0 (0'0,37'12] local-lis/les=36/37 n=0 ec=54/36 lis/c=36/36 les/c/f=37/37/0 sis=54) [1] r=0 lpr=54 pi=[36,54)/1 crt=37'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:47:17 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 55 pg[8.5( v 37'12 lc 0'0 (0'0,37'12] local-lis/les=36/37 n=1 ec=54/36 lis/c=36/36 les/c/f=37/37/0 sis=54) [1] r=0 lpr=54 pi=[36,54)/1 crt=37'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:47:17 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 55 pg[8.19( v 37'12 lc 0'0 (0'0,37'12] local-lis/les=36/37 n=0 ec=54/36 lis/c=36/36 les/c/f=37/37/0 sis=54) [1] r=0 lpr=54 pi=[36,54)/1 crt=37'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:47:17 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 55 pg[8.1f( v 37'12 lc 0'0 (0'0,37'12] local-lis/les=36/37 n=0 ec=54/36 lis/c=36/36 les/c/f=37/37/0 sis=54) [1] r=0 lpr=54 pi=[36,54)/1 crt=37'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:47:17 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 55 pg[8.1( v 37'12 (0'0,37'12] local-lis/les=36/37 n=1 ec=54/36 lis/c=36/36 les/c/f=37/37/0 sis=54) [1] r=0 lpr=54 pi=[36,54)/1 crt=37'12 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:47:17 np0005475493 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Oct  8 05:47:17 np0005475493 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Oct  8 05:47:17 np0005475493 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Oct  8 05:47:17 np0005475493 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]: dispatch
Oct  8 05:47:17 np0005475493 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:47:17 np0005475493 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:47:17 np0005475493 ceph-mon[73572]: Regenerating cephadm self-signed grafana TLS certificates
Oct  8 05:47:17 np0005475493 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:47:17 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 55 pg[8.11( v 37'12 lc 0'0 (0'0,37'12] local-lis/les=36/37 n=0 ec=54/36 lis/c=36/36 les/c/f=37/37/0 sis=54) [1] r=0 lpr=54 pi=[36,54)/1 crt=37'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:47:17 np0005475493 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:47:17 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 55 pg[8.16( v 37'12 lc 0'0 (0'0,37'12] local-lis/les=36/37 n=0 ec=54/36 lis/c=36/36 les/c/f=37/37/0 sis=54) [1] r=0 lpr=54 pi=[36,54)/1 crt=37'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:47:17 np0005475493 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch
Oct  8 05:47:17 np0005475493 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:47:17 np0005475493 ceph-mon[73572]: Deploying daemon grafana.compute-0 on compute-0
Oct  8 05:47:17 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 55 pg[8.10( v 37'12 lc 0'0 (0'0,37'12] local-lis/les=36/37 n=0 ec=54/36 lis/c=36/36 les/c/f=37/37/0 sis=54) [1] r=0 lpr=54 pi=[36,54)/1 crt=37'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:47:17 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 55 pg[8.12( v 37'12 lc 0'0 (0'0,37'12] local-lis/les=36/37 n=0 ec=54/36 lis/c=36/36 les/c/f=37/37/0 sis=54) [1] r=0 lpr=54 pi=[36,54)/1 crt=37'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:47:17 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 55 pg[8.1d( v 37'12 lc 0'0 (0'0,37'12] local-lis/les=36/37 n=0 ec=54/36 lis/c=36/36 les/c/f=37/37/0 sis=54) [1] r=0 lpr=54 pi=[36,54)/1 crt=37'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:47:17 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 55 pg[8.f( v 37'12 lc 0'0 (0'0,37'12] local-lis/les=36/37 n=0 ec=54/36 lis/c=36/36 les/c/f=37/37/0 sis=54) [1] r=0 lpr=54 pi=[36,54)/1 crt=37'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:47:17 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 55 pg[8.13( v 37'12 lc 0'0 (0'0,37'12] local-lis/les=36/37 n=0 ec=54/36 lis/c=36/36 les/c/f=37/37/0 sis=54) [1] r=0 lpr=54 pi=[36,54)/1 crt=37'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:47:17 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 55 pg[8.1c( v 37'12 lc 0'0 (0'0,37'12] local-lis/les=36/37 n=0 ec=54/36 lis/c=36/36 les/c/f=37/37/0 sis=54) [1] r=0 lpr=54 pi=[36,54)/1 crt=37'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:47:17 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 55 pg[8.6( v 37'12 lc 0'0 (0'0,37'12] local-lis/les=36/37 n=1 ec=54/36 lis/c=36/36 les/c/f=37/37/0 sis=54) [1] r=0 lpr=54 pi=[36,54)/1 crt=37'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:47:17 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 55 pg[8.a( v 37'12 lc 0'0 (0'0,37'12] local-lis/les=36/37 n=0 ec=54/36 lis/c=36/36 les/c/f=37/37/0 sis=54) [1] r=0 lpr=54 pi=[36,54)/1 crt=37'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:47:17 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 55 pg[8.d( v 37'12 lc 0'0 (0'0,37'12] local-lis/les=36/37 n=0 ec=54/36 lis/c=36/36 les/c/f=37/37/0 sis=54) [1] r=0 lpr=54 pi=[36,54)/1 crt=37'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:47:17 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 55 pg[9.0( v 45'1018 lc 0'0 (0'0,45'1018] local-lis/les=38/39 n=5 ec=38/38 lis/c=38/38 les/c/f=39/39/0 sis=54 pruub=14.250038147s) [1] r=0 lpr=54 pi=[38,54)/1 crt=45'1018 lcod 45'1017 mlcod 0'0 unknown pruub 179.526794434s@ mbc={}] state<Start>: transitioning to Primary
Oct  8 05:47:17 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x559f2c45e6c0) operator()   moving buffer(0x559f2abc8fc8 space 0x559f2b24d2c0 0x0~1000 clean)
Oct  8 05:47:17 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x559f2c45e6c0) operator()   moving buffer(0x559f2b2e5ec8 space 0x559f2b1d4760 0x0~1000 clean)
Oct  8 05:47:17 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x559f2c45e6c0) operator()   moving buffer(0x559f2b2d0c08 space 0x559f2b24d940 0x0~1000 clean)
Oct  8 05:47:17 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x559f2c45e6c0) operator()   moving buffer(0x559f2b2c8de8 space 0x559f2b24dc80 0x0~1000 clean)
Oct  8 05:47:17 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x559f2c45e6c0) operator()   moving buffer(0x559f2b2c8ca8 space 0x559f2b1d4900 0x0~1000 clean)
Oct  8 05:47:17 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x559f2c45e6c0) operator()   moving buffer(0x559f2b323a68 space 0x559f2b24d120 0x0~1000 clean)
Oct  8 05:47:17 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x559f2c45e6c0) operator()   moving buffer(0x559f2b2ee668 space 0x559f2b1d57a0 0x0~1000 clean)
Oct  8 05:47:17 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x559f2c45e6c0) operator()   moving buffer(0x559f2b2e3748 space 0x559f2b1d49d0 0x0~1000 clean)
Oct  8 05:47:17 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x559f2c45e6c0) operator()   moving buffer(0x559f2b2d0d48 space 0x559f2b1d5120 0x0~1000 clean)
Oct  8 05:47:17 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x559f2c45e6c0) operator()   moving buffer(0x559f2b2e3248 space 0x559f2b1d4aa0 0x0~1000 clean)
Oct  8 05:47:17 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x559f2c45e6c0) operator()   moving buffer(0x559f2abc8b68 space 0x559f2b190760 0x0~1000 clean)
Oct  8 05:47:17 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x559f2c45e6c0) operator()   moving buffer(0x559f2b2eed48 space 0x559f2b3277a0 0x0~1000 clean)
Oct  8 05:47:17 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x559f2c45e6c0) operator()   moving buffer(0x559f2abc8208 space 0x559f2b24dae0 0x0~1000 clean)
Oct  8 05:47:17 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x559f2c45e6c0) operator()   moving buffer(0x559f2b2e5068 space 0x559f2b24c4f0 0x0~1000 clean)
Oct  8 05:47:17 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:47:17 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f66280025c0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:47:17 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x559f2c45e6c0) operator()   moving buffer(0x559f2b2d1428 space 0x559f2b1d5460 0x0~1000 clean)
Oct  8 05:47:17 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x559f2c45e6c0) operator()   moving buffer(0x559f2abc9f68 space 0x559f2b0bad10 0x0~1000 clean)
Oct  8 05:47:17 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x559f2c45e6c0) operator()   moving buffer(0x559f2b306f28 space 0x559f2b1d4de0 0x0~1000 clean)
Oct  8 05:47:17 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x559f2c45e6c0) operator()   moving buffer(0x559f2b2eeac8 space 0x559f2b1d4d10 0x0~1000 clean)
Oct  8 05:47:17 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x559f2c45e6c0) operator()   moving buffer(0x559f2abc9b08 space 0x559f2b24d600 0x0~1000 clean)
Oct  8 05:47:17 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x559f2c45e6c0) operator()   moving buffer(0x559f2b2d1b08 space 0x559f2b1d5390 0x0~1000 clean)
Oct  8 05:47:17 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x559f2c45e6c0) operator()   moving buffer(0x559f2b2d07a8 space 0x559f2b326350 0x0~1000 clean)
Oct  8 05:47:17 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x559f2c45e6c0) operator()   moving buffer(0x559f2b2ee0c8 space 0x559f2b1d4c40 0x0~1000 clean)
Oct  8 05:47:17 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x559f2c45e6c0) operator()   moving buffer(0x559f2abc9748 space 0x559f2b3260e0 0x0~1000 clean)
Oct  8 05:47:17 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x559f2c45e6c0) operator()   moving buffer(0x559f2b06aa28 space 0x559f2b0ada10 0x0~1000 clean)
Oct  8 05:47:17 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x559f2c45e6c0) operator()   moving buffer(0x559f2b2ef4c8 space 0x559f2b327870 0x0~1000 clean)
Oct  8 05:47:17 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x559f2c45e6c0) operator()   moving buffer(0x559f2b2d1928 space 0x559f2b1d5530 0x0~1000 clean)
Oct  8 05:47:17 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x559f2c45e6c0) operator()   moving buffer(0x559f2abc9568 space 0x559f2b24d7a0 0x0~1000 clean)
Oct  8 05:47:17 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x559f2c45e6c0) operator()   moving buffer(0x559f2b2ef9c8 space 0x559f2b327940 0x0~1000 clean)
Oct  8 05:47:17 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x559f2c45e6c0) operator()   moving buffer(0x559f2abc9ba8 space 0x559f2b24d460 0x0~1000 clean)
Oct  8 05:47:17 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x559f2c45e6c0) operator()   moving buffer(0x559f2b306028 space 0x559f2b1d56d0 0x0~1000 clean)
Oct  8 05:47:17 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 55 pg[9.1( v 45'1018 lc 0'0 (0'0,45'1018] local-lis/les=38/39 n=6 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [1] r=0 lpr=54 pi=[38,54)/1 crt=45'1018 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:47:17 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 55 pg[9.7( v 45'1018 lc 0'0 (0'0,45'1018] local-lis/les=38/39 n=6 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [1] r=0 lpr=54 pi=[38,54)/1 crt=45'1018 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:47:17 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 55 pg[9.17( v 45'1018 lc 0'0 (0'0,45'1018] local-lis/les=38/39 n=5 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [1] r=0 lpr=54 pi=[38,54)/1 crt=45'1018 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:47:17 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 55 pg[9.16( v 45'1018 lc 0'0 (0'0,45'1018] local-lis/les=38/39 n=5 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [1] r=0 lpr=54 pi=[38,54)/1 crt=45'1018 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:47:17 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 55 pg[9.13( v 45'1018 lc 0'0 (0'0,45'1018] local-lis/les=38/39 n=5 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [1] r=0 lpr=54 pi=[38,54)/1 crt=45'1018 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:47:17 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 55 pg[9.1e( v 45'1018 lc 0'0 (0'0,45'1018] local-lis/les=38/39 n=5 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [1] r=0 lpr=54 pi=[38,54)/1 crt=45'1018 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:47:17 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 55 pg[9.10( v 45'1018 lc 0'0 (0'0,45'1018] local-lis/les=38/39 n=6 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [1] r=0 lpr=54 pi=[38,54)/1 crt=45'1018 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:47:17 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 55 pg[9.4( v 45'1018 lc 0'0 (0'0,45'1018] local-lis/les=38/39 n=6 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [1] r=0 lpr=54 pi=[38,54)/1 crt=45'1018 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:47:17 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 55 pg[9.b( v 45'1018 lc 0'0 (0'0,45'1018] local-lis/les=38/39 n=6 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [1] r=0 lpr=54 pi=[38,54)/1 crt=45'1018 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:47:17 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 55 pg[9.1d( v 45'1018 lc 0'0 (0'0,45'1018] local-lis/les=38/39 n=5 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [1] r=0 lpr=54 pi=[38,54)/1 crt=45'1018 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:47:17 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 55 pg[9.c( v 45'1018 lc 0'0 (0'0,45'1018] local-lis/les=38/39 n=6 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [1] r=0 lpr=54 pi=[38,54)/1 crt=45'1018 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:47:17 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 55 pg[9.a( v 45'1018 lc 0'0 (0'0,45'1018] local-lis/les=38/39 n=6 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [1] r=0 lpr=54 pi=[38,54)/1 crt=45'1018 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:47:17 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 55 pg[9.1b( v 45'1018 lc 0'0 (0'0,45'1018] local-lis/les=38/39 n=5 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [1] r=0 lpr=54 pi=[38,54)/1 crt=45'1018 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:47:17 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 55 pg[9.19( v 45'1018 lc 0'0 (0'0,45'1018] local-lis/les=38/39 n=5 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [1] r=0 lpr=54 pi=[38,54)/1 crt=45'1018 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:47:17 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 55 pg[9.3( v 45'1018 lc 0'0 (0'0,45'1018] local-lis/les=38/39 n=6 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [1] r=0 lpr=54 pi=[38,54)/1 crt=45'1018 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:47:17 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 55 pg[9.6( v 45'1018 lc 0'0 (0'0,45'1018] local-lis/les=38/39 n=6 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [1] r=0 lpr=54 pi=[38,54)/1 crt=45'1018 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:47:17 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 55 pg[9.e( v 45'1018 lc 0'0 (0'0,45'1018] local-lis/les=38/39 n=6 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [1] r=0 lpr=54 pi=[38,54)/1 crt=45'1018 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:47:17 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 55 pg[9.1f( v 45'1018 lc 0'0 (0'0,45'1018] local-lis/les=38/39 n=5 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [1] r=0 lpr=54 pi=[38,54)/1 crt=45'1018 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:47:17 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 55 pg[9.14( v 45'1018 lc 0'0 (0'0,45'1018] local-lis/les=38/39 n=5 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [1] r=0 lpr=54 pi=[38,54)/1 crt=45'1018 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:47:17 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 55 pg[9.15( v 45'1018 lc 0'0 (0'0,45'1018] local-lis/les=38/39 n=5 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [1] r=0 lpr=54 pi=[38,54)/1 crt=45'1018 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:47:17 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 55 pg[9.2( v 45'1018 lc 0'0 (0'0,45'1018] local-lis/les=38/39 n=6 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [1] r=0 lpr=54 pi=[38,54)/1 crt=45'1018 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:47:17 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 55 pg[9.18( v 45'1018 lc 0'0 (0'0,45'1018] local-lis/les=38/39 n=5 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [1] r=0 lpr=54 pi=[38,54)/1 crt=45'1018 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:47:17 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 55 pg[9.1a( v 45'1018 lc 0'0 (0'0,45'1018] local-lis/les=38/39 n=5 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [1] r=0 lpr=54 pi=[38,54)/1 crt=45'1018 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:47:17 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 55 pg[9.5( v 45'1018 lc 0'0 (0'0,45'1018] local-lis/les=38/39 n=6 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [1] r=0 lpr=54 pi=[38,54)/1 crt=45'1018 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:47:17 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 55 pg[9.9( v 45'1018 lc 0'0 (0'0,45'1018] local-lis/les=38/39 n=6 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [1] r=0 lpr=54 pi=[38,54)/1 crt=45'1018 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:47:17 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 55 pg[9.11( v 45'1018 lc 0'0 (0'0,45'1018] local-lis/les=38/39 n=6 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [1] r=0 lpr=54 pi=[38,54)/1 crt=45'1018 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:47:17 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 55 pg[9.d( v 45'1018 lc 0'0 (0'0,45'1018] local-lis/les=38/39 n=6 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [1] r=0 lpr=54 pi=[38,54)/1 crt=45'1018 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:47:17 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 55 pg[9.8( v 45'1018 lc 0'0 (0'0,45'1018] local-lis/les=38/39 n=6 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [1] r=0 lpr=54 pi=[38,54)/1 crt=45'1018 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:47:17 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 55 pg[9.f( v 45'1018 lc 0'0 (0'0,45'1018] local-lis/les=38/39 n=6 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [1] r=0 lpr=54 pi=[38,54)/1 crt=45'1018 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:47:17 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 55 pg[9.12( v 45'1018 lc 0'0 (0'0,45'1018] local-lis/les=38/39 n=6 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [1] r=0 lpr=54 pi=[38,54)/1 crt=45'1018 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:47:17 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 55 pg[9.1c( v 45'1018 lc 0'0 (0'0,45'1018] local-lis/les=38/39 n=5 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [1] r=0 lpr=54 pi=[38,54)/1 crt=45'1018 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:47:17 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:47:17 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6608003340 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:47:17 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[97382]: ts=2025-10-08T09:47:17.995Z caller=cluster.go:706 level=info component=cluster msg="gossip not settled" polls=0 before=0 now=1 elapsed=2.000075209s
Oct  8 05:47:18 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e55 do_prune osdmap full prune enabled
Oct  8 05:47:18 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Oct  8 05:47:18 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num", "val": "32"}]': finished
Oct  8 05:47:18 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e56 e56: 3 total, 3 up, 3 in
Oct  8 05:47:18 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e56: 3 total, 3 up, 3 in
Oct  8 05:47:18 np0005475493 ceph-mgr[73869]: [progress INFO root] update: starting ev e52731e3-e9d7-41cd-9989-1ba9708abc37 (PG autoscaler increasing pool 12 PGs from 1 to 32)
Oct  8 05:47:18 np0005475493 ceph-mgr[73869]: [progress INFO root] complete: finished ev f81b6bbc-4070-4d6d-ab15-864f1e35b4da (PG autoscaler increasing pool 8 PGs from 1 to 32)
Oct  8 05:47:18 np0005475493 ceph-mgr[73869]: [progress INFO root] Completed event f81b6bbc-4070-4d6d-ab15-864f1e35b4da (PG autoscaler increasing pool 8 PGs from 1 to 32) in 4 seconds
Oct  8 05:47:18 np0005475493 ceph-mgr[73869]: [progress INFO root] complete: finished ev e7286b65-9033-43ec-a2dd-3b3dd3094fdb (PG autoscaler increasing pool 9 PGs from 1 to 32)
Oct  8 05:47:18 np0005475493 ceph-mgr[73869]: [progress INFO root] Completed event e7286b65-9033-43ec-a2dd-3b3dd3094fdb (PG autoscaler increasing pool 9 PGs from 1 to 32) in 3 seconds
Oct  8 05:47:18 np0005475493 ceph-mgr[73869]: [progress INFO root] complete: finished ev 93b10f5d-f027-4d2d-852f-db5ecd9fbce7 (PG autoscaler increasing pool 10 PGs from 1 to 32)
Oct  8 05:47:18 np0005475493 ceph-mgr[73869]: [progress INFO root] Completed event 93b10f5d-f027-4d2d-852f-db5ecd9fbce7 (PG autoscaler increasing pool 10 PGs from 1 to 32) in 2 seconds
Oct  8 05:47:18 np0005475493 ceph-mgr[73869]: [progress INFO root] complete: finished ev 5180560c-0a09-4c25-9066-0eb3d77771f3 (PG autoscaler increasing pool 11 PGs from 1 to 32)
Oct  8 05:47:18 np0005475493 ceph-mgr[73869]: [progress INFO root] Completed event 5180560c-0a09-4c25-9066-0eb3d77771f3 (PG autoscaler increasing pool 11 PGs from 1 to 32) in 1 seconds
Oct  8 05:47:18 np0005475493 ceph-mgr[73869]: [progress INFO root] complete: finished ev e52731e3-e9d7-41cd-9989-1ba9708abc37 (PG autoscaler increasing pool 12 PGs from 1 to 32)
Oct  8 05:47:18 np0005475493 ceph-mgr[73869]: [progress INFO root] Completed event e52731e3-e9d7-41cd-9989-1ba9708abc37 (PG autoscaler increasing pool 12 PGs from 1 to 32) in 0 seconds
Oct  8 05:47:18 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 56 pg[9.15( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=5 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [1] r=0 lpr=54 pi=[38,54)/1 crt=45'1018 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:47:18 np0005475493 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct  8 05:47:18 np0005475493 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Oct  8 05:47:18 np0005475493 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num", "val": "32"}]: dispatch
Oct  8 05:47:18 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 56 pg[8.14( v 37'12 (0'0,37'12] local-lis/les=54/56 n=0 ec=54/36 lis/c=36/36 les/c/f=37/37/0 sis=54) [1] r=0 lpr=54 pi=[36,54)/1 crt=37'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:47:18 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 56 pg[9.17( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=5 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [1] r=0 lpr=54 pi=[38,54)/1 crt=45'1018 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:47:18 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 56 pg[9.14( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=5 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [1] r=0 lpr=54 pi=[38,54)/1 crt=45'1018 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:47:18 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 56 pg[8.16( v 37'12 (0'0,37'12] local-lis/les=54/56 n=0 ec=54/36 lis/c=36/36 les/c/f=37/37/0 sis=54) [1] r=0 lpr=54 pi=[36,54)/1 crt=37'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:47:18 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 56 pg[8.15( v 37'12 (0'0,37'12] local-lis/les=54/56 n=0 ec=54/36 lis/c=36/36 les/c/f=37/37/0 sis=54) [1] r=0 lpr=54 pi=[36,54)/1 crt=37'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:47:18 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 56 pg[9.16( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=5 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [1] r=0 lpr=54 pi=[38,54)/1 crt=45'1018 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:47:18 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 56 pg[8.17( v 37'12 (0'0,37'12] local-lis/les=54/56 n=0 ec=54/36 lis/c=36/36 les/c/f=37/37/0 sis=54) [1] r=0 lpr=54 pi=[36,54)/1 crt=37'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:47:18 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 56 pg[8.10( v 37'12 (0'0,37'12] local-lis/les=54/56 n=0 ec=54/36 lis/c=36/36 les/c/f=37/37/0 sis=54) [1] r=0 lpr=54 pi=[36,54)/1 crt=37'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:47:18 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 56 pg[9.11( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=6 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [1] r=0 lpr=54 pi=[38,54)/1 crt=45'1018 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:47:18 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 56 pg[9.10( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=6 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [1] r=0 lpr=54 pi=[38,54)/1 crt=45'1018 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:47:18 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 56 pg[9.3( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=6 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [1] r=0 lpr=54 pi=[38,54)/1 crt=45'1018 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:47:18 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 56 pg[9.2( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=6 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [1] r=0 lpr=54 pi=[38,54)/1 crt=45'1018 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:47:18 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 56 pg[8.2( v 37'12 (0'0,37'12] local-lis/les=54/56 n=1 ec=54/36 lis/c=36/36 les/c/f=37/37/0 sis=54) [1] r=0 lpr=54 pi=[36,54)/1 crt=37'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:47:18 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 56 pg[8.11( v 37'12 (0'0,37'12] local-lis/les=54/56 n=0 ec=54/36 lis/c=36/36 les/c/f=37/37/0 sis=54) [1] r=0 lpr=54 pi=[36,54)/1 crt=37'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:47:18 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 56 pg[8.f( v 37'12 (0'0,37'12] local-lis/les=54/56 n=0 ec=54/36 lis/c=36/36 les/c/f=37/37/0 sis=54) [1] r=0 lpr=54 pi=[36,54)/1 crt=37'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:47:18 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 56 pg[8.8( v 37'12 (0'0,37'12] local-lis/les=54/56 n=0 ec=54/36 lis/c=36/36 les/c/f=37/37/0 sis=54) [1] r=0 lpr=54 pi=[36,54)/1 crt=37'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:47:18 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 56 pg[9.8( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=6 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [1] r=0 lpr=54 pi=[38,54)/1 crt=45'1018 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:47:18 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 56 pg[8.9( v 37'12 (0'0,37'12] local-lis/les=54/56 n=0 ec=54/36 lis/c=36/36 les/c/f=37/37/0 sis=54) [1] r=0 lpr=54 pi=[36,54)/1 crt=37'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:47:18 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 56 pg[9.9( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=6 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [1] r=0 lpr=54 pi=[38,54)/1 crt=45'1018 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:47:18 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 56 pg[9.b( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=6 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [1] r=0 lpr=54 pi=[38,54)/1 crt=45'1018 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:47:18 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 56 pg[8.a( v 37'12 (0'0,37'12] local-lis/les=54/56 n=0 ec=54/36 lis/c=36/36 les/c/f=37/37/0 sis=54) [1] r=0 lpr=54 pi=[36,54)/1 crt=37'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:47:18 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 56 pg[8.e( v 37'12 (0'0,37'12] local-lis/les=54/56 n=0 ec=54/36 lis/c=36/36 les/c/f=37/37/0 sis=54) [1] r=0 lpr=54 pi=[36,54)/1 crt=37'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:47:18 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 56 pg[9.f( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=6 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [1] r=0 lpr=54 pi=[38,54)/1 crt=45'1018 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:47:18 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 56 pg[9.c( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=6 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [1] r=0 lpr=54 pi=[38,54)/1 crt=45'1018 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:47:18 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 56 pg[8.d( v 37'12 (0'0,37'12] local-lis/les=54/56 n=0 ec=54/36 lis/c=36/36 les/c/f=37/37/0 sis=54) [1] r=0 lpr=54 pi=[36,54)/1 crt=37'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:47:18 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 56 pg[9.d( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=6 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [1] r=0 lpr=54 pi=[38,54)/1 crt=45'1018 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:47:18 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 56 pg[8.3( v 37'12 (0'0,37'12] local-lis/les=54/56 n=1 ec=54/36 lis/c=36/36 les/c/f=37/37/0 sis=54) [1] r=0 lpr=54 pi=[36,54)/1 crt=37'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:47:18 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 56 pg[8.c( v 37'12 (0'0,37'12] local-lis/les=54/56 n=0 ec=54/36 lis/c=36/36 les/c/f=37/37/0 sis=54) [1] r=0 lpr=54 pi=[36,54)/1 crt=37'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:47:18 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 56 pg[9.a( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=6 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [1] r=0 lpr=54 pi=[38,54)/1 crt=45'1018 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:47:18 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 56 pg[8.b( v 37'12 (0'0,37'12] local-lis/les=54/56 n=0 ec=54/36 lis/c=36/36 les/c/f=37/37/0 sis=54) [1] r=0 lpr=54 pi=[36,54)/1 crt=37'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:47:18 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 56 pg[9.e( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=6 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [1] r=0 lpr=54 pi=[38,54)/1 crt=45'1018 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:47:18 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 56 pg[8.1( v 37'12 (0'0,37'12] local-lis/les=54/56 n=1 ec=54/36 lis/c=36/36 les/c/f=37/37/0 sis=54) [1] r=0 lpr=54 pi=[36,54)/1 crt=37'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:47:18 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 56 pg[9.0( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=5 ec=38/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [1] r=0 lpr=54 pi=[38,54)/1 crt=45'1018 lcod 45'1017 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:47:18 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 56 pg[9.1( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=6 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [1] r=0 lpr=54 pi=[38,54)/1 crt=45'1018 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:47:18 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 56 pg[9.6( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=6 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [1] r=0 lpr=54 pi=[38,54)/1 crt=45'1018 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:47:18 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 56 pg[8.0( v 37'12 (0'0,37'12] local-lis/les=54/56 n=0 ec=36/36 lis/c=36/36 les/c/f=37/37/0 sis=54) [1] r=0 lpr=54 pi=[36,54)/1 crt=37'12 lcod 37'11 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:47:18 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 56 pg[8.6( v 37'12 (0'0,37'12] local-lis/les=54/56 n=1 ec=54/36 lis/c=36/36 les/c/f=37/37/0 sis=54) [1] r=0 lpr=54 pi=[36,54)/1 crt=37'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:47:18 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 56 pg[9.7( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=6 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [1] r=0 lpr=54 pi=[38,54)/1 crt=45'1018 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:47:18 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 56 pg[8.7( v 37'12 (0'0,37'12] local-lis/les=54/56 n=0 ec=54/36 lis/c=36/36 les/c/f=37/37/0 sis=54) [1] r=0 lpr=54 pi=[36,54)/1 crt=37'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:47:18 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 56 pg[8.4( v 37'12 (0'0,37'12] local-lis/les=54/56 n=1 ec=54/36 lis/c=36/36 les/c/f=37/37/0 sis=54) [1] r=0 lpr=54 pi=[36,54)/1 crt=37'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:47:18 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 56 pg[9.1a( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=5 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [1] r=0 lpr=54 pi=[38,54)/1 crt=45'1018 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:47:18 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 56 pg[8.1b( v 37'12 (0'0,37'12] local-lis/les=54/56 n=0 ec=54/36 lis/c=36/36 les/c/f=37/37/0 sis=54) [1] r=0 lpr=54 pi=[36,54)/1 crt=37'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:47:18 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 56 pg[9.1b( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=5 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [1] r=0 lpr=54 pi=[38,54)/1 crt=45'1018 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:47:18 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 56 pg[8.1a( v 37'12 (0'0,37'12] local-lis/les=54/56 n=0 ec=54/36 lis/c=36/36 les/c/f=37/37/0 sis=54) [1] r=0 lpr=54 pi=[36,54)/1 crt=37'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:47:18 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 56 pg[9.4( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=6 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [1] r=0 lpr=54 pi=[38,54)/1 crt=45'1018 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:47:18 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 56 pg[9.5( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=6 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [1] r=0 lpr=54 pi=[38,54)/1 crt=45'1018 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:47:18 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 56 pg[8.19( v 37'12 (0'0,37'12] local-lis/les=54/56 n=0 ec=54/36 lis/c=36/36 les/c/f=37/37/0 sis=54) [1] r=0 lpr=54 pi=[36,54)/1 crt=37'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:47:18 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 56 pg[8.18( v 37'12 (0'0,37'12] local-lis/les=54/56 n=0 ec=54/36 lis/c=36/36 les/c/f=37/37/0 sis=54) [1] r=0 lpr=54 pi=[36,54)/1 crt=37'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:47:18 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 56 pg[9.1e( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=5 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [1] r=0 lpr=54 pi=[38,54)/1 crt=45'1018 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:47:18 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 56 pg[8.1f( v 37'12 (0'0,37'12] local-lis/les=54/56 n=0 ec=54/36 lis/c=36/36 les/c/f=37/37/0 sis=54) [1] r=0 lpr=54 pi=[36,54)/1 crt=37'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:47:18 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 56 pg[9.1f( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=5 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [1] r=0 lpr=54 pi=[38,54)/1 crt=45'1018 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:47:18 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 56 pg[9.18( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=5 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [1] r=0 lpr=54 pi=[38,54)/1 crt=45'1018 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:47:18 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 56 pg[9.1c( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=5 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [1] r=0 lpr=54 pi=[38,54)/1 crt=45'1018 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:47:18 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 56 pg[9.1d( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=5 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [1] r=0 lpr=54 pi=[38,54)/1 crt=45'1018 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:47:18 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 56 pg[8.1d( v 37'12 (0'0,37'12] local-lis/les=54/56 n=0 ec=54/36 lis/c=36/36 les/c/f=37/37/0 sis=54) [1] r=0 lpr=54 pi=[36,54)/1 crt=37'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:47:18 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 56 pg[8.1c( v 37'12 (0'0,37'12] local-lis/les=54/56 n=0 ec=54/36 lis/c=36/36 les/c/f=37/37/0 sis=54) [1] r=0 lpr=54 pi=[36,54)/1 crt=37'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:47:18 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 56 pg[8.5( v 37'12 (0'0,37'12] local-lis/les=54/56 n=1 ec=54/36 lis/c=36/36 les/c/f=37/37/0 sis=54) [1] r=0 lpr=54 pi=[36,54)/1 crt=37'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:47:18 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 56 pg[9.12( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=6 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [1] r=0 lpr=54 pi=[38,54)/1 crt=45'1018 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:47:18 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 56 pg[8.1e( v 37'12 (0'0,37'12] local-lis/les=54/56 n=0 ec=54/36 lis/c=36/36 les/c/f=37/37/0 sis=54) [1] r=0 lpr=54 pi=[36,54)/1 crt=37'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:47:18 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 56 pg[8.13( v 37'12 (0'0,37'12] local-lis/les=54/56 n=0 ec=54/36 lis/c=36/36 les/c/f=37/37/0 sis=54) [1] r=0 lpr=54 pi=[36,54)/1 crt=37'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:47:18 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 56 pg[9.13( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=5 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [1] r=0 lpr=54 pi=[38,54)/1 crt=45'1018 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:47:18 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 56 pg[8.12( v 37'12 (0'0,37'12] local-lis/les=54/56 n=0 ec=54/36 lis/c=36/36 les/c/f=37/37/0 sis=54) [1] r=0 lpr=54 pi=[36,54)/1 crt=37'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:47:18 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 56 pg[9.19( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=5 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [1] r=0 lpr=54 pi=[38,54)/1 crt=45'1018 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:47:18 np0005475493 ceph-mgr[73869]: [progress INFO root] Writing back 22 completed events
Oct  8 05:47:18 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct  8 05:47:18 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:47:18 np0005475493 ceph-mgr[73869]: [progress WARNING root] Starting Global Recovery Event,93 pgs not in active + clean state
Oct  8 05:47:18 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 9.15 scrub starts
Oct  8 05:47:18 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e56 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  8 05:47:18 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 9.15 scrub ok
Oct  8 05:47:18 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:47:18 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6608003340 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:47:19 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v45: 291 pgs: 93 unknown, 198 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Oct  8 05:47:19 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num_actual", "val": "32"} v 0)
Oct  8 05:47:19 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct  8 05:47:19 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"} v 0)
Oct  8 05:47:19 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct  8 05:47:19 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e56 do_prune osdmap full prune enabled
Oct  8 05:47:19 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num_actual", "val": "32"}]': finished
Oct  8 05:47:19 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Oct  8 05:47:19 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e57 e57: 3 total, 3 up, 3 in
Oct  8 05:47:19 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e57: 3 total, 3 up, 3 in
Oct  8 05:47:19 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:47:19 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6614003f10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:47:19 np0005475493 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Oct  8 05:47:19 np0005475493 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num", "val": "32"}]': finished
Oct  8 05:47:19 np0005475493 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:47:19 np0005475493 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct  8 05:47:19 np0005475493 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct  8 05:47:19 np0005475493 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num_actual", "val": "32"}]': finished
Oct  8 05:47:19 np0005475493 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Oct  8 05:47:19 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 9.17 scrub starts
Oct  8 05:47:19 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:47:19 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f66280025c0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:47:19 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 57 pg[11.0( empty local-lis/les=42/43 n=0 ec=42/42 lis/c=42/42 les/c/f=43/43/0 sis=57 pruub=15.975506783s) [1] r=0 lpr=57 pi=[42,57)/1 crt=0'0 mlcod 0'0 active pruub 183.588607788s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:47:19 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 57 pg[11.0( empty local-lis/les=42/43 n=0 ec=42/42 lis/c=42/42 les/c/f=43/43/0 sis=57 pruub=15.975506783s) [1] r=0 lpr=57 pi=[42,57)/1 crt=0'0 mlcod 0'0 unknown pruub 183.588607788s@ mbc={}] state<Start>: transitioning to Primary
Oct  8 05:47:19 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 9.17 scrub ok
Oct  8 05:47:20 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e57 do_prune osdmap full prune enabled
Oct  8 05:47:20 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e58 e58: 3 total, 3 up, 3 in
Oct  8 05:47:20 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e58: 3 total, 3 up, 3 in
Oct  8 05:47:20 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 58 pg[11.17( empty local-lis/les=42/43 n=0 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:47:20 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 58 pg[11.16( empty local-lis/les=42/43 n=0 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:47:20 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 58 pg[11.15( empty local-lis/les=42/43 n=0 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:47:20 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 58 pg[11.14( empty local-lis/les=42/43 n=0 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:47:20 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 58 pg[11.13( empty local-lis/les=42/43 n=0 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:47:20 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 58 pg[11.12( empty local-lis/les=42/43 n=0 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:47:20 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 58 pg[11.c( empty local-lis/les=42/43 n=0 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:47:20 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 58 pg[11.b( empty local-lis/les=42/43 n=0 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:47:20 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 58 pg[11.1( empty local-lis/les=42/43 n=0 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:47:20 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 58 pg[11.a( empty local-lis/les=42/43 n=0 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:47:20 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 58 pg[11.9( empty local-lis/les=42/43 n=0 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:47:20 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 58 pg[11.d( empty local-lis/les=42/43 n=0 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:47:20 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 58 pg[11.e( empty local-lis/les=42/43 n=0 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:47:20 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 58 pg[11.f( empty local-lis/les=42/43 n=0 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:47:20 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 58 pg[11.8( empty local-lis/les=42/43 n=0 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:47:20 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 58 pg[11.2( empty local-lis/les=42/43 n=0 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:47:20 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 58 pg[11.3( empty local-lis/les=42/43 n=0 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:47:20 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 58 pg[11.4( empty local-lis/les=42/43 n=0 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:47:20 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 58 pg[11.6( empty local-lis/les=42/43 n=0 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:47:20 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 58 pg[11.7( empty local-lis/les=42/43 n=0 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:47:20 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 58 pg[11.18( empty local-lis/les=42/43 n=0 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:47:20 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 58 pg[11.5( empty local-lis/les=42/43 n=0 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:47:20 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 58 pg[11.19( empty local-lis/les=42/43 n=0 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:47:20 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 58 pg[11.1a( empty local-lis/les=42/43 n=0 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:47:20 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 58 pg[11.1b( empty local-lis/les=42/43 n=0 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:47:20 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 58 pg[11.1c( empty local-lis/les=42/43 n=0 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:47:20 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 58 pg[11.1d( empty local-lis/les=42/43 n=0 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:47:20 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 58 pg[11.1e( empty local-lis/les=42/43 n=0 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:47:20 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 58 pg[11.1f( empty local-lis/les=42/43 n=0 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:47:20 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 58 pg[11.10( empty local-lis/les=42/43 n=0 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:47:20 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 58 pg[11.11( empty local-lis/les=42/43 n=0 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:47:20 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 58 pg[11.17( empty local-lis/les=57/58 n=0 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:47:20 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 58 pg[11.16( empty local-lis/les=57/58 n=0 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:47:20 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 58 pg[11.15( empty local-lis/les=57/58 n=0 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:47:20 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 58 pg[11.14( empty local-lis/les=57/58 n=0 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:47:20 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 58 pg[11.13( empty local-lis/les=57/58 n=0 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:47:20 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 58 pg[11.12( empty local-lis/les=57/58 n=0 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:47:20 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 58 pg[11.0( empty local-lis/les=57/58 n=0 ec=42/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:47:20 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 58 pg[11.c( empty local-lis/les=57/58 n=0 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:47:20 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 58 pg[11.b( empty local-lis/les=57/58 n=0 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:47:20 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 58 pg[11.a( empty local-lis/les=57/58 n=0 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:47:20 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 58 pg[11.9( empty local-lis/les=57/58 n=0 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:47:20 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 58 pg[11.1( empty local-lis/les=57/58 n=0 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:47:20 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 58 pg[11.d( empty local-lis/les=57/58 n=0 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:47:20 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 58 pg[11.e( empty local-lis/les=57/58 n=0 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:47:20 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 58 pg[11.f( empty local-lis/les=57/58 n=0 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:47:20 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 58 pg[11.8( empty local-lis/les=57/58 n=0 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:47:20 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 58 pg[11.2( empty local-lis/les=57/58 n=0 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:47:20 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 58 pg[11.3( empty local-lis/les=57/58 n=0 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:47:20 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 58 pg[11.4( empty local-lis/les=57/58 n=0 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:47:20 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 58 pg[11.6( empty local-lis/les=57/58 n=0 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:47:20 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 58 pg[11.5( empty local-lis/les=57/58 n=0 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:47:20 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 58 pg[11.7( empty local-lis/les=57/58 n=0 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:47:20 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 58 pg[11.19( empty local-lis/les=57/58 n=0 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:47:20 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 58 pg[11.1a( empty local-lis/les=57/58 n=0 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:47:20 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 58 pg[11.1c( empty local-lis/les=57/58 n=0 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:47:20 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 58 pg[11.1d( empty local-lis/les=57/58 n=0 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:47:20 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 58 pg[11.1e( empty local-lis/les=57/58 n=0 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:47:20 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 58 pg[11.1f( empty local-lis/les=57/58 n=0 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:47:20 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 58 pg[11.10( empty local-lis/les=57/58 n=0 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:47:20 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 58 pg[11.18( empty local-lis/les=57/58 n=0 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:47:20 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 58 pg[11.1b( empty local-lis/les=57/58 n=0 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:47:20 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 58 pg[11.11( empty local-lis/les=57/58 n=0 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:47:20 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 9.14 scrub starts
Oct  8 05:47:20 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 9.14 scrub ok
Oct  8 05:47:20 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:47:20 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f66280025c0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:47:21 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v48: 353 pgs: 1 peering, 93 unknown, 259 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Oct  8 05:47:21 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:47:21 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f66280025c0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:47:21 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 8.16 scrub starts
Oct  8 05:47:21 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 8.16 scrub ok
Oct  8 05:47:21 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:47:21 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6614003f10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:47:21 np0005475493 podman[97496]: 2025-10-08 09:47:21.882297998 +0000 UTC m=+5.073190577 container create e991218ef3d77ff81f4a0956b6966115402430f47ed23b9b66696c8b685d0c51 (image=quay.io/ceph/grafana:10.4.0, name=adoring_feistel, maintainer=Grafana Labs <hello@grafana.com>)
Oct  8 05:47:21 np0005475493 systemd[1]: Started libpod-conmon-e991218ef3d77ff81f4a0956b6966115402430f47ed23b9b66696c8b685d0c51.scope.
Oct  8 05:47:21 np0005475493 podman[97496]: 2025-10-08 09:47:21.86398995 +0000 UTC m=+5.054882549 image pull c8b91775d855b99270fc5d22f3c6737e8cca01ef4c25c8b0362295e0746fa39b quay.io/ceph/grafana:10.4.0
Oct  8 05:47:21 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:47:21 np0005475493 podman[97496]: 2025-10-08 09:47:21.965338046 +0000 UTC m=+5.156230635 container init e991218ef3d77ff81f4a0956b6966115402430f47ed23b9b66696c8b685d0c51 (image=quay.io/ceph/grafana:10.4.0, name=adoring_feistel, maintainer=Grafana Labs <hello@grafana.com>)
Oct  8 05:47:21 np0005475493 podman[97496]: 2025-10-08 09:47:21.972596546 +0000 UTC m=+5.163489135 container start e991218ef3d77ff81f4a0956b6966115402430f47ed23b9b66696c8b685d0c51 (image=quay.io/ceph/grafana:10.4.0, name=adoring_feistel, maintainer=Grafana Labs <hello@grafana.com>)
Oct  8 05:47:21 np0005475493 podman[97496]: 2025-10-08 09:47:21.976624922 +0000 UTC m=+5.167517511 container attach e991218ef3d77ff81f4a0956b6966115402430f47ed23b9b66696c8b685d0c51 (image=quay.io/ceph/grafana:10.4.0, name=adoring_feistel, maintainer=Grafana Labs <hello@grafana.com>)
Oct  8 05:47:21 np0005475493 adoring_feistel[97718]: 472 0
Oct  8 05:47:21 np0005475493 systemd[1]: libpod-e991218ef3d77ff81f4a0956b6966115402430f47ed23b9b66696c8b685d0c51.scope: Deactivated successfully.
Oct  8 05:47:21 np0005475493 podman[97496]: 2025-10-08 09:47:21.978700759 +0000 UTC m=+5.169593328 container died e991218ef3d77ff81f4a0956b6966115402430f47ed23b9b66696c8b685d0c51 (image=quay.io/ceph/grafana:10.4.0, name=adoring_feistel, maintainer=Grafana Labs <hello@grafana.com>)
Oct  8 05:47:22 np0005475493 systemd[1]: var-lib-containers-storage-overlay-72bb59340fec070e33f6e73b735c61ee0a23b95db8b46154866420f69b84dcf5-merged.mount: Deactivated successfully.
Oct  8 05:47:22 np0005475493 podman[97496]: 2025-10-08 09:47:22.031652599 +0000 UTC m=+5.222545178 container remove e991218ef3d77ff81f4a0956b6966115402430f47ed23b9b66696c8b685d0c51 (image=quay.io/ceph/grafana:10.4.0, name=adoring_feistel, maintainer=Grafana Labs <hello@grafana.com>)
Oct  8 05:47:22 np0005475493 systemd[1]: libpod-conmon-e991218ef3d77ff81f4a0956b6966115402430f47ed23b9b66696c8b685d0c51.scope: Deactivated successfully.
Oct  8 05:47:22 np0005475493 podman[97735]: 2025-10-08 09:47:22.097355171 +0000 UTC m=+0.041841871 container create 2d9f2683e24337120c32f398115d175038ee03cf7db3bbf27d44f93018c66e61 (image=quay.io/ceph/grafana:10.4.0, name=fervent_hopper, maintainer=Grafana Labs <hello@grafana.com>)
Oct  8 05:47:22 np0005475493 systemd[1]: Started libpod-conmon-2d9f2683e24337120c32f398115d175038ee03cf7db3bbf27d44f93018c66e61.scope.
Oct  8 05:47:22 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:47:22 np0005475493 podman[97735]: 2025-10-08 09:47:22.150844858 +0000 UTC m=+0.095331588 container init 2d9f2683e24337120c32f398115d175038ee03cf7db3bbf27d44f93018c66e61 (image=quay.io/ceph/grafana:10.4.0, name=fervent_hopper, maintainer=Grafana Labs <hello@grafana.com>)
Oct  8 05:47:22 np0005475493 podman[97735]: 2025-10-08 09:47:22.15596638 +0000 UTC m=+0.100453130 container start 2d9f2683e24337120c32f398115d175038ee03cf7db3bbf27d44f93018c66e61 (image=quay.io/ceph/grafana:10.4.0, name=fervent_hopper, maintainer=Grafana Labs <hello@grafana.com>)
Oct  8 05:47:22 np0005475493 fervent_hopper[97751]: 472 0
Oct  8 05:47:22 np0005475493 systemd[1]: libpod-2d9f2683e24337120c32f398115d175038ee03cf7db3bbf27d44f93018c66e61.scope: Deactivated successfully.
Oct  8 05:47:22 np0005475493 podman[97735]: 2025-10-08 09:47:22.161634188 +0000 UTC m=+0.106120898 container attach 2d9f2683e24337120c32f398115d175038ee03cf7db3bbf27d44f93018c66e61 (image=quay.io/ceph/grafana:10.4.0, name=fervent_hopper, maintainer=Grafana Labs <hello@grafana.com>)
Oct  8 05:47:22 np0005475493 podman[97735]: 2025-10-08 09:47:22.16200217 +0000 UTC m=+0.106488880 container died 2d9f2683e24337120c32f398115d175038ee03cf7db3bbf27d44f93018c66e61 (image=quay.io/ceph/grafana:10.4.0, name=fervent_hopper, maintainer=Grafana Labs <hello@grafana.com>)
Oct  8 05:47:22 np0005475493 podman[97735]: 2025-10-08 09:47:22.079262631 +0000 UTC m=+0.023749361 image pull c8b91775d855b99270fc5d22f3c6737e8cca01ef4c25c8b0362295e0746fa39b quay.io/ceph/grafana:10.4.0
Oct  8 05:47:22 np0005475493 systemd[1]: var-lib-containers-storage-overlay-a014035d7ebe0cf60c822fbfe58852474930720f1424ae3978fdb2ca08872ad6-merged.mount: Deactivated successfully.
Oct  8 05:47:22 np0005475493 podman[97735]: 2025-10-08 09:47:22.203277203 +0000 UTC m=+0.147763943 container remove 2d9f2683e24337120c32f398115d175038ee03cf7db3bbf27d44f93018c66e61 (image=quay.io/ceph/grafana:10.4.0, name=fervent_hopper, maintainer=Grafana Labs <hello@grafana.com>)
Oct  8 05:47:22 np0005475493 systemd[1]: libpod-conmon-2d9f2683e24337120c32f398115d175038ee03cf7db3bbf27d44f93018c66e61.scope: Deactivated successfully.
Oct  8 05:47:22 np0005475493 systemd[1]: Reloading.
Oct  8 05:47:22 np0005475493 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  8 05:47:22 np0005475493 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  8 05:47:22 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 8.15 scrub starts
Oct  8 05:47:22 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 8.15 scrub ok
Oct  8 05:47:22 np0005475493 systemd[1]: Reloading.
Oct  8 05:47:22 np0005475493 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  8 05:47:22 np0005475493 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  8 05:47:22 np0005475493 python3[97831]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v19 --fsid 787292cc-8154-50c4-9e00-e9be3e817149 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user info --uid openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  8 05:47:22 np0005475493 podman[97869]: 2025-10-08 09:47:22.855897998 +0000 UTC m=+0.042225583 container create 50f1c0b74f27cdc0e4078d5c8aa1f387160fda33bc20096de6b9010beda70ead (image=quay.io/ceph/ceph:v19, name=cool_turing, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct  8 05:47:22 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:47:22 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6600000b60 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:47:22 np0005475493 systemd[1]: Started libpod-conmon-50f1c0b74f27cdc0e4078d5c8aa1f387160fda33bc20096de6b9010beda70ead.scope.
Oct  8 05:47:22 np0005475493 systemd[1]: Starting Ceph grafana.compute-0 for 787292cc-8154-50c4-9e00-e9be3e817149...
Oct  8 05:47:22 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:47:22 np0005475493 podman[97869]: 2025-10-08 09:47:22.837285111 +0000 UTC m=+0.023612716 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  8 05:47:22 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f750cfd5cc883c54b45ef76d6b79714621ed94c20e0f08145e2cd8bf14557cf/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 05:47:22 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f750cfd5cc883c54b45ef76d6b79714621ed94c20e0f08145e2cd8bf14557cf/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 05:47:22 np0005475493 podman[97869]: 2025-10-08 09:47:22.95011547 +0000 UTC m=+0.136443085 container init 50f1c0b74f27cdc0e4078d5c8aa1f387160fda33bc20096de6b9010beda70ead (image=quay.io/ceph/ceph:v19, name=cool_turing, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  8 05:47:22 np0005475493 podman[97869]: 2025-10-08 09:47:22.956087059 +0000 UTC m=+0.142414644 container start 50f1c0b74f27cdc0e4078d5c8aa1f387160fda33bc20096de6b9010beda70ead (image=quay.io/ceph/ceph:v19, name=cool_turing, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Oct  8 05:47:22 np0005475493 podman[97869]: 2025-10-08 09:47:22.959128055 +0000 UTC m=+0.145455640 container attach 50f1c0b74f27cdc0e4078d5c8aa1f387160fda33bc20096de6b9010beda70ead (image=quay.io/ceph/ceph:v19, name=cool_turing, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  8 05:47:23 np0005475493 cool_turing[97887]: could not fetch user info: no user info saved
Oct  8 05:47:23 np0005475493 podman[98013]: 2025-10-08 09:47:23.119689169 +0000 UTC m=+0.042957495 container create 56b2a7b6ecfb4c45a674b20b3f1d60f53e5d4fac0af3b6685d98e6cab4ce59f5 (image=quay.io/ceph/grafana:10.4.0, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Oct  8 05:47:23 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/299d1132a49e90b1d598865e6a36f1a7dd2aea77757b20cf4893ea1efcfcb275/merged/etc/grafana/grafana.ini supports timestamps until 2038 (0x7fffffff)
Oct  8 05:47:23 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/299d1132a49e90b1d598865e6a36f1a7dd2aea77757b20cf4893ea1efcfcb275/merged/etc/grafana/certs supports timestamps until 2038 (0x7fffffff)
Oct  8 05:47:23 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/299d1132a49e90b1d598865e6a36f1a7dd2aea77757b20cf4893ea1efcfcb275/merged/var/lib/grafana/grafana.db supports timestamps until 2038 (0x7fffffff)
Oct  8 05:47:23 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/299d1132a49e90b1d598865e6a36f1a7dd2aea77757b20cf4893ea1efcfcb275/merged/etc/grafana/provisioning/datasources supports timestamps until 2038 (0x7fffffff)
Oct  8 05:47:23 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/299d1132a49e90b1d598865e6a36f1a7dd2aea77757b20cf4893ea1efcfcb275/merged/etc/grafana/provisioning/dashboards supports timestamps until 2038 (0x7fffffff)
Oct  8 05:47:23 np0005475493 systemd[1]: libpod-50f1c0b74f27cdc0e4078d5c8aa1f387160fda33bc20096de6b9010beda70ead.scope: Deactivated successfully.
Oct  8 05:47:23 np0005475493 conmon[97887]: conmon 50f1c0b74f27cdc0e407 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-50f1c0b74f27cdc0e4078d5c8aa1f387160fda33bc20096de6b9010beda70ead.scope/container/memory.events
Oct  8 05:47:23 np0005475493 podman[97869]: 2025-10-08 09:47:23.186498516 +0000 UTC m=+0.372826111 container died 50f1c0b74f27cdc0e4078d5c8aa1f387160fda33bc20096de6b9010beda70ead (image=quay.io/ceph/ceph:v19, name=cool_turing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct  8 05:47:23 np0005475493 podman[98013]: 2025-10-08 09:47:23.187441176 +0000 UTC m=+0.110709482 container init 56b2a7b6ecfb4c45a674b20b3f1d60f53e5d4fac0af3b6685d98e6cab4ce59f5 (image=quay.io/ceph/grafana:10.4.0, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Oct  8 05:47:23 np0005475493 podman[98013]: 2025-10-08 09:47:23.192382562 +0000 UTC m=+0.115650858 container start 56b2a7b6ecfb4c45a674b20b3f1d60f53e5d4fac0af3b6685d98e6cab4ce59f5 (image=quay.io/ceph/grafana:10.4.0, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Oct  8 05:47:23 np0005475493 podman[98013]: 2025-10-08 09:47:23.098045687 +0000 UTC m=+0.021314013 image pull c8b91775d855b99270fc5d22f3c6737e8cca01ef4c25c8b0362295e0746fa39b quay.io/ceph/grafana:10.4.0
Oct  8 05:47:23 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v49: 353 pgs: 1 peering, 93 unknown, 259 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Oct  8 05:47:23 np0005475493 bash[98013]: 56b2a7b6ecfb4c45a674b20b3f1d60f53e5d4fac0af3b6685d98e6cab4ce59f5
Oct  8 05:47:23 np0005475493 systemd[1]: Started Ceph grafana.compute-0 for 787292cc-8154-50c4-9e00-e9be3e817149.
Oct  8 05:47:23 np0005475493 systemd[1]: var-lib-containers-storage-overlay-9f750cfd5cc883c54b45ef76d6b79714621ed94c20e0f08145e2cd8bf14557cf-merged.mount: Deactivated successfully.
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:47:23 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f8000d90 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:47:23 np0005475493 podman[97869]: 2025-10-08 09:47:23.240968045 +0000 UTC m=+0.427295630 container remove 50f1c0b74f27cdc0e4078d5c8aa1f387160fda33bc20096de6b9010beda70ead (image=quay.io/ceph/ceph:v19, name=cool_turing, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  8 05:47:23 np0005475493 systemd[1]: libpod-conmon-50f1c0b74f27cdc0e4078d5c8aa1f387160fda33bc20096de6b9010beda70ead.scope: Deactivated successfully.
Oct  8 05:47:23 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  8 05:47:23 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:47:23 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  8 05:47:23 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:47:23 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.grafana}] v 0)
Oct  8 05:47:23 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:47:23 np0005475493 ceph-mgr[73869]: [progress INFO root] complete: finished ev 9550db9d-3c92-4760-9334-11f23ea86e6f (Updating grafana deployment (+1 -> 1))
Oct  8 05:47:23 np0005475493 ceph-mgr[73869]: [progress INFO root] Completed event 9550db9d-3c92-4760-9334-11f23ea86e6f (Updating grafana deployment (+1 -> 1)) in 7 seconds
Oct  8 05:47:23 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.grafana}] v 0)
Oct  8 05:47:23 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:47:23 np0005475493 ceph-mgr[73869]: [progress INFO root] update: starting ev eb90faac-447e-4af6-82aa-528626b39460 (Updating ingress.rgw.default deployment (+4 -> 4))
Oct  8 05:47:23 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ingress.rgw.default/monitor_password}] v 0)
Oct  8 05:47:23 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:47:23 np0005475493 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Deploying daemon haproxy.rgw.default.compute-0.zadvee on compute-0
Oct  8 05:47:23 np0005475493 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Deploying daemon haproxy.rgw.default.compute-0.zadvee on compute-0
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=settings t=2025-10-08T09:47:23.363525741Z level=info msg="Starting Grafana" version=10.4.0 commit=03f502a94d17f7dc4e6c34acdf8428aedd986e4c branch=HEAD compiled=2025-10-08T09:47:23Z
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=settings t=2025-10-08T09:47:23.363760508Z level=info msg="Config loaded from" file=/usr/share/grafana/conf/defaults.ini
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=settings t=2025-10-08T09:47:23.363772358Z level=info msg="Config loaded from" file=/etc/grafana/grafana.ini
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=settings t=2025-10-08T09:47:23.363776389Z level=info msg="Config overridden from command line" arg="default.paths.data=/var/lib/grafana"
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=settings t=2025-10-08T09:47:23.363779909Z level=info msg="Config overridden from command line" arg="default.paths.logs=/var/log/grafana"
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=settings t=2025-10-08T09:47:23.363783469Z level=info msg="Config overridden from command line" arg="default.paths.plugins=/var/lib/grafana/plugins"
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=settings t=2025-10-08T09:47:23.363787129Z level=info msg="Config overridden from command line" arg="default.paths.provisioning=/etc/grafana/provisioning"
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=settings t=2025-10-08T09:47:23.363791749Z level=info msg="Config overridden from command line" arg="default.log.mode=console"
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=settings t=2025-10-08T09:47:23.363795519Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_DATA=/var/lib/grafana"
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=settings t=2025-10-08T09:47:23.363799329Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_LOGS=/var/log/grafana"
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=settings t=2025-10-08T09:47:23.363803189Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PLUGINS=/var/lib/grafana/plugins"
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=settings t=2025-10-08T09:47:23.36380773Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PROVISIONING=/etc/grafana/provisioning"
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=settings t=2025-10-08T09:47:23.36381219Z level=info msg=Target target=[all]
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=settings t=2025-10-08T09:47:23.36381898Z level=info msg="Path Home" path=/usr/share/grafana
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=settings t=2025-10-08T09:47:23.36382308Z level=info msg="Path Data" path=/var/lib/grafana
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=settings t=2025-10-08T09:47:23.36382735Z level=info msg="Path Logs" path=/var/log/grafana
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=settings t=2025-10-08T09:47:23.36383128Z level=info msg="Path Plugins" path=/var/lib/grafana/plugins
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=settings t=2025-10-08T09:47:23.36383528Z level=info msg="Path Provisioning" path=/etc/grafana/provisioning
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=settings t=2025-10-08T09:47:23.363838631Z level=info msg="App mode production"
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=sqlstore t=2025-10-08T09:47:23.364117969Z level=info msg="Connecting to DB" dbtype=sqlite3
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=sqlstore t=2025-10-08T09:47:23.36413979Z level=warn msg="SQLite database file has broader permissions than it should" path=/var/lib/grafana/grafana.db mode=-rw-r--r-- expected=-rw-r-----
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.364693748Z level=info msg="Starting DB migrations"
Oct  8 05:47:23 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e58 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.365804262Z level=info msg="Executing migration" id="create migration_log table"
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.366931908Z level=info msg="Migration successfully executed" id="create migration_log table" duration=1.128156ms
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.37045974Z level=info msg="Executing migration" id="create user table"
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.371220483Z level=info msg="Migration successfully executed" id="create user table" duration=760.463µs
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.372858955Z level=info msg="Executing migration" id="add unique index user.login"
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.373432694Z level=info msg="Migration successfully executed" id="add unique index user.login" duration=573.779µs
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.375350914Z level=info msg="Executing migration" id="add unique index user.email"
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.375928722Z level=info msg="Migration successfully executed" id="add unique index user.email" duration=577.748µs
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.377508462Z level=info msg="Executing migration" id="drop index UQE_user_login - v1"
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.378304987Z level=info msg="Migration successfully executed" id="drop index UQE_user_login - v1" duration=795.925µs
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.380023411Z level=info msg="Executing migration" id="drop index UQE_user_email - v1"
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.380718503Z level=info msg="Migration successfully executed" id="drop index UQE_user_email - v1" duration=694.552µs
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.382322583Z level=info msg="Executing migration" id="Rename table user to user_v1 - v1"
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.385026908Z level=info msg="Migration successfully executed" id="Rename table user to user_v1 - v1" duration=2.700715ms
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.386820476Z level=info msg="Executing migration" id="create user table v2"
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.38760154Z level=info msg="Migration successfully executed" id="create user table v2" duration=780.234µs
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.390175222Z level=info msg="Executing migration" id="create index UQE_user_login - v2"
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.390807151Z level=info msg="Migration successfully executed" id="create index UQE_user_login - v2" duration=631.629µs
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.392547646Z level=info msg="Executing migration" id="create index UQE_user_email - v2"
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.393185066Z level=info msg="Migration successfully executed" id="create index UQE_user_email - v2" duration=637.07µs
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.39518904Z level=info msg="Executing migration" id="copy data_source v1 to v2"
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.39551368Z level=info msg="Migration successfully executed" id="copy data_source v1 to v2" duration=324.64µs
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.397188012Z level=info msg="Executing migration" id="Drop old table user_v1"
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.397717129Z level=info msg="Migration successfully executed" id="Drop old table user_v1" duration=526.837µs
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.399710822Z level=info msg="Executing migration" id="Add column help_flags1 to user table"
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.40059622Z level=info msg="Migration successfully executed" id="Add column help_flags1 to user table" duration=887.058µs
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.403278295Z level=info msg="Executing migration" id="Update user table charset"
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.403303436Z level=info msg="Migration successfully executed" id="Update user table charset" duration=25.91µs
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.404957437Z level=info msg="Executing migration" id="Add last_seen_at column to user"
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.405827025Z level=info msg="Migration successfully executed" id="Add last_seen_at column to user" duration=869.258µs
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.407424985Z level=info msg="Executing migration" id="Add missing user data"
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.407644162Z level=info msg="Migration successfully executed" id="Add missing user data" duration=218.897µs
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.411817514Z level=info msg="Executing migration" id="Add is_disabled column to user"
Oct  8 05:47:23 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 9.16 scrub starts
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.413192517Z level=info msg="Migration successfully executed" id="Add is_disabled column to user" duration=1.373973ms
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.414974014Z level=info msg="Executing migration" id="Add index user.login/user.email"
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.415697556Z level=info msg="Migration successfully executed" id="Add index user.login/user.email" duration=722.642µs
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.417581746Z level=info msg="Executing migration" id="Add is_service_account column to user"
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.418491684Z level=info msg="Migration successfully executed" id="Add is_service_account column to user" duration=909.958µs
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.420307332Z level=info msg="Executing migration" id="Update is_service_account column to nullable"
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.427101386Z level=info msg="Migration successfully executed" id="Update is_service_account column to nullable" duration=6.791324ms
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.429196303Z level=info msg="Executing migration" id="Add uid column to user"
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.430161392Z level=info msg="Migration successfully executed" id="Add uid column to user" duration=965.07µs
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.432497376Z level=info msg="Executing migration" id="Update uid column values for users"
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.432671921Z level=info msg="Migration successfully executed" id="Update uid column values for users" duration=176.825µs
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.435063828Z level=info msg="Executing migration" id="Add unique index user_uid"
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.435702597Z level=info msg="Migration successfully executed" id="Add unique index user_uid" duration=639.189µs
Oct  8 05:47:23 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 9.16 scrub ok
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.440386816Z level=info msg="Executing migration" id="create temp user table v1-7"
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.441357196Z level=info msg="Migration successfully executed" id="create temp user table v1-7" duration=970.43µs
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.445684523Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v1-7"
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.446413345Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v1-7" duration=728.653µs
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:47:23 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f66280025c0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.448339766Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v1-7"
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.449014608Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v1-7" duration=674.552µs
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.450957629Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v1-7"
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.451516767Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v1-7" duration=559.138µs
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.453196139Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v1-7"
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.453739987Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v1-7" duration=543.987µs
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.455475771Z level=info msg="Executing migration" id="Update temp_user table charset"
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.455523922Z level=info msg="Migration successfully executed" id="Update temp_user table charset" duration=48.641µs
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.457237447Z level=info msg="Executing migration" id="drop index IDX_temp_user_email - v1"
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.457812504Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_email - v1" duration=570.907µs
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.459984694Z level=info msg="Executing migration" id="drop index IDX_temp_user_org_id - v1"
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.460559561Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_org_id - v1" duration=576.287µs
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.463831014Z level=info msg="Executing migration" id="drop index IDX_temp_user_code - v1"
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.464426613Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_code - v1" duration=597.409µs
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.466079105Z level=info msg="Executing migration" id="drop index IDX_temp_user_status - v1"
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.466765367Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_status - v1" duration=686.302µs
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.469793643Z level=info msg="Executing migration" id="Rename table temp_user to temp_user_tmp_qwerty - v1"
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.472312422Z level=info msg="Migration successfully executed" id="Rename table temp_user to temp_user_tmp_qwerty - v1" duration=2.518239ms
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.474705077Z level=info msg="Executing migration" id="create temp_user v2"
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.475367288Z level=info msg="Migration successfully executed" id="create temp_user v2" duration=662.331µs
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.477490215Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v2"
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.478100345Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v2" duration=609.82µs
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.48302793Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v2"
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.483705541Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v2" duration=678.361µs
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.486130958Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v2"
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.486730167Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v2" duration=599.539µs
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.489724261Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v2"
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.490415243Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v2" duration=692.512µs
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.493206111Z level=info msg="Executing migration" id="copy temp_user v1 to v2"
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.493604284Z level=info msg="Migration successfully executed" id="copy temp_user v1 to v2" duration=398.123µs
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.49632664Z level=info msg="Executing migration" id="drop temp_user_tmp_qwerty"
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.497134585Z level=info msg="Migration successfully executed" id="drop temp_user_tmp_qwerty" duration=811.155µs
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.499142769Z level=info msg="Executing migration" id="Set created for temp users that will otherwise prematurely expire"
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.49953302Z level=info msg="Migration successfully executed" id="Set created for temp users that will otherwise prematurely expire" duration=392.041µs
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.502135893Z level=info msg="Executing migration" id="create star table"
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.502660519Z level=info msg="Migration successfully executed" id="create star table" duration=524.536µs
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.50524241Z level=info msg="Executing migration" id="add unique index star.user_id_dashboard_id"
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.505838649Z level=info msg="Migration successfully executed" id="add unique index star.user_id_dashboard_id" duration=596.249µs
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.507515303Z level=info msg="Executing migration" id="create org table v1"
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.508096401Z level=info msg="Migration successfully executed" id="create org table v1" duration=581.068µs
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.510186527Z level=info msg="Executing migration" id="create index UQE_org_name - v1"
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.510763445Z level=info msg="Migration successfully executed" id="create index UQE_org_name - v1" duration=576.748µs
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.514889335Z level=info msg="Executing migration" id="create org_user table v1"
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.515482364Z level=info msg="Migration successfully executed" id="create org_user table v1" duration=592.299µs
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.517936821Z level=info msg="Executing migration" id="create index IDX_org_user_org_id - v1"
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.519092518Z level=info msg="Migration successfully executed" id="create index IDX_org_user_org_id - v1" duration=1.153317ms
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.521916066Z level=info msg="Executing migration" id="create index UQE_org_user_org_id_user_id - v1"
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.523053293Z level=info msg="Migration successfully executed" id="create index UQE_org_user_org_id_user_id - v1" duration=1.122276ms
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.525789329Z level=info msg="Executing migration" id="create index IDX_org_user_user_id - v1"
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.526741219Z level=info msg="Migration successfully executed" id="create index IDX_org_user_user_id - v1" duration=952µs
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.528764943Z level=info msg="Executing migration" id="Update org table charset"
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.528798304Z level=info msg="Migration successfully executed" id="Update org table charset" duration=34.801µs
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.531079466Z level=info msg="Executing migration" id="Update org_user table charset"
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.531109547Z level=info msg="Migration successfully executed" id="Update org_user table charset" duration=31.381µs
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.533881904Z level=info msg="Executing migration" id="Migrate all Read Only Viewers to Viewers"
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.534167543Z level=info msg="Migration successfully executed" id="Migrate all Read Only Viewers to Viewers" duration=286.399µs
Oct  8 05:47:23 np0005475493 python3[98114]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v19 --fsid 787292cc-8154-50c4-9e00-e9be3e817149 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user create --uid="openstack" --display-name "openstack" _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.536384183Z level=info msg="Executing migration" id="create dashboard table"
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.537388055Z level=info msg="Migration successfully executed" id="create dashboard table" duration=1.002342ms
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.540927607Z level=info msg="Executing migration" id="add index dashboard.account_id"
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.541694Z level=info msg="Migration successfully executed" id="add index dashboard.account_id" duration=767.053µs
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.543769597Z level=info msg="Executing migration" id="add unique index dashboard_account_id_slug"
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.544598322Z level=info msg="Migration successfully executed" id="add unique index dashboard_account_id_slug" duration=828.766µs
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.546527633Z level=info msg="Executing migration" id="create dashboard_tag table"
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.547157863Z level=info msg="Migration successfully executed" id="create dashboard_tag table" duration=629.469µs
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.54895267Z level=info msg="Executing migration" id="add unique index dashboard_tag.dasboard_id_term"
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.549685073Z level=info msg="Migration successfully executed" id="add unique index dashboard_tag.dasboard_id_term" duration=732.803µs
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.551541271Z level=info msg="Executing migration" id="drop index UQE_dashboard_tag_dashboard_id_term - v1"
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.55243871Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_tag_dashboard_id_term - v1" duration=898.149µs
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.554677Z level=info msg="Executing migration" id="Rename table dashboard to dashboard_v1 - v1"
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.560924798Z level=info msg="Migration successfully executed" id="Rename table dashboard to dashboard_v1 - v1" duration=6.242328ms
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.563631833Z level=info msg="Executing migration" id="create dashboard v2"
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.564423567Z level=info msg="Migration successfully executed" id="create dashboard v2" duration=792.544µs
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.568236078Z level=info msg="Executing migration" id="create index IDX_dashboard_org_id - v2"
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.56924346Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_org_id - v2" duration=1.007482ms
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.571229583Z level=info msg="Executing migration" id="create index UQE_dashboard_org_id_slug - v2"
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.571976056Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_org_id_slug - v2" duration=747.204µs
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.575877639Z level=info msg="Executing migration" id="copy dashboard v1 to v2"
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.576254801Z level=info msg="Migration successfully executed" id="copy dashboard v1 to v2" duration=376.822µs
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.579687949Z level=info msg="Executing migration" id="drop table dashboard_v1"
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.580615789Z level=info msg="Migration successfully executed" id="drop table dashboard_v1" duration=928.15µs
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.586310948Z level=info msg="Executing migration" id="alter dashboard.data to mediumtext v1"
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.586420611Z level=info msg="Migration successfully executed" id="alter dashboard.data to mediumtext v1" duration=111.354µs
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.589216279Z level=info msg="Executing migration" id="Add column updated_by in dashboard - v2"
Oct  8 05:47:23 np0005475493 podman[98144]: 2025-10-08 09:47:23.589496658 +0000 UTC m=+0.040779477 container create dc5c510f959ee980d665744efef7a9a3cccbb12affceccac875af2186e9116cb (image=quay.io/ceph/ceph:v19, name=busy_black, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.590832791Z level=info msg="Migration successfully executed" id="Add column updated_by in dashboard - v2" duration=1.615332ms
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.592691169Z level=info msg="Executing migration" id="Add column created_by in dashboard - v2"
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.594390743Z level=info msg="Migration successfully executed" id="Add column created_by in dashboard - v2" duration=1.699804ms
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.596417237Z level=info msg="Executing migration" id="Add column gnetId in dashboard"
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.597831331Z level=info msg="Migration successfully executed" id="Add column gnetId in dashboard" duration=1.415054ms
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.60190454Z level=info msg="Executing migration" id="Add index for gnetId in dashboard"
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.602654324Z level=info msg="Migration successfully executed" id="Add index for gnetId in dashboard" duration=749.224µs
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.6079268Z level=info msg="Executing migration" id="Add column plugin_id in dashboard"
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.609785428Z level=info msg="Migration successfully executed" id="Add column plugin_id in dashboard" duration=1.864278ms
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.612801304Z level=info msg="Executing migration" id="Add index for plugin_id in dashboard"
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.61395593Z level=info msg="Migration successfully executed" id="Add index for plugin_id in dashboard" duration=1.156796ms
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.616672536Z level=info msg="Executing migration" id="Add index for dashboard_id in dashboard_tag"
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.617348837Z level=info msg="Migration successfully executed" id="Add index for dashboard_id in dashboard_tag" duration=676.201µs
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.621491598Z level=info msg="Executing migration" id="Update dashboard table charset"
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.621515199Z level=info msg="Migration successfully executed" id="Update dashboard table charset" duration=24.401µs
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.625183585Z level=info msg="Executing migration" id="Update dashboard_tag table charset"
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.625204875Z level=info msg="Migration successfully executed" id="Update dashboard_tag table charset" duration=22.38µs
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.63140505Z level=info msg="Executing migration" id="Add column folder_id in dashboard"
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.632883127Z level=info msg="Migration successfully executed" id="Add column folder_id in dashboard" duration=1.478327ms
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.63457639Z level=info msg="Executing migration" id="Add column isFolder in dashboard"
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.635989035Z level=info msg="Migration successfully executed" id="Add column isFolder in dashboard" duration=1.412645ms
Oct  8 05:47:23 np0005475493 systemd[1]: Started libpod-conmon-dc5c510f959ee980d665744efef7a9a3cccbb12affceccac875af2186e9116cb.scope.
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.638812024Z level=info msg="Executing migration" id="Add column has_acl in dashboard"
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.640550089Z level=info msg="Migration successfully executed" id="Add column has_acl in dashboard" duration=1.737805ms
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.64249229Z level=info msg="Executing migration" id="Add column uid in dashboard"
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.643937236Z level=info msg="Migration successfully executed" id="Add column uid in dashboard" duration=1.444666ms
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.646392763Z level=info msg="Executing migration" id="Update uid column values in dashboard"
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.646586459Z level=info msg="Migration successfully executed" id="Update uid column values in dashboard" duration=193.296µs
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.648907203Z level=info msg="Executing migration" id="Add unique index dashboard_org_id_uid"
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.649578014Z level=info msg="Migration successfully executed" id="Add unique index dashboard_org_id_uid" duration=670.182µs
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.651285557Z level=info msg="Executing migration" id="Remove unique index org_id_slug"
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.652087563Z level=info msg="Migration successfully executed" id="Remove unique index org_id_slug" duration=799.216µs
Oct  8 05:47:23 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:47:23 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/158173a8601ef455c874590a082955d8a4e8ee2f60a959f6a275ea7b73a78840/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 05:47:23 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/158173a8601ef455c874590a082955d8a4e8ee2f60a959f6a275ea7b73a78840/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.662571913Z level=info msg="Executing migration" id="Update dashboard title length"
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.662602254Z level=info msg="Migration successfully executed" id="Update dashboard title length" duration=35.271µs
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.664871416Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_title_folder_id"
Oct  8 05:47:23 np0005475493 podman[98144]: 2025-10-08 09:47:23.665453824 +0000 UTC m=+0.116736673 container init dc5c510f959ee980d665744efef7a9a3cccbb12affceccac875af2186e9116cb (image=quay.io/ceph/ceph:v19, name=busy_black, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.665587449Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_title_folder_id" duration=717.632µs
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.667652993Z level=info msg="Executing migration" id="create dashboard_provisioning"
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.668268723Z level=info msg="Migration successfully executed" id="create dashboard_provisioning" duration=615.74µs
Oct  8 05:47:23 np0005475493 podman[98144]: 2025-10-08 09:47:23.573207545 +0000 UTC m=+0.024490364 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.67004946Z level=info msg="Executing migration" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1"
Oct  8 05:47:23 np0005475493 podman[98144]: 2025-10-08 09:47:23.671377581 +0000 UTC m=+0.122660400 container start dc5c510f959ee980d665744efef7a9a3cccbb12affceccac875af2186e9116cb (image=quay.io/ceph/ceph:v19, name=busy_black, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.67386196Z level=info msg="Migration successfully executed" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1" duration=3.809579ms
Oct  8 05:47:23 np0005475493 podman[98144]: 2025-10-08 09:47:23.675326006 +0000 UTC m=+0.126608825 container attach dc5c510f959ee980d665744efef7a9a3cccbb12affceccac875af2186e9116cb (image=quay.io/ceph/ceph:v19, name=busy_black, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.675720899Z level=info msg="Executing migration" id="create dashboard_provisioning v2"
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.676274115Z level=info msg="Migration successfully executed" id="create dashboard_provisioning v2" duration=553.226µs
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.678207907Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id - v2"
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.678845827Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id - v2" duration=637.189µs
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.681353287Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2"
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.681967445Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2" duration=613.899µs
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.688564864Z level=info msg="Executing migration" id="copy dashboard_provisioning v1 to v2"
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.689200874Z level=info msg="Migration successfully executed" id="copy dashboard_provisioning v1 to v2" duration=639.331µs
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.691209737Z level=info msg="Executing migration" id="drop dashboard_provisioning_tmp_qwerty"
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.691901869Z level=info msg="Migration successfully executed" id="drop dashboard_provisioning_tmp_qwerty" duration=692.162µs
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.693874061Z level=info msg="Executing migration" id="Add check_sum column"
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.695792391Z level=info msg="Migration successfully executed" id="Add check_sum column" duration=1.91815ms
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.69825837Z level=info msg="Executing migration" id="Add index for dashboard_title"
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.699124556Z level=info msg="Migration successfully executed" id="Add index for dashboard_title" duration=866.926µs
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.701224852Z level=info msg="Executing migration" id="delete tags for deleted dashboards"
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.702173533Z level=info msg="Migration successfully executed" id="delete tags for deleted dashboards" duration=948.501µs
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.704706513Z level=info msg="Executing migration" id="delete stars for deleted dashboards"
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.70493864Z level=info msg="Migration successfully executed" id="delete stars for deleted dashboards" duration=232.528µs
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.707445009Z level=info msg="Executing migration" id="Add index for dashboard_is_folder"
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.708207863Z level=info msg="Migration successfully executed" id="Add index for dashboard_is_folder" duration=762.704µs
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.710685761Z level=info msg="Executing migration" id="Add isPublic for dashboard"
Oct  8 05:47:23 np0005475493 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:47:23 np0005475493 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:47:23 np0005475493 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:47:23 np0005475493 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:47:23 np0005475493 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.713205601Z level=info msg="Migration successfully executed" id="Add isPublic for dashboard" duration=2.51869ms
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.71539889Z level=info msg="Executing migration" id="create data_source table"
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.716264847Z level=info msg="Migration successfully executed" id="create data_source table" duration=865.937µs
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.720962075Z level=info msg="Executing migration" id="add index data_source.account_id"
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.72173513Z level=info msg="Migration successfully executed" id="add index data_source.account_id" duration=774.965µs
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.723880967Z level=info msg="Executing migration" id="add unique index data_source.account_id_name"
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.72459123Z level=info msg="Migration successfully executed" id="add unique index data_source.account_id_name" duration=710.083µs
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.726538251Z level=info msg="Executing migration" id="drop index IDX_data_source_account_id - v1"
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.727247914Z level=info msg="Migration successfully executed" id="drop index IDX_data_source_account_id - v1" duration=709.483µs
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.73030902Z level=info msg="Executing migration" id="drop index UQE_data_source_account_id_name - v1"
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.731334243Z level=info msg="Migration successfully executed" id="drop index UQE_data_source_account_id_name - v1" duration=1.025453ms
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.735254506Z level=info msg="Executing migration" id="Rename table data_source to data_source_v1 - v1"
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.739881563Z level=info msg="Migration successfully executed" id="Rename table data_source to data_source_v1 - v1" duration=4.626647ms
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.742133354Z level=info msg="Executing migration" id="create data_source table v2"
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.74330172Z level=info msg="Migration successfully executed" id="create data_source table v2" duration=1.167587ms
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.747538474Z level=info msg="Executing migration" id="create index IDX_data_source_org_id - v2"
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.748482934Z level=info msg="Migration successfully executed" id="create index IDX_data_source_org_id - v2" duration=944.63µs
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.753522493Z level=info msg="Executing migration" id="create index UQE_data_source_org_id_name - v2"
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.75441458Z level=info msg="Migration successfully executed" id="create index UQE_data_source_org_id_name - v2" duration=892.117µs
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.757169407Z level=info msg="Executing migration" id="Drop old table data_source_v1 #2"
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.757974553Z level=info msg="Migration successfully executed" id="Drop old table data_source_v1 #2" duration=800.906µs
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.760052258Z level=info msg="Executing migration" id="Add column with_credentials"
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.76198689Z level=info msg="Migration successfully executed" id="Add column with_credentials" duration=1.935362ms
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.76390294Z level=info msg="Executing migration" id="Add secure json data column"
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.765772659Z level=info msg="Migration successfully executed" id="Add secure json data column" duration=1.869929ms
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.770435616Z level=info msg="Executing migration" id="Update data_source table charset"
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.770778767Z level=info msg="Migration successfully executed" id="Update data_source table charset" duration=342.091µs
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.774921467Z level=info msg="Executing migration" id="Update initial version to 1"
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.775273909Z level=info msg="Migration successfully executed" id="Update initial version to 1" duration=351.242µs
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.77723374Z level=info msg="Executing migration" id="Add read_only data column"
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.779186092Z level=info msg="Migration successfully executed" id="Add read_only data column" duration=1.952422ms
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.781614749Z level=info msg="Executing migration" id="Migrate logging ds to loki ds"
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.781872187Z level=info msg="Migration successfully executed" id="Migrate logging ds to loki ds" duration=258.488µs
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.783811068Z level=info msg="Executing migration" id="Update json_data with nulls"
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.784144099Z level=info msg="Migration successfully executed" id="Update json_data with nulls" duration=333.381µs
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.786577275Z level=info msg="Executing migration" id="Add uid column"
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.788672351Z level=info msg="Migration successfully executed" id="Add uid column" duration=2.094566ms
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.791280663Z level=info msg="Executing migration" id="Update uid value"
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.791551122Z level=info msg="Migration successfully executed" id="Update uid value" duration=270.839µs
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.795576589Z level=info msg="Executing migration" id="Add unique index datasource_org_id_uid"
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.797089507Z level=info msg="Migration successfully executed" id="Add unique index datasource_org_id_uid" duration=1.516907ms
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.799352878Z level=info msg="Executing migration" id="add unique index datasource_org_id_is_default"
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.800701271Z level=info msg="Migration successfully executed" id="add unique index datasource_org_id_is_default" duration=1.350223ms
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.803618623Z level=info msg="Executing migration" id="create api_key table"
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.804903113Z level=info msg="Migration successfully executed" id="create api_key table" duration=1.28371ms
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.80767273Z level=info msg="Executing migration" id="add index api_key.account_id"
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.80922333Z level=info msg="Migration successfully executed" id="add index api_key.account_id" duration=1.55063ms
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.812276016Z level=info msg="Executing migration" id="add index api_key.key"
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.813272367Z level=info msg="Migration successfully executed" id="add index api_key.key" duration=996.781µs
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.815234349Z level=info msg="Executing migration" id="add index api_key.account_id_name"
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.816356254Z level=info msg="Migration successfully executed" id="add index api_key.account_id_name" duration=1.124495ms
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.821537718Z level=info msg="Executing migration" id="drop index IDX_api_key_account_id - v1"
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.822785277Z level=info msg="Migration successfully executed" id="drop index IDX_api_key_account_id - v1" duration=1.24923ms
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.824721198Z level=info msg="Executing migration" id="drop index UQE_api_key_key - v1"
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.826231686Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_key - v1" duration=1.510158ms
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.830401537Z level=info msg="Executing migration" id="drop index UQE_api_key_account_id_name - v1"
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.831577955Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_account_id_name - v1" duration=1.177348ms
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.834365203Z level=info msg="Executing migration" id="Rename table api_key to api_key_v1 - v1"
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.839178074Z level=info msg="Migration successfully executed" id="Rename table api_key to api_key_v1 - v1" duration=4.814751ms
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.841200088Z level=info msg="Executing migration" id="create api_key table v2"
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.841820128Z level=info msg="Migration successfully executed" id="create api_key table v2" duration=615.819µs
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.845471743Z level=info msg="Executing migration" id="create index IDX_api_key_org_id - v2"
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.846162355Z level=info msg="Migration successfully executed" id="create index IDX_api_key_org_id - v2" duration=687.842µs
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.850569914Z level=info msg="Executing migration" id="create index UQE_api_key_key - v2"
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.851462672Z level=info msg="Migration successfully executed" id="create index UQE_api_key_key - v2" duration=895.388µs
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.853497616Z level=info msg="Executing migration" id="create index UQE_api_key_org_id_name - v2"
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.854218939Z level=info msg="Migration successfully executed" id="create index UQE_api_key_org_id_name - v2" duration=719.313µs
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.859875148Z level=info msg="Executing migration" id="copy api_key v1 to v2"
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.860296281Z level=info msg="Migration successfully executed" id="copy api_key v1 to v2" duration=422.104µs
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.862077267Z level=info msg="Executing migration" id="Drop old table api_key_v1"
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.862641745Z level=info msg="Migration successfully executed" id="Drop old table api_key_v1" duration=565.018µs
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.868542301Z level=info msg="Executing migration" id="Update api_key table charset"
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.868586952Z level=info msg="Migration successfully executed" id="Update api_key table charset" duration=48.211µs
Oct  8 05:47:23 np0005475493 busy_black[98160]: {
Oct  8 05:47:23 np0005475493 busy_black[98160]:    "user_id": "openstack",
Oct  8 05:47:23 np0005475493 busy_black[98160]:    "display_name": "openstack",
Oct  8 05:47:23 np0005475493 busy_black[98160]:    "email": "",
Oct  8 05:47:23 np0005475493 busy_black[98160]:    "suspended": 0,
Oct  8 05:47:23 np0005475493 busy_black[98160]:    "max_buckets": 1000,
Oct  8 05:47:23 np0005475493 busy_black[98160]:    "subusers": [],
Oct  8 05:47:23 np0005475493 busy_black[98160]:    "keys": [
Oct  8 05:47:23 np0005475493 busy_black[98160]:        {
Oct  8 05:47:23 np0005475493 busy_black[98160]:            "user": "openstack",
Oct  8 05:47:23 np0005475493 busy_black[98160]:            "access_key": "32PZJT640EWC6V5K10TY",
Oct  8 05:47:23 np0005475493 busy_black[98160]:            "secret_key": "Fa9b6AD4bUkQZvXtdLMApI7GwoxTHPqfY3ShJGwI",
Oct  8 05:47:23 np0005475493 busy_black[98160]:            "active": true,
Oct  8 05:47:23 np0005475493 busy_black[98160]:            "create_date": "2025-10-08T09:47:23.852017Z"
Oct  8 05:47:23 np0005475493 busy_black[98160]:        }
Oct  8 05:47:23 np0005475493 busy_black[98160]:    ],
Oct  8 05:47:23 np0005475493 busy_black[98160]:    "swift_keys": [],
Oct  8 05:47:23 np0005475493 busy_black[98160]:    "caps": [],
Oct  8 05:47:23 np0005475493 busy_black[98160]:    "op_mask": "read, write, delete",
Oct  8 05:47:23 np0005475493 busy_black[98160]:    "default_placement": "",
Oct  8 05:47:23 np0005475493 busy_black[98160]:    "default_storage_class": "",
Oct  8 05:47:23 np0005475493 busy_black[98160]:    "placement_tags": [],
Oct  8 05:47:23 np0005475493 busy_black[98160]:    "bucket_quota": {
Oct  8 05:47:23 np0005475493 busy_black[98160]:        "enabled": false,
Oct  8 05:47:23 np0005475493 busy_black[98160]:        "check_on_raw": false,
Oct  8 05:47:23 np0005475493 busy_black[98160]:        "max_size": -1,
Oct  8 05:47:23 np0005475493 busy_black[98160]:        "max_size_kb": 0,
Oct  8 05:47:23 np0005475493 busy_black[98160]:        "max_objects": -1
Oct  8 05:47:23 np0005475493 busy_black[98160]:    },
Oct  8 05:47:23 np0005475493 busy_black[98160]:    "user_quota": {
Oct  8 05:47:23 np0005475493 busy_black[98160]:        "enabled": false,
Oct  8 05:47:23 np0005475493 busy_black[98160]:        "check_on_raw": false,
Oct  8 05:47:23 np0005475493 busy_black[98160]:        "max_size": -1,
Oct  8 05:47:23 np0005475493 busy_black[98160]:        "max_size_kb": 0,
Oct  8 05:47:23 np0005475493 busy_black[98160]:        "max_objects": -1
Oct  8 05:47:23 np0005475493 busy_black[98160]:    },
Oct  8 05:47:23 np0005475493 busy_black[98160]:    "temp_url_keys": [],
Oct  8 05:47:23 np0005475493 busy_black[98160]:    "type": "rgw",
Oct  8 05:47:23 np0005475493 busy_black[98160]:    "mfa_ids": [],
Oct  8 05:47:23 np0005475493 busy_black[98160]:    "account_id": "",
Oct  8 05:47:23 np0005475493 busy_black[98160]:    "path": "/",
Oct  8 05:47:23 np0005475493 busy_black[98160]:    "create_date": "2025-10-08T09:47:23.851736Z",
Oct  8 05:47:23 np0005475493 busy_black[98160]:    "tags": [],
Oct  8 05:47:23 np0005475493 busy_black[98160]:    "group_ids": []
Oct  8 05:47:23 np0005475493 busy_black[98160]: }
Oct  8 05:47:23 np0005475493 busy_black[98160]: 
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.872722712Z level=info msg="Executing migration" id="Add expires to api_key table"
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.875555722Z level=info msg="Migration successfully executed" id="Add expires to api_key table" duration=2.83864ms
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.87738679Z level=info msg="Executing migration" id="Add service account foreign key"
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.880464216Z level=info msg="Migration successfully executed" id="Add service account foreign key" duration=3.076776ms
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.882496781Z level=info msg="Executing migration" id="set service account foreign key to nil if 0"
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.882716028Z level=info msg="Migration successfully executed" id="set service account foreign key to nil if 0" duration=223.827µs
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.884664089Z level=info msg="Executing migration" id="Add last_used_at to api_key table"
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.887765097Z level=info msg="Migration successfully executed" id="Add last_used_at to api_key table" duration=3.099648ms
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.889577474Z level=info msg="Executing migration" id="Add is_revoked column to api_key table"
Oct  8 05:47:23 np0005475493 podman[98281]: 2025-10-08 09:47:23.890150282 +0000 UTC m=+0.041145068 container create 6d7c303463f92ed51ec0d120bcf71f4dc1a2f279a809c2cfcd0790db62e25a87 (image=quay.io/ceph/haproxy:2.3, name=confident_driscoll)
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.892757985Z level=info msg="Migration successfully executed" id="Add is_revoked column to api_key table" duration=3.17943ms
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.894860391Z level=info msg="Executing migration" id="create dashboard_snapshot table v4"
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.895993967Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v4" duration=1.133727ms
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.899685464Z level=info msg="Executing migration" id="drop table dashboard_snapshot_v4 #1"
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.900377575Z level=info msg="Migration successfully executed" id="drop table dashboard_snapshot_v4 #1" duration=695.392µs
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.902755099Z level=info msg="Executing migration" id="create dashboard_snapshot table v5 #2"
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.903532304Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v5 #2" duration=777.455µs
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.905342692Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_key - v5"
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.906120816Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_key - v5" duration=777.814µs
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.90782146Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_delete_key - v5"
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.90848555Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_delete_key - v5" duration=662.38µs
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.910451183Z level=info msg="Executing migration" id="create index IDX_dashboard_snapshot_user_id - v5"
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.911225727Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_snapshot_user_id - v5" duration=774.374µs
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.913087966Z level=info msg="Executing migration" id="alter dashboard_snapshot to mediumtext v2"
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.913129307Z level=info msg="Migration successfully executed" id="alter dashboard_snapshot to mediumtext v2" duration=41.461µs
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.914726947Z level=info msg="Executing migration" id="Update dashboard_snapshot table charset"
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.914751028Z level=info msg="Migration successfully executed" id="Update dashboard_snapshot table charset" duration=24.771µs
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.916752581Z level=info msg="Executing migration" id="Add column external_delete_url to dashboard_snapshots table"
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.919296672Z level=info msg="Migration successfully executed" id="Add column external_delete_url to dashboard_snapshots table" duration=2.543411ms
Oct  8 05:47:23 np0005475493 systemd[1]: Started libpod-conmon-6d7c303463f92ed51ec0d120bcf71f4dc1a2f279a809c2cfcd0790db62e25a87.scope.
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.921535293Z level=info msg="Executing migration" id="Add encrypted dashboard json column"
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.923609158Z level=info msg="Migration successfully executed" id="Add encrypted dashboard json column" duration=2.071006ms
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.927214342Z level=info msg="Executing migration" id="Change dashboard_encrypted column to MEDIUMBLOB"
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.927259583Z level=info msg="Migration successfully executed" id="Change dashboard_encrypted column to MEDIUMBLOB" duration=45.681µs
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.928839793Z level=info msg="Executing migration" id="create quota table v1"
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.929412811Z level=info msg="Migration successfully executed" id="create quota table v1" duration=572.929µs
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.931572689Z level=info msg="Executing migration" id="create index UQE_quota_org_id_user_id_target - v1"
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.932413056Z level=info msg="Migration successfully executed" id="create index UQE_quota_org_id_user_id_target - v1" duration=840.336µs
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.934115469Z level=info msg="Executing migration" id="Update quota table charset"
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.934135499Z level=info msg="Migration successfully executed" id="Update quota table charset" duration=20.65µs
Oct  8 05:47:23 np0005475493 podman[98144]: 2025-10-08 09:47:23.935123751 +0000 UTC m=+0.386406600 container died dc5c510f959ee980d665744efef7a9a3cccbb12affceccac875af2186e9116cb (image=quay.io/ceph/ceph:v19, name=busy_black, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.936797683Z level=info msg="Executing migration" id="create plugin_setting table"
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.937398293Z level=info msg="Migration successfully executed" id="create plugin_setting table" duration=600.74µs
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.94079025Z level=info msg="Executing migration" id="create index UQE_plugin_setting_org_id_plugin_id - v1"
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.94143564Z level=info msg="Migration successfully executed" id="create index UQE_plugin_setting_org_id_plugin_id - v1" duration=644.91µs
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.944050303Z level=info msg="Executing migration" id="Add column plugin_version to plugin_settings"
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.946093127Z level=info msg="Migration successfully executed" id="Add column plugin_version to plugin_settings" duration=2.042164ms
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.948797982Z level=info msg="Executing migration" id="Update plugin_setting table charset"
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.948818743Z level=info msg="Migration successfully executed" id="Update plugin_setting table charset" duration=21.261µs
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.950545537Z level=info msg="Executing migration" id="create session table"
Oct  8 05:47:23 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:47:23 np0005475493 systemd[1]: libpod-dc5c510f959ee980d665744efef7a9a3cccbb12affceccac875af2186e9116cb.scope: Deactivated successfully.
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.951601441Z level=info msg="Migration successfully executed" id="create session table" duration=1.026143ms
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.953769689Z level=info msg="Executing migration" id="Drop old table playlist table"
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.953872652Z level=info msg="Migration successfully executed" id="Drop old table playlist table" duration=103.433µs
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.955467063Z level=info msg="Executing migration" id="Drop old table playlist_item table"
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.955558886Z level=info msg="Migration successfully executed" id="Drop old table playlist_item table" duration=92.013µs
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.957734284Z level=info msg="Executing migration" id="create playlist table v2"
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.958344834Z level=info msg="Migration successfully executed" id="create playlist table v2" duration=610.59µs
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.960057887Z level=info msg="Executing migration" id="create playlist item table v2"
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.960663317Z level=info msg="Migration successfully executed" id="create playlist item table v2" duration=605.64µs
Oct  8 05:47:23 np0005475493 podman[98281]: 2025-10-08 09:47:23.869003116 +0000 UTC m=+0.019997942 image pull e85424b0d443f37ddd2dd8a3bb2ef6f18dd352b987723a921b64289023af2914 quay.io/ceph/haproxy:2.3
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.966110088Z level=info msg="Executing migration" id="Update playlist table charset"
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.966132949Z level=info msg="Migration successfully executed" id="Update playlist table charset" duration=23.421µs
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.967967217Z level=info msg="Executing migration" id="Update playlist_item table charset"
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.967989907Z level=info msg="Migration successfully executed" id="Update playlist_item table charset" duration=23.63µs
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.969595649Z level=info msg="Executing migration" id="Add playlist column created_at"
Oct  8 05:47:23 np0005475493 systemd[1]: var-lib-containers-storage-overlay-158173a8601ef455c874590a082955d8a4e8ee2f60a959f6a275ea7b73a78840-merged.mount: Deactivated successfully.
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.971969433Z level=info msg="Migration successfully executed" id="Add playlist column created_at" duration=2.372944ms
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.976743384Z level=info msg="Executing migration" id="Add playlist column updated_at"
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.979319275Z level=info msg="Migration successfully executed" id="Add playlist column updated_at" duration=2.578972ms
Oct  8 05:47:23 np0005475493 podman[98281]: 2025-10-08 09:47:23.981146302 +0000 UTC m=+0.132141098 container init 6d7c303463f92ed51ec0d120bcf71f4dc1a2f279a809c2cfcd0790db62e25a87 (image=quay.io/ceph/haproxy:2.3, name=confident_driscoll)
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.982507025Z level=info msg="Executing migration" id="drop preferences table v2"
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.982627339Z level=info msg="Migration successfully executed" id="drop preferences table v2" duration=121.014µs
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.984986334Z level=info msg="Executing migration" id="drop preferences table v3"
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.985137708Z level=info msg="Migration successfully executed" id="drop preferences table v3" duration=151.105µs
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.987004687Z level=info msg="Executing migration" id="create preferences table v3"
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.98773244Z level=info msg="Migration successfully executed" id="create preferences table v3" duration=728.183µs
Oct  8 05:47:23 np0005475493 podman[98281]: 2025-10-08 09:47:23.988618078 +0000 UTC m=+0.139612864 container start 6d7c303463f92ed51ec0d120bcf71f4dc1a2f279a809c2cfcd0790db62e25a87 (image=quay.io/ceph/haproxy:2.3, name=confident_driscoll)
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.989615199Z level=info msg="Executing migration" id="Update preferences table charset"
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.989668221Z level=info msg="Migration successfully executed" id="Update preferences table charset" duration=53.722µs
Oct  8 05:47:23 np0005475493 confident_driscoll[98304]: 0 0
Oct  8 05:47:23 np0005475493 systemd[1]: libpod-6d7c303463f92ed51ec0d120bcf71f4dc1a2f279a809c2cfcd0790db62e25a87.scope: Deactivated successfully.
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.99535091Z level=info msg="Executing migration" id="Add column team_id in preferences"
Oct  8 05:47:23 np0005475493 podman[98281]: 2025-10-08 09:47:23.995925539 +0000 UTC m=+0.146920325 container attach 6d7c303463f92ed51ec0d120bcf71f4dc1a2f279a809c2cfcd0790db62e25a87 (image=quay.io/ceph/haproxy:2.3, name=confident_driscoll)
Oct  8 05:47:23 np0005475493 podman[98281]: 2025-10-08 09:47:23.996986512 +0000 UTC m=+0.147981298 container died 6d7c303463f92ed51ec0d120bcf71f4dc1a2f279a809c2cfcd0790db62e25a87 (image=quay.io/ceph/haproxy:2.3, name=confident_driscoll)
Oct  8 05:47:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:23.998327534Z level=info msg="Migration successfully executed" id="Add column team_id in preferences" duration=2.975984ms
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.002235018Z level=info msg="Executing migration" id="Update team_id column values in preferences"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.002457375Z level=info msg="Migration successfully executed" id="Update team_id column values in preferences" duration=220.857µs
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.005284774Z level=info msg="Executing migration" id="Add column week_start in preferences"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.007610727Z level=info msg="Migration successfully executed" id="Add column week_start in preferences" duration=2.325843ms
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.009710013Z level=info msg="Executing migration" id="Add column preferences.json_data"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.01215794Z level=info msg="Migration successfully executed" id="Add column preferences.json_data" duration=2.448107ms
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.016545319Z level=info msg="Executing migration" id="alter preferences.json_data to mediumtext v1"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.016824818Z level=info msg="Migration successfully executed" id="alter preferences.json_data to mediumtext v1" duration=282.349µs
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.01877754Z level=info msg="Executing migration" id="Add preferences index org_id"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.019623967Z level=info msg="Migration successfully executed" id="Add preferences index org_id" duration=846.327µs
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.021925439Z level=info msg="Executing migration" id="Add preferences index user_id"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.0229143Z level=info msg="Migration successfully executed" id="Add preferences index user_id" duration=988.831µs
Oct  8 05:47:24 np0005475493 systemd[1]: var-lib-containers-storage-overlay-15a299666df75208b19bc13f287e74f6c95d5000aabd8ff9935fbeca52106f85-merged.mount: Deactivated successfully.
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.025949335Z level=info msg="Executing migration" id="create alert table v1"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.027009719Z level=info msg="Migration successfully executed" id="create alert table v1" duration=1.060474ms
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.029987563Z level=info msg="Executing migration" id="add index alert org_id & id "
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.030971715Z level=info msg="Migration successfully executed" id="add index alert org_id & id " duration=984.311µs
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.03464274Z level=info msg="Executing migration" id="add index alert state"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.035615891Z level=info msg="Migration successfully executed" id="add index alert state" duration=971.641µs
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.039574326Z level=info msg="Executing migration" id="add index alert dashboard_id"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.040525835Z level=info msg="Migration successfully executed" id="add index alert dashboard_id" duration=951.719µs
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.042485848Z level=info msg="Executing migration" id="Create alert_rule_tag table v1"
Oct  8 05:47:24 np0005475493 podman[98281]: 2025-10-08 09:47:24.042800327 +0000 UTC m=+0.193795113 container remove 6d7c303463f92ed51ec0d120bcf71f4dc1a2f279a809c2cfcd0790db62e25a87 (image=quay.io/ceph/haproxy:2.3, name=confident_driscoll)
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.043590922Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v1" duration=1.104574ms
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.045149712Z level=info msg="Executing migration" id="Add unique index alert_rule_tag.alert_id_tag_id"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.046084201Z level=info msg="Migration successfully executed" id="Add unique index alert_rule_tag.alert_id_tag_id" duration=934.179µs
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.047735103Z level=info msg="Executing migration" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.048747205Z level=info msg="Migration successfully executed" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1" duration=1.011982ms
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.051289995Z level=info msg="Executing migration" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1"
Oct  8 05:47:24 np0005475493 systemd[1]: libpod-conmon-6d7c303463f92ed51ec0d120bcf71f4dc1a2f279a809c2cfcd0790db62e25a87.scope: Deactivated successfully.
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.060060751Z level=info msg="Migration successfully executed" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1" duration=8.768556ms
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.062388625Z level=info msg="Executing migration" id="Create alert_rule_tag table v2"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.06319489Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v2" duration=806.875µs
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.065501193Z level=info msg="Executing migration" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2"
Oct  8 05:47:24 np0005475493 podman[98144]: 2025-10-08 09:47:24.064338077 +0000 UTC m=+0.515620926 container remove dc5c510f959ee980d665744efef7a9a3cccbb12affceccac875af2186e9116cb (image=quay.io/ceph/ceph:v19, name=busy_black, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.066539066Z level=info msg="Migration successfully executed" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2" duration=1.037603ms
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.068301851Z level=info msg="Executing migration" id="copy alert_rule_tag v1 to v2"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.068695374Z level=info msg="Migration successfully executed" id="copy alert_rule_tag v1 to v2" duration=393.293µs
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.070310475Z level=info msg="Executing migration" id="drop table alert_rule_tag_v1"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.070925934Z level=info msg="Migration successfully executed" id="drop table alert_rule_tag_v1" duration=615.259µs
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.072956799Z level=info msg="Executing migration" id="create alert_notification table v1"
Oct  8 05:47:24 np0005475493 systemd[1]: libpod-conmon-dc5c510f959ee980d665744efef7a9a3cccbb12affceccac875af2186e9116cb.scope: Deactivated successfully.
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.073778434Z level=info msg="Migration successfully executed" id="create alert_notification table v1" duration=821.145µs
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.075572111Z level=info msg="Executing migration" id="Add column is_default"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.078355479Z level=info msg="Migration successfully executed" id="Add column is_default" duration=2.782868ms
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.080135835Z level=info msg="Executing migration" id="Add column frequency"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.082919073Z level=info msg="Migration successfully executed" id="Add column frequency" duration=2.782658ms
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.084769671Z level=info msg="Executing migration" id="Add column send_reminder"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.088663924Z level=info msg="Migration successfully executed" id="Add column send_reminder" duration=3.892733ms
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.090594685Z level=info msg="Executing migration" id="Add column disable_resolve_message"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.093876408Z level=info msg="Migration successfully executed" id="Add column disable_resolve_message" duration=3.281394ms
Oct  8 05:47:24 np0005475493 systemd[1]: Reloading.
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.095984164Z level=info msg="Executing migration" id="add index alert_notification org_id & name"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.097138251Z level=info msg="Migration successfully executed" id="add index alert_notification org_id & name" duration=1.153697ms
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.09932626Z level=info msg="Executing migration" id="Update alert table charset"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.099477355Z level=info msg="Migration successfully executed" id="Update alert table charset" duration=151.855µs
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.101253252Z level=info msg="Executing migration" id="Update alert_notification table charset"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.101397456Z level=info msg="Migration successfully executed" id="Update alert_notification table charset" duration=146.585µs
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.103202343Z level=info msg="Executing migration" id="create notification_journal table v1"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.104002378Z level=info msg="Migration successfully executed" id="create notification_journal table v1" duration=799.385µs
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.106398294Z level=info msg="Executing migration" id="add index notification_journal org_id & alert_id & notifier_id"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.107697964Z level=info msg="Migration successfully executed" id="add index notification_journal org_id & alert_id & notifier_id" duration=1.30166ms
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.109733699Z level=info msg="Executing migration" id="drop alert_notification_journal"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.110886675Z level=info msg="Migration successfully executed" id="drop alert_notification_journal" duration=1.152576ms
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.112766284Z level=info msg="Executing migration" id="create alert_notification_state table v1"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.113832988Z level=info msg="Migration successfully executed" id="create alert_notification_state table v1" duration=1.066244ms
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.115940254Z level=info msg="Executing migration" id="add index alert_notification_state org_id & alert_id & notifier_id"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.117163703Z level=info msg="Migration successfully executed" id="add index alert_notification_state org_id & alert_id & notifier_id" duration=1.222709ms
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.118995071Z level=info msg="Executing migration" id="Add for to alert table"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.123937546Z level=info msg="Migration successfully executed" id="Add for to alert table" duration=4.941556ms
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.127066405Z level=info msg="Executing migration" id="Add column uid in alert_notification"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.132201258Z level=info msg="Migration successfully executed" id="Add column uid in alert_notification" duration=5.131433ms
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.135609645Z level=info msg="Executing migration" id="Update uid column values in alert_notification"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.135826351Z level=info msg="Migration successfully executed" id="Update uid column values in alert_notification" duration=217.847µs
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.13800915Z level=info msg="Executing migration" id="Add unique index alert_notification_org_id_uid"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.139276591Z level=info msg="Migration successfully executed" id="Add unique index alert_notification_org_id_uid" duration=1.26678ms
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.141108068Z level=info msg="Executing migration" id="Remove unique index org_id_name"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.142131001Z level=info msg="Migration successfully executed" id="Remove unique index org_id_name" duration=1.022913ms
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.144569127Z level=info msg="Executing migration" id="Add column secure_settings in alert_notification"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.148827142Z level=info msg="Migration successfully executed" id="Add column secure_settings in alert_notification" duration=4.256745ms
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.151147345Z level=info msg="Executing migration" id="alter alert.settings to mediumtext"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.151228198Z level=info msg="Migration successfully executed" id="alter alert.settings to mediumtext" duration=81.753µs
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.153424747Z level=info msg="Executing migration" id="Add non-unique index alert_notification_state_alert_id"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.154588264Z level=info msg="Migration successfully executed" id="Add non-unique index alert_notification_state_alert_id" duration=1.163137ms
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.156520645Z level=info msg="Executing migration" id="Add non-unique index alert_rule_tag_alert_id"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.157951219Z level=info msg="Migration successfully executed" id="Add non-unique index alert_rule_tag_alert_id" duration=1.430004ms
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.160320834Z level=info msg="Executing migration" id="Drop old annotation table v4"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.160439378Z level=info msg="Migration successfully executed" id="Drop old annotation table v4" duration=119.414µs
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.164378772Z level=info msg="Executing migration" id="create annotation table v5"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.165683964Z level=info msg="Migration successfully executed" id="create annotation table v5" duration=1.303252ms
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.167975976Z level=info msg="Executing migration" id="add index annotation 0 v3"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.169283507Z level=info msg="Migration successfully executed" id="add index annotation 0 v3" duration=1.308291ms
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.17125625Z level=info msg="Executing migration" id="add index annotation 1 v3"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.172394625Z level=info msg="Migration successfully executed" id="add index annotation 1 v3" duration=1.138455ms
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.174433869Z level=info msg="Executing migration" id="add index annotation 2 v3"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.175554035Z level=info msg="Migration successfully executed" id="add index annotation 2 v3" duration=1.122176ms
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.177657161Z level=info msg="Executing migration" id="add index annotation 3 v3"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.178935602Z level=info msg="Migration successfully executed" id="add index annotation 3 v3" duration=1.278051ms
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.181151121Z level=info msg="Executing migration" id="add index annotation 4 v3"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.182481484Z level=info msg="Migration successfully executed" id="add index annotation 4 v3" duration=1.330303ms
Oct  8 05:47:24 np0005475493 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  8 05:47:24 np0005475493 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.189486974Z level=info msg="Executing migration" id="Update annotation table charset"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.189554906Z level=info msg="Migration successfully executed" id="Update annotation table charset" duration=71.802µs
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.192451788Z level=info msg="Executing migration" id="Add column region_id to annotation table"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.202393961Z level=info msg="Migration successfully executed" id="Add column region_id to annotation table" duration=9.939393ms
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.205284272Z level=info msg="Executing migration" id="Drop category_id index"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.208556586Z level=info msg="Migration successfully executed" id="Drop category_id index" duration=3.272163ms
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.211146947Z level=info msg="Executing migration" id="Add column tags to annotation table"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.21916761Z level=info msg="Migration successfully executed" id="Add column tags to annotation table" duration=8.020423ms
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.221514585Z level=info msg="Executing migration" id="Create annotation_tag table v2"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.222898018Z level=info msg="Migration successfully executed" id="Create annotation_tag table v2" duration=1.383613ms
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.225387747Z level=info msg="Executing migration" id="Add unique index annotation_tag.annotation_id_tag_id"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.226956977Z level=info msg="Migration successfully executed" id="Add unique index annotation_tag.annotation_id_tag_id" duration=1.56873ms
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.229801016Z level=info msg="Executing migration" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.231498319Z level=info msg="Migration successfully executed" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2" duration=1.697553ms
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.233968007Z level=info msg="Executing migration" id="Rename table annotation_tag to annotation_tag_v2 - v2"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.251737278Z level=info msg="Migration successfully executed" id="Rename table annotation_tag to annotation_tag_v2 - v2" duration=17.769671ms
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.253855595Z level=info msg="Executing migration" id="Create annotation_tag table v3"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.254609339Z level=info msg="Migration successfully executed" id="Create annotation_tag table v3" duration=751.514µs
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.256323202Z level=info msg="Executing migration" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.257224321Z level=info msg="Migration successfully executed" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3" duration=900.219µs
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.259118711Z level=info msg="Executing migration" id="copy annotation_tag v2 to v3"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.259443261Z level=info msg="Migration successfully executed" id="copy annotation_tag v2 to v3" duration=326.36µs
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.261191896Z level=info msg="Executing migration" id="drop table annotation_tag_v2"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.261833136Z level=info msg="Migration successfully executed" id="drop table annotation_tag_v2" duration=640.89µs
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.264867802Z level=info msg="Executing migration" id="Update alert annotations and set TEXT to empty"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.265070868Z level=info msg="Migration successfully executed" id="Update alert annotations and set TEXT to empty" duration=203.246µs
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.266896396Z level=info msg="Executing migration" id="Add created time to annotation table"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.270716187Z level=info msg="Migration successfully executed" id="Add created time to annotation table" duration=3.81944ms
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.272439591Z level=info msg="Executing migration" id="Add updated time to annotation table"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.276254401Z level=info msg="Migration successfully executed" id="Add updated time to annotation table" duration=3.813811ms
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.278088809Z level=info msg="Executing migration" id="Add index for created in annotation table"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.279282967Z level=info msg="Migration successfully executed" id="Add index for created in annotation table" duration=1.193328ms
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.281346022Z level=info msg="Executing migration" id="Add index for updated in annotation table"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.282155708Z level=info msg="Migration successfully executed" id="Add index for updated in annotation table" duration=809.465µs
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.284191811Z level=info msg="Executing migration" id="Convert existing annotations from seconds to milliseconds"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.284417929Z level=info msg="Migration successfully executed" id="Convert existing annotations from seconds to milliseconds" duration=226.648µs
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.286266507Z level=info msg="Executing migration" id="Add epoch_end column"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.289328793Z level=info msg="Migration successfully executed" id="Add epoch_end column" duration=3.061446ms
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.291167212Z level=info msg="Executing migration" id="Add index for epoch_end"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.291891874Z level=info msg="Migration successfully executed" id="Add index for epoch_end" duration=724.232µs
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.293612699Z level=info msg="Executing migration" id="Make epoch_end the same as epoch"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.293768114Z level=info msg="Migration successfully executed" id="Make epoch_end the same as epoch" duration=150.165µs
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.295408245Z level=info msg="Executing migration" id="Move region to single row"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.295720865Z level=info msg="Migration successfully executed" id="Move region to single row" duration=312.57µs
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.297657186Z level=info msg="Executing migration" id="Remove index org_id_epoch from annotation table"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.298488092Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch from annotation table" duration=830.856µs
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.300226597Z level=info msg="Executing migration" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.30095851Z level=info msg="Migration successfully executed" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table" duration=731.323µs
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.302747167Z level=info msg="Executing migration" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.30348876Z level=info msg="Migration successfully executed" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table" duration=738.783µs
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.305263577Z level=info msg="Executing migration" id="Add index for org_id_epoch_end_epoch on annotation table"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.305998779Z level=info msg="Migration successfully executed" id="Add index for org_id_epoch_end_epoch on annotation table" duration=734.992µs
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.308017224Z level=info msg="Executing migration" id="Remove index org_id_epoch_epoch_end from annotation table"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.308927391Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch_epoch_end from annotation table" duration=909.818µs
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.310689388Z level=info msg="Executing migration" id="Add index for alert_id on annotation table"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.311538934Z level=info msg="Migration successfully executed" id="Add index for alert_id on annotation table" duration=851.316µs
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.313358692Z level=info msg="Executing migration" id="Increase tags column to length 4096"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.313432574Z level=info msg="Migration successfully executed" id="Increase tags column to length 4096" duration=73.902µs
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.315087516Z level=info msg="Executing migration" id="create test_data table"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.315724966Z level=info msg="Migration successfully executed" id="create test_data table" duration=637.29µs
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.317261224Z level=info msg="Executing migration" id="create dashboard_version table v1"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.317958557Z level=info msg="Migration successfully executed" id="create dashboard_version table v1" duration=697.173µs
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.319582538Z level=info msg="Executing migration" id="add index dashboard_version.dashboard_id"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.320399823Z level=info msg="Migration successfully executed" id="add index dashboard_version.dashboard_id" duration=817.185µs
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.321905231Z level=info msg="Executing migration" id="add unique index dashboard_version.dashboard_id and dashboard_version.version"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.322685335Z level=info msg="Migration successfully executed" id="add unique index dashboard_version.dashboard_id and dashboard_version.version" duration=779.724µs
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.324427221Z level=info msg="Executing migration" id="Set dashboard version to 1 where 0"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.324591326Z level=info msg="Migration successfully executed" id="Set dashboard version to 1 where 0" duration=164.125µs
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.326414313Z level=info msg="Executing migration" id="save existing dashboard data in dashboard_version table v1"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.326743384Z level=info msg="Migration successfully executed" id="save existing dashboard data in dashboard_version table v1" duration=328.751µs
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.328443657Z level=info msg="Executing migration" id="alter dashboard_version.data to mediumtext v1"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.32852886Z level=info msg="Migration successfully executed" id="alter dashboard_version.data to mediumtext v1" duration=85.113µs
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.33042948Z level=info msg="Executing migration" id="create team table"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.331026499Z level=info msg="Migration successfully executed" id="create team table" duration=596.638µs
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.332888068Z level=info msg="Executing migration" id="add index team.org_id"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.334082125Z level=info msg="Migration successfully executed" id="add index team.org_id" duration=1.192417ms
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.336014127Z level=info msg="Executing migration" id="add unique index team_org_id_name"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.336907814Z level=info msg="Migration successfully executed" id="add unique index team_org_id_name" duration=893.497µs
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.338729862Z level=info msg="Executing migration" id="Add column uid in team"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.342314945Z level=info msg="Migration successfully executed" id="Add column uid in team" duration=3.584652ms
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.344009969Z level=info msg="Executing migration" id="Update uid column values in team"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.344215005Z level=info msg="Migration successfully executed" id="Update uid column values in team" duration=205.106µs
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.346085914Z level=info msg="Executing migration" id="Add unique index team_org_id_uid"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.347076895Z level=info msg="Migration successfully executed" id="Add unique index team_org_id_uid" duration=991.111µs
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.348772729Z level=info msg="Executing migration" id="create team member table"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.349502952Z level=info msg="Migration successfully executed" id="create team member table" duration=729.623µs
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.35323828Z level=info msg="Executing migration" id="add index team_member.org_id"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.354068056Z level=info msg="Migration successfully executed" id="add index team_member.org_id" duration=830.116µs
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.35576109Z level=info msg="Executing migration" id="add unique index team_member_org_id_team_id_user_id"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.356729389Z level=info msg="Migration successfully executed" id="add unique index team_member_org_id_team_id_user_id" duration=967.159µs
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.35830434Z level=info msg="Executing migration" id="add index team_member.team_id"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.359246029Z level=info msg="Migration successfully executed" id="add index team_member.team_id" duration=939.889µs
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.361314325Z level=info msg="Executing migration" id="Add column email to team table"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.366669134Z level=info msg="Migration successfully executed" id="Add column email to team table" duration=5.351018ms
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.368867153Z level=info msg="Executing migration" id="Add column external to team_member table"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.372497697Z level=info msg="Migration successfully executed" id="Add column external to team_member table" duration=3.630384ms
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.374617824Z level=info msg="Executing migration" id="Add column permission to team_member table"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.378195847Z level=info msg="Migration successfully executed" id="Add column permission to team_member table" duration=3.576693ms
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.380121328Z level=info msg="Executing migration" id="create dashboard acl table"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.380999946Z level=info msg="Migration successfully executed" id="create dashboard acl table" duration=878.638µs
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.382794362Z level=info msg="Executing migration" id="add index dashboard_acl_dashboard_id"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.383712401Z level=info msg="Migration successfully executed" id="add index dashboard_acl_dashboard_id" duration=917.689µs
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.386100516Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_user_id"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.387164419Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_user_id" duration=1.063573ms
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.389420951Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_team_id"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.390609858Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_team_id" duration=1.188537ms
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.392594391Z level=info msg="Executing migration" id="add index dashboard_acl_user_id"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.393495029Z level=info msg="Migration successfully executed" id="add index dashboard_acl_user_id" duration=900.889µs
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.395060969Z level=info msg="Executing migration" id="add index dashboard_acl_team_id"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.395848273Z level=info msg="Migration successfully executed" id="add index dashboard_acl_team_id" duration=787.434µs
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.397602049Z level=info msg="Executing migration" id="add index dashboard_acl_org_id_role"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.398408164Z level=info msg="Migration successfully executed" id="add index dashboard_acl_org_id_role" duration=805.705µs
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.399946373Z level=info msg="Executing migration" id="add index dashboard_permission"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.400720008Z level=info msg="Migration successfully executed" id="add index dashboard_permission" duration=773.535µs
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.40240217Z level=info msg="Executing migration" id="save default acl rules in dashboard_acl table"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.402846125Z level=info msg="Migration successfully executed" id="save default acl rules in dashboard_acl table" duration=443.395µs
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.404382833Z level=info msg="Executing migration" id="delete acl rules for deleted dashboards and folders"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.404585699Z level=info msg="Migration successfully executed" id="delete acl rules for deleted dashboards and folders" duration=202.996µs
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.406137349Z level=info msg="Executing migration" id="create tag table"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.406749407Z level=info msg="Migration successfully executed" id="create tag table" duration=613.559µs
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.408267356Z level=info msg="Executing migration" id="add index tag.key_value"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.408980478Z level=info msg="Migration successfully executed" id="add index tag.key_value" duration=712.792µs
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.410559578Z level=info msg="Executing migration" id="create login attempt table"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.411155337Z level=info msg="Migration successfully executed" id="create login attempt table" duration=595.339µs
Oct  8 05:47:24 np0005475493 systemd[1]: Reloading.
Oct  8 05:47:24 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 8.17 scrub starts
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.414320707Z level=info msg="Executing migration" id="add index login_attempt.username"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.415240045Z level=info msg="Migration successfully executed" id="add index login_attempt.username" duration=918.218µs
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.417091464Z level=info msg="Executing migration" id="drop index IDX_login_attempt_username - v1"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.418278511Z level=info msg="Migration successfully executed" id="drop index IDX_login_attempt_username - v1" duration=1.186487ms
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.420205642Z level=info msg="Executing migration" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1"
Oct  8 05:47:24 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 8.17 scrub ok
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.430182467Z level=info msg="Migration successfully executed" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1" duration=9.975545ms
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.431941802Z level=info msg="Executing migration" id="create login_attempt v2"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.432687696Z level=info msg="Migration successfully executed" id="create login_attempt v2" duration=745.993µs
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.434863664Z level=info msg="Executing migration" id="create index IDX_login_attempt_username - v2"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.435598247Z level=info msg="Migration successfully executed" id="create index IDX_login_attempt_username - v2" duration=734.523µs
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.437290691Z level=info msg="Executing migration" id="copy login_attempt v1 to v2"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.43757843Z level=info msg="Migration successfully executed" id="copy login_attempt v1 to v2" duration=287.799µs
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.439269174Z level=info msg="Executing migration" id="drop login_attempt_tmp_qwerty"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.439803511Z level=info msg="Migration successfully executed" id="drop login_attempt_tmp_qwerty" duration=533.747µs
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.441643728Z level=info msg="Executing migration" id="create user auth table"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.442449164Z level=info msg="Migration successfully executed" id="create user auth table" duration=805.826µs
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.444573841Z level=info msg="Executing migration" id="create index IDX_user_auth_auth_module_auth_id - v1"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.445527491Z level=info msg="Migration successfully executed" id="create index IDX_user_auth_auth_module_auth_id - v1" duration=953.25µs
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.447384489Z level=info msg="Executing migration" id="alter user_auth.auth_id to length 190"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.447461522Z level=info msg="Migration successfully executed" id="alter user_auth.auth_id to length 190" duration=79.063µs
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.449368842Z level=info msg="Executing migration" id="Add OAuth access token to user_auth"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.453381939Z level=info msg="Migration successfully executed" id="Add OAuth access token to user_auth" duration=4.011027ms
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.455845276Z level=info msg="Executing migration" id="Add OAuth refresh token to user_auth"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.459931485Z level=info msg="Migration successfully executed" id="Add OAuth refresh token to user_auth" duration=4.084699ms
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.461847066Z level=info msg="Executing migration" id="Add OAuth token type to user_auth"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.466004946Z level=info msg="Migration successfully executed" id="Add OAuth token type to user_auth" duration=4.15624ms
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.468367281Z level=info msg="Executing migration" id="Add OAuth expiry to user_auth"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.473539905Z level=info msg="Migration successfully executed" id="Add OAuth expiry to user_auth" duration=5.173494ms
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.477019154Z level=info msg="Executing migration" id="Add index to user_id column in user_auth"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.477870141Z level=info msg="Migration successfully executed" id="Add index to user_id column in user_auth" duration=851.217µs
Oct  8 05:47:24 np0005475493 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.481343211Z level=info msg="Executing migration" id="Add OAuth ID token to user_auth"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.485533432Z level=info msg="Migration successfully executed" id="Add OAuth ID token to user_auth" duration=4.189161ms
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.48798256Z level=info msg="Executing migration" id="create server_lock table"
Oct  8 05:47:24 np0005475493 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.490743717Z level=info msg="Migration successfully executed" id="create server_lock table" duration=2.754847ms
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.493279638Z level=info msg="Executing migration" id="add index server_lock.operation_uid"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.49402397Z level=info msg="Migration successfully executed" id="add index server_lock.operation_uid" duration=745.033µs
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.497353445Z level=info msg="Executing migration" id="create user auth token table"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.498197132Z level=info msg="Migration successfully executed" id="create user auth token table" duration=843.317µs
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.50035453Z level=info msg="Executing migration" id="add unique index user_auth_token.auth_token"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.501028682Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.auth_token" duration=673.882µs
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.502931681Z level=info msg="Executing migration" id="add unique index user_auth_token.prev_auth_token"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.503681955Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.prev_auth_token" duration=747.574µs
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.505326057Z level=info msg="Executing migration" id="add index user_auth_token.user_id"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.506136123Z level=info msg="Migration successfully executed" id="add index user_auth_token.user_id" duration=809.936µs
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.507917028Z level=info msg="Executing migration" id="Add revoked_at to the user auth token"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.51175501Z level=info msg="Migration successfully executed" id="Add revoked_at to the user auth token" duration=3.837462ms
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.51336742Z level=info msg="Executing migration" id="add index user_auth_token.revoked_at"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.514083863Z level=info msg="Migration successfully executed" id="add index user_auth_token.revoked_at" duration=716.773µs
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.515578321Z level=info msg="Executing migration" id="create cache_data table"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.516268042Z level=info msg="Migration successfully executed" id="create cache_data table" duration=689.491µs
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.518177593Z level=info msg="Executing migration" id="add unique index cache_data.cache_key"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.518888584Z level=info msg="Migration successfully executed" id="add unique index cache_data.cache_key" duration=713.131µs
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.520657611Z level=info msg="Executing migration" id="create short_url table v1"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.521443835Z level=info msg="Migration successfully executed" id="create short_url table v1" duration=787.624µs
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.525532694Z level=info msg="Executing migration" id="add index short_url.org_id-uid"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.526306509Z level=info msg="Migration successfully executed" id="add index short_url.org_id-uid" duration=773.495µs
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.527604999Z level=info msg="Executing migration" id="alter table short_url alter column created_by type to bigint"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.527650551Z level=info msg="Migration successfully executed" id="alter table short_url alter column created_by type to bigint" duration=46.462µs
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.529219651Z level=info msg="Executing migration" id="delete alert_definition table"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.529292383Z level=info msg="Migration successfully executed" id="delete alert_definition table" duration=72.952µs
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.531034288Z level=info msg="Executing migration" id="recreate alert_definition table"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.531800722Z level=info msg="Migration successfully executed" id="recreate alert_definition table" duration=768.574µs
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.534804177Z level=info msg="Executing migration" id="add index in alert_definition on org_id and title columns"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.535786857Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and title columns" duration=982.26µs
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.537669988Z level=info msg="Executing migration" id="add index in alert_definition on org_id and uid columns"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.538420471Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and uid columns" duration=750.282µs
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.540080593Z level=info msg="Executing migration" id="alter alert_definition table data column to mediumtext in mysql"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.540125475Z level=info msg="Migration successfully executed" id="alter alert_definition table data column to mediumtext in mysql" duration=45.342µs
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.541785497Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and title columns"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.542526351Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and title columns" duration=740.693µs
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.544247294Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and uid columns"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.544934746Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and uid columns" duration=687.122µs
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.546536317Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and title columns"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.547454466Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and title columns" duration=917.729µs
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.549124409Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and uid columns"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.549987585Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and uid columns" duration=863.176µs
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.551812764Z level=info msg="Executing migration" id="Add column paused in alert_definition"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.556052547Z level=info msg="Migration successfully executed" id="Add column paused in alert_definition" duration=4.224072ms
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.55930769Z level=info msg="Executing migration" id="drop alert_definition table"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.560154437Z level=info msg="Migration successfully executed" id="drop alert_definition table" duration=846.837µs
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.561679454Z level=info msg="Executing migration" id="delete alert_definition_version table"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.561744157Z level=info msg="Migration successfully executed" id="delete alert_definition_version table" duration=64.983µs
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.563623205Z level=info msg="Executing migration" id="recreate alert_definition_version table"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.564469543Z level=info msg="Migration successfully executed" id="recreate alert_definition_version table" duration=846.207µs
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.567135317Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_id and version columns"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.567921642Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_id and version columns" duration=785.915µs
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.569611715Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_uid and version columns"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.570392789Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_uid and version columns" duration=780.844µs
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.572151015Z level=info msg="Executing migration" id="alter alert_definition_version table data column to mediumtext in mysql"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.572197246Z level=info msg="Migration successfully executed" id="alter alert_definition_version table data column to mediumtext in mysql" duration=46.321µs
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.574808878Z level=info msg="Executing migration" id="drop alert_definition_version table"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.575657625Z level=info msg="Migration successfully executed" id="drop alert_definition_version table" duration=847.997µs
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.577188724Z level=info msg="Executing migration" id="create alert_instance table"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.577873725Z level=info msg="Migration successfully executed" id="create alert_instance table" duration=684.681µs
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.579436305Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, def_uid and current_state columns"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.580181148Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, def_uid and current_state columns" duration=745.113µs
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.582362927Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, current_state columns"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.583282196Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, current_state columns" duration=918.609µs
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.584891657Z level=info msg="Executing migration" id="add column current_state_end to alert_instance"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.589896615Z level=info msg="Migration successfully executed" id="add column current_state_end to alert_instance" duration=5.001617ms
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.591540216Z level=info msg="Executing migration" id="remove index def_org_id, def_uid, current_state on alert_instance"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.592312141Z level=info msg="Migration successfully executed" id="remove index def_org_id, def_uid, current_state on alert_instance" duration=772.005µs
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.594142708Z level=info msg="Executing migration" id="remove index def_org_id, current_state on alert_instance"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.594856451Z level=info msg="Migration successfully executed" id="remove index def_org_id, current_state on alert_instance" duration=715.963µs
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.59705609Z level=info msg="Executing migration" id="rename def_org_id to rule_org_id in alert_instance"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.61889701Z level=info msg="Migration successfully executed" id="rename def_org_id to rule_org_id in alert_instance" duration=21.855251ms
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.623883757Z level=info msg="Executing migration" id="rename def_uid to rule_uid in alert_instance"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.643899098Z level=info msg="Migration successfully executed" id="rename def_uid to rule_uid in alert_instance" duration=20.013131ms
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.645741486Z level=info msg="Executing migration" id="add index rule_org_id, rule_uid, current_state on alert_instance"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.646608904Z level=info msg="Migration successfully executed" id="add index rule_org_id, rule_uid, current_state on alert_instance" duration=868.048µs
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.648058609Z level=info msg="Executing migration" id="add index rule_org_id, current_state on alert_instance"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.648745911Z level=info msg="Migration successfully executed" id="add index rule_org_id, current_state on alert_instance" duration=687.512µs
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.650291669Z level=info msg="Executing migration" id="add current_reason column related to current_state"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.654271635Z level=info msg="Migration successfully executed" id="add current_reason column related to current_state" duration=3.979396ms
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.656560197Z level=info msg="Executing migration" id="add result_fingerprint column to alert_instance"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.660699948Z level=info msg="Migration successfully executed" id="add result_fingerprint column to alert_instance" duration=4.141631ms
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.663711273Z level=info msg="Executing migration" id="create alert_rule table"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.664628652Z level=info msg="Migration successfully executed" id="create alert_rule table" duration=919.879µs
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.666810131Z level=info msg="Executing migration" id="add index in alert_rule on org_id and title columns"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.667664128Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and title columns" duration=856.616µs
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.669591929Z level=info msg="Executing migration" id="add index in alert_rule on org_id and uid columns"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.670542948Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and uid columns" duration=954.259µs
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.672301324Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespace_uid, group_uid columns"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.67314262Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespace_uid, group_uid columns" duration=841.276µs
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.675535676Z level=info msg="Executing migration" id="alter alert_rule table data column to mediumtext in mysql"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.675597468Z level=info msg="Migration successfully executed" id="alter alert_rule table data column to mediumtext in mysql" duration=62.472µs
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.679588454Z level=info msg="Executing migration" id="add column for to alert_rule"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.683986642Z level=info msg="Migration successfully executed" id="add column for to alert_rule" duration=4.397578ms
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.687494303Z level=info msg="Executing migration" id="add column annotations to alert_rule"
Oct  8 05:47:24 np0005475493 systemd[1]: Starting Ceph haproxy.rgw.default.compute-0.zadvee for 787292cc-8154-50c4-9e00-e9be3e817149...
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.698411557Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule" duration=10.917214ms
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.701291058Z level=info msg="Executing migration" id="add column labels to alert_rule"
Oct  8 05:47:24 np0005475493 python3[98435]: ansible-ansible.builtin.get_url Invoked with url=http://192.168.122.100:8443 dest=/tmp/dash_response mode=0644 validate_certs=False force=False http_agent=ansible-httpget use_proxy=True force_basic_auth=False use_gssapi=False backup=False checksum= timeout=10 unredirected_headers=[] decompress=True use_netrc=True unsafe_writes=False url_username=None url_password=NOT_LOGGING_PARAMETER client_cert=None client_key=None headers=None tmp_dest=None ciphers=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.711641875Z level=info msg="Migration successfully executed" id="add column labels to alert_rule" duration=10.350247ms
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.713937998Z level=info msg="Executing migration" id="remove unique index from alert_rule on org_id, title columns"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.715746734Z level=info msg="Migration successfully executed" id="remove unique index from alert_rule on org_id, title columns" duration=1.806126ms
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.718646096Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespase_uid and title columns"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.720534675Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespase_uid and title columns" duration=1.887289ms
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.722838959Z level=info msg="Executing migration" id="add dashboard_uid column to alert_rule"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.732306737Z level=info msg="Migration successfully executed" id="add dashboard_uid column to alert_rule" duration=9.461308ms
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.734706292Z level=info msg="Executing migration" id="add panel_id column to alert_rule"
Oct  8 05:47:24 np0005475493 ceph-mon[73572]: Deploying daemon haproxy.rgw.default.compute-0.zadvee on compute-0
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.751607955Z level=info msg="Migration successfully executed" id="add panel_id column to alert_rule" duration=16.897423ms
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.755006903Z level=info msg="Executing migration" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.75713261Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns" duration=2.126276ms
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.760265639Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.769698476Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule" duration=9.432108ms
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.772018989Z level=info msg="Executing migration" id="add is_paused column to alert_rule table"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.778308848Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule table" duration=6.289419ms
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.780474136Z level=info msg="Executing migration" id="fix is_paused column for alert_rule table"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.780538018Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule table" duration=64.762µs
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.783390658Z level=info msg="Executing migration" id="create alert_rule_version table"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.786392163Z level=info msg="Migration successfully executed" id="create alert_rule_version table" duration=3.003545ms
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.789323766Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.791821034Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns" duration=2.496029ms
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.795918533Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.798299719Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns" duration=2.379646ms
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.801353704Z level=info msg="Executing migration" id="alter alert_rule_version table data column to mediumtext in mysql"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.801667074Z level=info msg="Migration successfully executed" id="alter alert_rule_version table data column to mediumtext in mysql" duration=311.02µs
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.804749432Z level=info msg="Executing migration" id="add column for to alert_rule_version"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.810950827Z level=info msg="Migration successfully executed" id="add column for to alert_rule_version" duration=6.196256ms
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.812964191Z level=info msg="Executing migration" id="add column annotations to alert_rule_version"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.818915909Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule_version" duration=5.951097ms
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.82085306Z level=info msg="Executing migration" id="add column labels to alert_rule_version"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.826872309Z level=info msg="Migration successfully executed" id="add column labels to alert_rule_version" duration=6.018869ms
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.828744079Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule_version"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.834727928Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule_version" duration=5.983349ms
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.836756402Z level=info msg="Executing migration" id="add is_paused column to alert_rule_versions table"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.842891615Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule_versions table" duration=6.133923ms
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.845122736Z level=info msg="Executing migration" id="fix is_paused column for alert_rule_version table"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.845307251Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule_version table" duration=184.766µs
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.847353786Z level=info msg="Executing migration" id=create_alert_configuration_table
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.848330927Z level=info msg="Migration successfully executed" id=create_alert_configuration_table duration=976.681µs
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.850313199Z level=info msg="Executing migration" id="Add column default in alert_configuration"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.856412822Z level=info msg="Migration successfully executed" id="Add column default in alert_configuration" duration=6.098723ms
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.858560399Z level=info msg="Executing migration" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql"
Oct  8 05:47:24 np0005475493 ceph-mgr[73869]: [dashboard INFO request] [192.168.122.100:40840] [GET] [200] [0.116s] [6.3K] [75ce02be-8930-488b-9b75-1f211d459076] /
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.858741995Z level=info msg="Migration successfully executed" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql" duration=181.736µs
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.860653435Z level=info msg="Executing migration" id="add column org_id in alert_configuration"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.867678627Z level=info msg="Migration successfully executed" id="add column org_id in alert_configuration" duration=7.021562ms
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.869897947Z level=info msg="Executing migration" id="add index in alert_configuration table on org_id column"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.870963711Z level=info msg="Migration successfully executed" id="add index in alert_configuration table on org_id column" duration=1.068744ms
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.872993985Z level=info msg="Executing migration" id="add configuration_hash column to alert_configuration"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.879721296Z level=info msg="Migration successfully executed" id="add configuration_hash column to alert_configuration" duration=6.725061ms
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.88171395Z level=info msg="Executing migration" id=create_ngalert_configuration_table
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.882402222Z level=info msg="Migration successfully executed" id=create_ngalert_configuration_table duration=689.102µs
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.884839208Z level=info msg="Executing migration" id="add index in ngalert_configuration on org_id column"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.885582371Z level=info msg="Migration successfully executed" id="add index in ngalert_configuration on org_id column" duration=743.273µs
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.887096839Z level=info msg="Executing migration" id="add column send_alerts_to in ngalert_configuration"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.891464887Z level=info msg="Migration successfully executed" id="add column send_alerts_to in ngalert_configuration" duration=4.367838ms
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.89313686Z level=info msg="Executing migration" id="create provenance_type table"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:47:24 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6614003f10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.893771459Z level=info msg="Migration successfully executed" id="create provenance_type table" duration=640.679µs
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.895312849Z level=info msg="Executing migration" id="add index to uniquify (record_key, record_type, org_id) columns"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.896115604Z level=info msg="Migration successfully executed" id="add index to uniquify (record_key, record_type, org_id) columns" duration=802.845µs
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.897781476Z level=info msg="Executing migration" id="create alert_image table"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.898472858Z level=info msg="Migration successfully executed" id="create alert_image table" duration=691.522µs
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.899997166Z level=info msg="Executing migration" id="add unique index on token to alert_image table"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.900767601Z level=info msg="Migration successfully executed" id="add unique index on token to alert_image table" duration=769.995µs
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.902250377Z level=info msg="Executing migration" id="support longer URLs in alert_image table"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.90233373Z level=info msg="Migration successfully executed" id="support longer URLs in alert_image table" duration=83.603µs
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.903888119Z level=info msg="Executing migration" id=create_alert_configuration_history_table
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.904666184Z level=info msg="Migration successfully executed" id=create_alert_configuration_history_table duration=778.045µs
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.90612628Z level=info msg="Executing migration" id="drop non-unique orgID index on alert_configuration"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.906901024Z level=info msg="Migration successfully executed" id="drop non-unique orgID index on alert_configuration" duration=774.754µs
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.90836996Z level=info msg="Executing migration" id="drop unique orgID index on alert_configuration if exists"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.90868149Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop unique orgID index on alert_configuration if exists"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.910810237Z level=info msg="Executing migration" id="extract alertmanager configuration history to separate table"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.911302153Z level=info msg="Migration successfully executed" id="extract alertmanager configuration history to separate table" duration=491.736µs
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.913003807Z level=info msg="Executing migration" id="add unique index on orgID to alert_configuration"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.913941856Z level=info msg="Migration successfully executed" id="add unique index on orgID to alert_configuration" duration=937.999µs
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.915733163Z level=info msg="Executing migration" id="add last_applied column to alert_configuration_history"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.922006Z level=info msg="Migration successfully executed" id="add last_applied column to alert_configuration_history" duration=6.267527ms
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.924602832Z level=info msg="Executing migration" id="create library_element table v1"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.925995206Z level=info msg="Migration successfully executed" id="create library_element table v1" duration=1.390334ms
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.928118653Z level=info msg="Executing migration" id="add index library_element org_id-folder_id-name-kind"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.929521308Z level=info msg="Migration successfully executed" id="add index library_element org_id-folder_id-name-kind" duration=1.399485ms
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.931216671Z level=info msg="Executing migration" id="create library_element_connection table v1"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.932180202Z level=info msg="Migration successfully executed" id="create library_element_connection table v1" duration=963.741µs
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.93561209Z level=info msg="Executing migration" id="add index library_element_connection element_id-kind-connection_id"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.936745685Z level=info msg="Migration successfully executed" id="add index library_element_connection element_id-kind-connection_id" duration=1.133515ms
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.938543462Z level=info msg="Executing migration" id="add unique index library_element org_id_uid"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.939705229Z level=info msg="Migration successfully executed" id="add unique index library_element org_id_uid" duration=1.161207ms
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.941456954Z level=info msg="Executing migration" id="increase max description length to 2048"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.941488885Z level=info msg="Migration successfully executed" id="increase max description length to 2048" duration=32.811µs
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.943828329Z level=info msg="Executing migration" id="alter library_element model to mediumtext"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.943910122Z level=info msg="Migration successfully executed" id="alter library_element model to mediumtext" duration=82.183µs
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.946368699Z level=info msg="Executing migration" id="clone move dashboard alerts to unified alerting"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.94669238Z level=info msg="Migration successfully executed" id="clone move dashboard alerts to unified alerting" duration=323.501µs
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.948633601Z level=info msg="Executing migration" id="create data_keys table"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.949741455Z level=info msg="Migration successfully executed" id="create data_keys table" duration=1.108505ms
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.95177663Z level=info msg="Executing migration" id="create secrets table"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.952649157Z level=info msg="Migration successfully executed" id="create secrets table" duration=872.427µs
Oct  8 05:47:24 np0005475493 podman[98488]: 2025-10-08 09:47:24.955646491 +0000 UTC m=+0.050198374 container create 8c1b83f4045183ce85a8a1c015c338bfd7d7cdf70eee65cf19c9e353ede24b18 (image=quay.io/ceph/haproxy:2.3, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-haproxy-rgw-default-compute-0-zadvee)
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.95590603Z level=info msg="Executing migration" id="rename data_keys name column to id"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.989968384Z level=info msg="Migration successfully executed" id="rename data_keys name column to id" duration=34.058624ms
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.991569845Z level=info msg="Executing migration" id="add name column into data_keys"
Oct  8 05:47:24 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb94876679d3a90106e8c2f3621edec27290aaed4850928ee71a79ddaebfd34b/merged/var/lib/haproxy supports timestamps until 2038 (0x7fffffff)
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.996422238Z level=info msg="Migration successfully executed" id="add name column into data_keys" duration=4.852193ms
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.999732063Z level=info msg="Executing migration" id="copy data_keys id column values into name"
Oct  8 05:47:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:24.999841916Z level=info msg="Migration successfully executed" id="copy data_keys id column values into name" duration=110.203µs
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.001475717Z level=info msg="Executing migration" id="rename data_keys name column to label"
Oct  8 05:47:25 np0005475493 podman[98488]: 2025-10-08 09:47:25.00662733 +0000 UTC m=+0.101179303 container init 8c1b83f4045183ce85a8a1c015c338bfd7d7cdf70eee65cf19c9e353ede24b18 (image=quay.io/ceph/haproxy:2.3, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-haproxy-rgw-default-compute-0-zadvee)
Oct  8 05:47:25 np0005475493 podman[98488]: 2025-10-08 09:47:25.012709482 +0000 UTC m=+0.107261395 container start 8c1b83f4045183ce85a8a1c015c338bfd7d7cdf70eee65cf19c9e353ede24b18 (image=quay.io/ceph/haproxy:2.3, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-haproxy-rgw-default-compute-0-zadvee)
Oct  8 05:47:25 np0005475493 bash[98488]: 8c1b83f4045183ce85a8a1c015c338bfd7d7cdf70eee65cf19c9e353ede24b18
Oct  8 05:47:25 np0005475493 podman[98488]: 2025-10-08 09:47:24.932674108 +0000 UTC m=+0.027225991 image pull e85424b0d443f37ddd2dd8a3bb2ef6f18dd352b987723a921b64289023af2914 quay.io/ceph/haproxy:2.3
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-haproxy-rgw-default-compute-0-zadvee[98503]: [NOTICE] 280/094725 (2) : New worker #1 (4) forked
Oct  8 05:47:25 np0005475493 systemd[1]: Started Ceph haproxy.rgw.default.compute-0.zadvee for 787292cc-8154-50c4-9e00-e9be3e817149.
Oct  8 05:47:25 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.026977292Z level=info msg="Migration successfully executed" id="rename data_keys name column to label" duration=25.496845ms
Oct  8 05:47:25 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 05:47:25 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:47:25.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.028871971Z level=info msg="Executing migration" id="rename data_keys id column back to name"
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.05674708Z level=info msg="Migration successfully executed" id="rename data_keys id column back to name" duration=27.884419ms
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.058566149Z level=info msg="Executing migration" id="create kv_store table v1"
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.059546749Z level=info msg="Migration successfully executed" id="create kv_store table v1" duration=980.201µs
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.061730097Z level=info msg="Executing migration" id="add index kv_store.org_id-namespace-key"
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.062677128Z level=info msg="Migration successfully executed" id="add index kv_store.org_id-namespace-key" duration=946.151µs
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.064300029Z level=info msg="Executing migration" id="update dashboard_uid and panel_id from existing annotations"
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.064469654Z level=info msg="Migration successfully executed" id="update dashboard_uid and panel_id from existing annotations" duration=170.575µs
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.066067575Z level=info msg="Executing migration" id="create permission table"
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.066828869Z level=info msg="Migration successfully executed" id="create permission table" duration=761.364µs
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.068727849Z level=info msg="Executing migration" id="add unique index permission.role_id"
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.069510274Z level=info msg="Migration successfully executed" id="add unique index permission.role_id" duration=782.795µs
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.071125564Z level=info msg="Executing migration" id="add unique index role_id_action_scope"
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.071985622Z level=info msg="Migration successfully executed" id="add unique index role_id_action_scope" duration=860.088µs
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.073486228Z level=info msg="Executing migration" id="create role table"
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.074279124Z level=info msg="Migration successfully executed" id="create role table" duration=792.526µs
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.075680727Z level=info msg="Executing migration" id="add column display_name"
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.081804181Z level=info msg="Migration successfully executed" id="add column display_name" duration=6.121274ms
Oct  8 05:47:25 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.083580777Z level=info msg="Executing migration" id="add column group_name"
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.088803722Z level=info msg="Migration successfully executed" id="add column group_name" duration=5.221275ms
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.090512576Z level=info msg="Executing migration" id="add index role.org_id"
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.091439415Z level=info msg="Migration successfully executed" id="add index role.org_id" duration=926.439µs
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.093764778Z level=info msg="Executing migration" id="add unique index role_org_id_name"
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.09476582Z level=info msg="Migration successfully executed" id="add unique index role_org_id_name" duration=1.000492ms
Oct  8 05:47:25 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.096396141Z level=info msg="Executing migration" id="add index role_org_id_uid"
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.097439975Z level=info msg="Migration successfully executed" id="add index role_org_id_uid" duration=1.040713ms
Oct  8 05:47:25 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.099674015Z level=info msg="Executing migration" id="create team role table"
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.100685787Z level=info msg="Migration successfully executed" id="create team role table" duration=1.011922ms
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.104121755Z level=info msg="Executing migration" id="add index team_role.org_id"
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.105076075Z level=info msg="Migration successfully executed" id="add index team_role.org_id" duration=951.41µs
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.108017768Z level=info msg="Executing migration" id="add unique index team_role_org_id_team_id_role_id"
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.109605928Z level=info msg="Migration successfully executed" id="add unique index team_role_org_id_team_id_role_id" duration=1.58731ms
Oct  8 05:47:25 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.112904122Z level=info msg="Executing migration" id="add index team_role.team_id"
Oct  8 05:47:25 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0)
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.11410555Z level=info msg="Migration successfully executed" id="add index team_role.team_id" duration=1.204548ms
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.115678389Z level=info msg="Executing migration" id="create user role table"
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.116472935Z level=info msg="Migration successfully executed" id="create user role table" duration=794.396µs
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.118217569Z level=info msg="Executing migration" id="add index user_role.org_id"
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.120367958Z level=info msg="Migration successfully executed" id="add index user_role.org_id" duration=2.148269ms
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.123531497Z level=info msg="Executing migration" id="add unique index user_role_org_id_user_id_role_id"
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.126119019Z level=info msg="Migration successfully executed" id="add unique index user_role_org_id_user_id_role_id" duration=2.586561ms
Oct  8 05:47:25 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.128257577Z level=info msg="Executing migration" id="add index user_role.user_id"
Oct  8 05:47:25 np0005475493 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Deploying daemon haproxy.rgw.default.compute-2.frbwni on compute-2
Oct  8 05:47:25 np0005475493 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Deploying daemon haproxy.rgw.default.compute-2.frbwni on compute-2
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.130254199Z level=info msg="Migration successfully executed" id="add index user_role.user_id" duration=1.996522ms
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.133416789Z level=info msg="Executing migration" id="create builtin role table"
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.134902566Z level=info msg="Migration successfully executed" id="create builtin role table" duration=1.486117ms
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.137212509Z level=info msg="Executing migration" id="add index builtin_role.role_id"
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.139168671Z level=info msg="Migration successfully executed" id="add index builtin_role.role_id" duration=1.953422ms
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.141177844Z level=info msg="Executing migration" id="add index builtin_role.name"
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.143378383Z level=info msg="Migration successfully executed" id="add index builtin_role.name" duration=2.201189ms
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.147149262Z level=info msg="Executing migration" id="Add column org_id to builtin_role table"
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.161410002Z level=info msg="Migration successfully executed" id="Add column org_id to builtin_role table" duration=14.26168ms
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.163748816Z level=info msg="Executing migration" id="add index builtin_role.org_id"
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.165263073Z level=info msg="Migration successfully executed" id="add index builtin_role.org_id" duration=1.514177ms
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.166903496Z level=info msg="Executing migration" id="add unique index builtin_role_org_id_role_id_role"
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.168272108Z level=info msg="Migration successfully executed" id="add unique index builtin_role_org_id_role_id_role" duration=1.367982ms
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.171201681Z level=info msg="Executing migration" id="Remove unique index role_org_id_uid"
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.172460831Z level=info msg="Migration successfully executed" id="Remove unique index role_org_id_uid" duration=1.25794ms
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.174178415Z level=info msg="Executing migration" id="add unique index role.uid"
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.175368863Z level=info msg="Migration successfully executed" id="add unique index role.uid" duration=1.191998ms
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.177093557Z level=info msg="Executing migration" id="create seed assignment table"
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.177906503Z level=info msg="Migration successfully executed" id="create seed assignment table" duration=814.185µs
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.180787883Z level=info msg="Executing migration" id="add unique index builtin_role_role_name"
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.181997842Z level=info msg="Migration successfully executed" id="add unique index builtin_role_role_name" duration=1.209499ms
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.186314458Z level=info msg="Executing migration" id="add column hidden to role table"
Oct  8 05:47:25 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v50: 353 pgs: 353 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.196313603Z level=info msg="Migration successfully executed" id="add column hidden to role table" duration=9.996135ms
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.198290466Z level=info msg="Executing migration" id="permission kind migration"
Oct  8 05:47:25 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".nfs", "var": "pgp_num_actual", "val": "32"} v 0)
Oct  8 05:47:25 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct  8 05:47:25 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"} v 0)
Oct  8 05:47:25 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct  8 05:47:25 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"} v 0)
Oct  8 05:47:25 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct  8 05:47:25 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"} v 0)
Oct  8 05:47:25 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Oct  8 05:47:25 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"} v 0)
Oct  8 05:47:25 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.209228481Z level=info msg="Migration successfully executed" id="permission kind migration" duration=10.933875ms
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.211891615Z level=info msg="Executing migration" id="permission attribute migration"
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.217674987Z level=info msg="Migration successfully executed" id="permission attribute migration" duration=5.781602ms
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.219723272Z level=info msg="Executing migration" id="permission identifier migration"
Oct  8 05:47:25 np0005475493 python3[98541]: ansible-ansible.builtin.get_url Invoked with url=http://192.168.122.100:8443 dest=/tmp/dash_http_response mode=0644 validate_certs=False username=VALUE_SPECIFIED_IN_NO_LOG_PARAMETER password=NOT_LOGGING_PARAMETER url_username=VALUE_SPECIFIED_IN_NO_LOG_PARAMETER url_password=NOT_LOGGING_PARAMETER force=False http_agent=ansible-httpget use_proxy=True force_basic_auth=False use_gssapi=False backup=False checksum= timeout=10 unredirected_headers=[] decompress=True use_netrc=True unsafe_writes=False client_cert=None client_key=None headers=None tmp_dest=None ciphers=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.227478696Z level=info msg="Migration successfully executed" id="permission identifier migration" duration=7.746705ms
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.229378767Z level=info msg="Executing migration" id="add permission identifier index"
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.23046994Z level=info msg="Migration successfully executed" id="add permission identifier index" duration=1.090384ms
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.232133423Z level=info msg="Executing migration" id="add permission action scope role_id index"
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.233457145Z level=info msg="Migration successfully executed" id="add permission action scope role_id index" duration=1.322982ms
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.235247801Z level=info msg="Executing migration" id="remove permission role_id action scope index"
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.236593274Z level=info msg="Migration successfully executed" id="remove permission role_id action scope index" duration=1.345853ms
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:47:25 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f66000016a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.238268426Z level=info msg="Executing migration" id="create query_history table v1"
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.239114383Z level=info msg="Migration successfully executed" id="create query_history table v1" duration=846.887µs
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.240647802Z level=info msg="Executing migration" id="add index query_history.org_id-created_by-datasource_uid"
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.241605242Z level=info msg="Migration successfully executed" id="add index query_history.org_id-created_by-datasource_uid" duration=956.129µs
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.243096919Z level=info msg="Executing migration" id="alter table query_history alter column created_by type to bigint"
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.243160781Z level=info msg="Migration successfully executed" id="alter table query_history alter column created_by type to bigint" duration=59.632µs
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.245209446Z level=info msg="Executing migration" id="rbac disabled migrator"
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.245253657Z level=info msg="Migration successfully executed" id="rbac disabled migrator" duration=42.142µs
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.247058593Z level=info msg="Executing migration" id="teams permissions migration"
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.247621751Z level=info msg="Migration successfully executed" id="teams permissions migration" duration=564.198µs
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.249238752Z level=info msg="Executing migration" id="dashboard permissions"
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.249846221Z level=info msg="Migration successfully executed" id="dashboard permissions" duration=608.579µs
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.251404151Z level=info msg="Executing migration" id="dashboard permissions uid scopes"
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.251934688Z level=info msg="Migration successfully executed" id="dashboard permissions uid scopes" duration=530.637µs
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.253392714Z level=info msg="Executing migration" id="drop managed folder create actions"
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.253542158Z level=info msg="Migration successfully executed" id="drop managed folder create actions" duration=149.694µs
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.254849679Z level=info msg="Executing migration" id="alerting notification permissions"
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.255370846Z level=info msg="Migration successfully executed" id="alerting notification permissions" duration=521.297µs
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.256785201Z level=info msg="Executing migration" id="create query_history_star table v1"
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.257477873Z level=info msg="Migration successfully executed" id="create query_history_star table v1" duration=692.562µs
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.259169716Z level=info msg="Executing migration" id="add index query_history.user_id-query_uid"
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.260306282Z level=info msg="Migration successfully executed" id="add index query_history.user_id-query_uid" duration=1.117015ms
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.262636105Z level=info msg="Executing migration" id="add column org_id in query_history_star"
Oct  8 05:47:25 np0005475493 ceph-mgr[73869]: [dashboard INFO request] [192.168.122.100:40848] [GET] [200] [0.002s] [6.3K] [aa7dbdf5-1f88-42de-8182-57c808bf6b3b] /
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.268756749Z level=info msg="Migration successfully executed" id="add column org_id in query_history_star" duration=6.119654ms
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.270651698Z level=info msg="Executing migration" id="alter table query_history_star_mig column user_id type to bigint"
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.270727861Z level=info msg="Migration successfully executed" id="alter table query_history_star_mig column user_id type to bigint" duration=76.863µs
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.27231539Z level=info msg="Executing migration" id="create correlation table v1"
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.273164417Z level=info msg="Migration successfully executed" id="create correlation table v1" duration=846.197µs
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.274783438Z level=info msg="Executing migration" id="add index correlations.uid"
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.275628815Z level=info msg="Migration successfully executed" id="add index correlations.uid" duration=844.997µs
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.277171554Z level=info msg="Executing migration" id="add index correlations.source_uid"
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.27800655Z level=info msg="Migration successfully executed" id="add index correlations.source_uid" duration=835.056µs
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.280015094Z level=info msg="Executing migration" id="add correlation config column"
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.285860387Z level=info msg="Migration successfully executed" id="add correlation config column" duration=5.844443ms
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.287647564Z level=info msg="Executing migration" id="drop index IDX_correlation_uid - v1"
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.288527591Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_uid - v1" duration=879.787µs
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.290606978Z level=info msg="Executing migration" id="drop index IDX_correlation_source_uid - v1"
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.293571801Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_source_uid - v1" duration=2.965603ms
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.295974607Z level=info msg="Executing migration" id="Rename table correlation to correlation_tmp_qwerty - v1"
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.336605558Z level=info msg="Migration successfully executed" id="Rename table correlation to correlation_tmp_qwerty - v1" duration=40.625791ms
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.339351485Z level=info msg="Executing migration" id="create correlation v2"
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.34077914Z level=info msg="Migration successfully executed" id="create correlation v2" duration=1.427895ms
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.342722151Z level=info msg="Executing migration" id="create index IDX_correlation_uid - v2"
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.343899489Z level=info msg="Migration successfully executed" id="create index IDX_correlation_uid - v2" duration=1.176848ms
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.34583247Z level=info msg="Executing migration" id="create index IDX_correlation_source_uid - v2"
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.347166162Z level=info msg="Migration successfully executed" id="create index IDX_correlation_source_uid - v2" duration=1.333843ms
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.34964765Z level=info msg="Executing migration" id="create index IDX_correlation_org_id - v2"
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.35156039Z level=info msg="Migration successfully executed" id="create index IDX_correlation_org_id - v2" duration=1.91337ms
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.353918274Z level=info msg="Executing migration" id="copy correlation v1 to v2"
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.354383849Z level=info msg="Migration successfully executed" id="copy correlation v1 to v2" duration=469.575µs
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.356399693Z level=info msg="Executing migration" id="drop correlation_tmp_qwerty"
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.35788379Z level=info msg="Migration successfully executed" id="drop correlation_tmp_qwerty" duration=1.483877ms
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.364753656Z level=info msg="Executing migration" id="add provisioning column"
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.37724039Z level=info msg="Migration successfully executed" id="add provisioning column" duration=12.481844ms
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.389074344Z level=info msg="Executing migration" id="create entity_events table"
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.390196139Z level=info msg="Migration successfully executed" id="create entity_events table" duration=1.129045ms
Oct  8 05:47:25 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 8.10 scrub starts
Oct  8 05:47:25 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 8.10 scrub ok
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:47:25 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f80018b0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.455128297Z level=info msg="Executing migration" id="create dashboard public config v1"
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.457711889Z level=info msg="Migration successfully executed" id="create dashboard public config v1" duration=2.586632ms
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.475119588Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v1"
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.475680756Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index UQE_dashboard_public_config_uid - v1"
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.495270344Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1"
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.496294035Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1"
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.518809676Z level=info msg="Executing migration" id="Drop old dashboard public config table"
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.521102608Z level=info msg="Migration successfully executed" id="Drop old dashboard public config table" duration=2.295273ms
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.578274542Z level=info msg="Executing migration" id="recreate dashboard public config v1"
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.580778421Z level=info msg="Migration successfully executed" id="recreate dashboard public config v1" duration=2.506198ms
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.585415927Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v1"
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.587739021Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v1" duration=2.323255ms
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.590666093Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1"
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.592836781Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" duration=2.169998ms
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.5953265Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v2"
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.59724933Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_public_config_uid - v2" duration=1.92111ms
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.604550031Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2"
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.606938766Z level=info msg="Migration successfully executed" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=2.389295ms
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.608996541Z level=info msg="Executing migration" id="Drop public config table"
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.609837688Z level=info msg="Migration successfully executed" id="Drop public config table" duration=843.546µs
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.611635294Z level=info msg="Executing migration" id="Recreate dashboard public config v2"
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.612530203Z level=info msg="Migration successfully executed" id="Recreate dashboard public config v2" duration=895.008µs
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.614394541Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v2"
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.615904249Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v2" duration=1.509628ms
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.618597524Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2"
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.619822602Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=1.224918ms
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.622378913Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_access_token - v2"
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.623891231Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_access_token - v2" duration=1.507888ms
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.626121491Z level=info msg="Executing migration" id="Rename table dashboard_public_config to dashboard_public - v2"
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.657243583Z level=info msg="Migration successfully executed" id="Rename table dashboard_public_config to dashboard_public - v2" duration=31.098741ms
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.711365969Z level=info msg="Executing migration" id="add annotations_enabled column"
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.725448183Z level=info msg="Migration successfully executed" id="add annotations_enabled column" duration=14.080954ms
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.727584942Z level=info msg="Executing migration" id="add time_selection_enabled column"
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.736991068Z level=info msg="Migration successfully executed" id="add time_selection_enabled column" duration=9.405396ms
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.739078354Z level=info msg="Executing migration" id="delete orphaned public dashboards"
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.739347152Z level=info msg="Migration successfully executed" id="delete orphaned public dashboards" duration=268.358µs
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.741306874Z level=info msg="Executing migration" id="add share column"
Oct  8 05:47:25 np0005475493 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:47:25 np0005475493 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:47:25 np0005475493 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:47:25 np0005475493 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct  8 05:47:25 np0005475493 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct  8 05:47:25 np0005475493 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct  8 05:47:25 np0005475493 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Oct  8 05:47:25 np0005475493 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.751270278Z level=info msg="Migration successfully executed" id="add share column" duration=9.961934ms
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.753069726Z level=info msg="Executing migration" id="backfill empty share column fields with default of public"
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.753321493Z level=info msg="Migration successfully executed" id="backfill empty share column fields with default of public" duration=285.379µs
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.788891526Z level=info msg="Executing migration" id="create file table"
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.790203226Z level=info msg="Migration successfully executed" id="create file table" duration=1.311831ms
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.79187708Z level=info msg="Executing migration" id="file table idx: path natural pk"
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.792745766Z level=info msg="Migration successfully executed" id="file table idx: path natural pk" duration=869.186µs
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.794560734Z level=info msg="Executing migration" id="file table idx: parent_folder_path_hash fast folder retrieval"
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.79539873Z level=info msg="Migration successfully executed" id="file table idx: parent_folder_path_hash fast folder retrieval" duration=838.006µs
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.797474406Z level=info msg="Executing migration" id="create file_meta table"
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.79857309Z level=info msg="Migration successfully executed" id="create file_meta table" duration=1.098694ms
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.800208832Z level=info msg="Executing migration" id="file table idx: path key"
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.801495052Z level=info msg="Migration successfully executed" id="file table idx: path key" duration=1.28496ms
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.803370602Z level=info msg="Executing migration" id="set path collation in file table"
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.803439174Z level=info msg="Migration successfully executed" id="set path collation in file table" duration=69.202µs
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.805545721Z level=info msg="Executing migration" id="migrate contents column to mediumblob for MySQL"
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.805607593Z level=info msg="Migration successfully executed" id="migrate contents column to mediumblob for MySQL" duration=62.562µs
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.809308779Z level=info msg="Executing migration" id="managed permissions migration"
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.809892177Z level=info msg="Migration successfully executed" id="managed permissions migration" duration=584.348µs
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.811681024Z level=info msg="Executing migration" id="managed folder permissions alert actions migration"
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.811896751Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions migration" duration=216.067µs
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.813860983Z level=info msg="Executing migration" id="RBAC action name migrator"
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.814928346Z level=info msg="Migration successfully executed" id="RBAC action name migrator" duration=1.065813ms
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.849884229Z level=info msg="Executing migration" id="Add UID column to playlist"
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.859476782Z level=info msg="Migration successfully executed" id="Add UID column to playlist" duration=9.595053ms
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.887250018Z level=info msg="Executing migration" id="Update uid column values in playlist"
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.887409693Z level=info msg="Migration successfully executed" id="Update uid column values in playlist" duration=161.585µs
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.889286642Z level=info msg="Executing migration" id="Add index for uid in playlist"
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.890842021Z level=info msg="Migration successfully executed" id="Add index for uid in playlist" duration=1.553698ms
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.892847255Z level=info msg="Executing migration" id="update group index for alert rules"
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.893303838Z level=info msg="Migration successfully executed" id="update group index for alert rules" duration=458.094µs
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.89622001Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated migration"
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.896478198Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated migration" duration=258.788µs
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.898440201Z level=info msg="Executing migration" id="admin only folder/dashboard permission"
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.898946266Z level=info msg="Migration successfully executed" id="admin only folder/dashboard permission" duration=506.595µs
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.900901878Z level=info msg="Executing migration" id="add action column to seed_assignment"
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.9104816Z level=info msg="Migration successfully executed" id="add action column to seed_assignment" duration=9.577352ms
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.912489324Z level=info msg="Executing migration" id="add scope column to seed_assignment"
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.923092048Z level=info msg="Migration successfully executed" id="add scope column to seed_assignment" duration=10.601784ms
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.92506494Z level=info msg="Executing migration" id="remove unique index builtin_role_role_name before nullable update"
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.926506006Z level=info msg="Migration successfully executed" id="remove unique index builtin_role_role_name before nullable update" duration=1.440646ms
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:25.931423341Z level=info msg="Executing migration" id="update seed_assignment role_name column to nullable"
Oct  8 05:47:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[97382]: ts=2025-10-08T09:47:25.998Z caller=cluster.go:698 level=info component=cluster msg="gossip settled; proceeding" elapsed=10.003404702s
Oct  8 05:47:26 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:26.017824526Z level=info msg="Migration successfully executed" id="update seed_assignment role_name column to nullable" duration=86.397225ms
Oct  8 05:47:26 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:26.020814691Z level=info msg="Executing migration" id="add unique index builtin_role_name back"
Oct  8 05:47:26 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:26.021803752Z level=info msg="Migration successfully executed" id="add unique index builtin_role_name back" duration=988.091µs
Oct  8 05:47:26 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:26.061804583Z level=info msg="Executing migration" id="add unique index builtin_role_action_scope"
Oct  8 05:47:26 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:26.06296092Z level=info msg="Migration successfully executed" id="add unique index builtin_role_action_scope" duration=1.157987ms
Oct  8 05:47:26 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:26.064670914Z level=info msg="Executing migration" id="add primary key to seed_assigment"
Oct  8 05:47:26 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:26.090421206Z level=info msg="Migration successfully executed" id="add primary key to seed_assigment" duration=25.745842ms
Oct  8 05:47:26 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:26.097922583Z level=info msg="Executing migration" id="add origin column to seed_assignment"
Oct  8 05:47:26 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:26.104448659Z level=info msg="Migration successfully executed" id="add origin column to seed_assignment" duration=6.523566ms
Oct  8 05:47:26 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:26.106659298Z level=info msg="Executing migration" id="add origin to plugin seed_assignment"
Oct  8 05:47:26 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:26.106928307Z level=info msg="Migration successfully executed" id="add origin to plugin seed_assignment" duration=268.739µs
Oct  8 05:47:26 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:26.110092307Z level=info msg="Executing migration" id="prevent seeding OnCall access"
Oct  8 05:47:26 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:26.110245372Z level=info msg="Migration successfully executed" id="prevent seeding OnCall access" duration=153.155µs
Oct  8 05:47:26 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:26.118027197Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated fixed migration"
Oct  8 05:47:26 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:26.118196302Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated fixed migration" duration=169.315µs
Oct  8 05:47:26 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:26.121841508Z level=info msg="Executing migration" id="managed folder permissions library panel actions migration"
Oct  8 05:47:26 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:26.121990852Z level=info msg="Migration successfully executed" id="managed folder permissions library panel actions migration" duration=149.184µs
Oct  8 05:47:26 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e58 do_prune osdmap full prune enabled
Oct  8 05:47:26 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:26.167664012Z level=info msg="Executing migration" id="migrate external alertmanagers to datsourcse"
Oct  8 05:47:26 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:26.167835218Z level=info msg="Migration successfully executed" id="migrate external alertmanagers to datsourcse" duration=171.836µs
Oct  8 05:47:26 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:26.16978578Z level=info msg="Executing migration" id="create folder table"
Oct  8 05:47:26 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:26.170583414Z level=info msg="Migration successfully executed" id="create folder table" duration=798.924µs
Oct  8 05:47:26 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:26.173363472Z level=info msg="Executing migration" id="Add index for parent_uid"
Oct  8 05:47:26 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:26.174371974Z level=info msg="Migration successfully executed" id="Add index for parent_uid" duration=1.009682ms
Oct  8 05:47:26 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pgp_num_actual", "val": "32"}]': finished
Oct  8 05:47:26 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Oct  8 05:47:26 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Oct  8 05:47:26 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Oct  8 05:47:26 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Oct  8 05:47:26 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e59 e59: 3 total, 3 up, 3 in
Oct  8 05:47:26 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:26.178451573Z level=info msg="Executing migration" id="Add unique index for folder.uid and folder.org_id"
Oct  8 05:47:26 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:26.179337961Z level=info msg="Migration successfully executed" id="Add unique index for folder.uid and folder.org_id" duration=886.278µs
Oct  8 05:47:26 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:26.181605312Z level=info msg="Executing migration" id="Update folder title length"
Oct  8 05:47:26 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:26.181624143Z level=info msg="Migration successfully executed" id="Update folder title length" duration=18.951µs
Oct  8 05:47:26 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e59: 3 total, 3 up, 3 in
Oct  8 05:47:26 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:26.184462322Z level=info msg="Executing migration" id="Add unique index for folder.title and folder.parent_uid"
Oct  8 05:47:26 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:26.185361611Z level=info msg="Migration successfully executed" id="Add unique index for folder.title and folder.parent_uid" duration=901.019µs
Oct  8 05:47:26 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:26.193242399Z level=info msg="Executing migration" id="Remove unique index for folder.title and folder.parent_uid"
Oct  8 05:47:26 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:26.194285742Z level=info msg="Migration successfully executed" id="Remove unique index for folder.title and folder.parent_uid" duration=1.044513ms
Oct  8 05:47:26 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[12.19( empty local-lis/les=0/0 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=59) [1] r=0 lpr=59 pi=[57,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:47:26 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:26.196344227Z level=info msg="Executing migration" id="Add unique index for title, parent_uid, and org_id"
Oct  8 05:47:26 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[12.1c( empty local-lis/les=0/0 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=59) [1] r=0 lpr=59 pi=[57,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:47:26 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[10.1b( empty local-lis/les=0/0 n=0 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=59) [1] r=0 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:47:26 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:26.197343849Z level=info msg="Migration successfully executed" id="Add unique index for title, parent_uid, and org_id" duration=997.662µs
Oct  8 05:47:26 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[10.18( empty local-lis/les=0/0 n=0 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=59) [1] r=0 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:47:26 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[10.19( empty local-lis/les=0/0 n=0 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=59) [1] r=0 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:47:26 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[10.5( empty local-lis/les=0/0 n=0 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=59) [1] r=0 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:47:26 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[10.2( empty local-lis/les=0/0 n=0 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=59) [1] r=0 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:47:26 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[12.8( empty local-lis/les=0/0 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=59) [1] r=0 lpr=59 pi=[57,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:47:26 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[12.a( empty local-lis/les=0/0 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=59) [1] r=0 lpr=59 pi=[57,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:47:26 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[12.e( empty local-lis/les=0/0 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=59) [1] r=0 lpr=59 pi=[57,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:47:26 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[10.8( empty local-lis/les=0/0 n=0 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=59) [1] r=0 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:47:26 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[12.c( empty local-lis/les=0/0 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=59) [1] r=0 lpr=59 pi=[57,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:47:26 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[12.b( empty local-lis/les=0/0 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=59) [1] r=0 lpr=59 pi=[57,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:47:26 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[12.6( empty local-lis/les=0/0 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=59) [1] r=0 lpr=59 pi=[57,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:47:26 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[10.13( empty local-lis/les=0/0 n=0 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=59) [1] r=0 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:47:26 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[10.15( empty local-lis/les=0/0 n=0 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=59) [1] r=0 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:47:26 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[10.14( empty local-lis/les=0/0 n=0 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=59) [1] r=0 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:47:26 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[12.12( empty local-lis/les=0/0 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=59) [1] r=0 lpr=59 pi=[57,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:47:26 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[12.10( empty local-lis/les=0/0 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=59) [1] r=0 lpr=59 pi=[57,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:47:26 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[8.14( v 37'12 (0'0,37'12] local-lis/les=54/56 n=0 ec=54/36 lis/c=54/54 les/c/f=56/56/0 sis=59 pruub=8.040661812s) [0] r=-1 lpr=59 pi=[54,59)/1 crt=37'12 lcod 0'0 mlcod 0'0 active pruub 182.314361572s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:47:26 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[8.14( v 37'12 (0'0,37'12] local-lis/les=54/56 n=0 ec=54/36 lis/c=54/54 les/c/f=56/56/0 sis=59 pruub=8.040636063s) [0] r=-1 lpr=59 pi=[54,59)/1 crt=37'12 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 182.314361572s@ mbc={}] state<Start>: transitioning to Stray
Oct  8 05:47:26 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[11.16( empty local-lis/les=57/58 n=0 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=10.065840721s) [2] r=-1 lpr=59 pi=[57,59)/1 crt=0'0 mlcod 0'0 active pruub 184.340148926s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:47:26 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[11.16( empty local-lis/les=57/58 n=0 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=10.065819740s) [2] r=-1 lpr=59 pi=[57,59)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 184.340148926s@ mbc={}] state<Start>: transitioning to Stray
Oct  8 05:47:26 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[11.14( empty local-lis/les=57/58 n=0 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=10.066782951s) [0] r=-1 lpr=59 pi=[57,59)/1 crt=0'0 mlcod 0'0 active pruub 184.341278076s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:47:26 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[8.16( v 37'12 (0'0,37'12] local-lis/les=54/56 n=0 ec=54/36 lis/c=54/54 les/c/f=56/56/0 sis=59 pruub=8.040081978s) [2] r=-1 lpr=59 pi=[54,59)/1 crt=37'12 lcod 0'0 mlcod 0'0 active pruub 182.314575195s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:47:26 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[11.17( empty local-lis/les=57/58 n=0 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=10.065618515s) [2] r=-1 lpr=59 pi=[57,59)/1 crt=0'0 mlcod 0'0 active pruub 184.340118408s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:47:26 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[11.14( empty local-lis/les=57/58 n=0 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=10.066769600s) [0] r=-1 lpr=59 pi=[57,59)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 184.341278076s@ mbc={}] state<Start>: transitioning to Stray
Oct  8 05:47:26 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[11.17( empty local-lis/les=57/58 n=0 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=10.065597534s) [2] r=-1 lpr=59 pi=[57,59)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 184.340118408s@ mbc={}] state<Start>: transitioning to Stray
Oct  8 05:47:26 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[8.16( v 37'12 (0'0,37'12] local-lis/les=54/56 n=0 ec=54/36 lis/c=54/54 les/c/f=56/56/0 sis=59 pruub=8.040049553s) [2] r=-1 lpr=59 pi=[54,59)/1 crt=37'12 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 182.314575195s@ mbc={}] state<Start>: transitioning to Stray
Oct  8 05:47:26 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[8.17( v 37'12 (0'0,37'12] local-lis/les=54/56 n=0 ec=54/36 lis/c=54/54 les/c/f=56/56/0 sis=59 pruub=8.040047646s) [0] r=-1 lpr=59 pi=[54,59)/1 crt=37'12 lcod 0'0 mlcod 0'0 active pruub 182.314666748s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:47:26 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[8.17( v 37'12 (0'0,37'12] local-lis/les=54/56 n=0 ec=54/36 lis/c=54/54 les/c/f=56/56/0 sis=59 pruub=8.040028572s) [0] r=-1 lpr=59 pi=[54,59)/1 crt=37'12 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 182.314666748s@ mbc={}] state<Start>: transitioning to Stray
Oct  8 05:47:26 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[11.13( empty local-lis/les=57/58 n=0 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=10.066614151s) [2] r=-1 lpr=59 pi=[57,59)/1 crt=0'0 mlcod 0'0 active pruub 184.341293335s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:47:26 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[11.13( empty local-lis/les=57/58 n=0 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=10.066596031s) [2] r=-1 lpr=59 pi=[57,59)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 184.341293335s@ mbc={}] state<Start>: transitioning to Stray
Oct  8 05:47:26 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[8.10( v 37'12 (0'0,37'12] local-lis/les=54/56 n=0 ec=54/36 lis/c=54/54 les/c/f=56/56/0 sis=59 pruub=8.039962769s) [0] r=-1 lpr=59 pi=[54,59)/1 crt=37'12 lcod 0'0 mlcod 0'0 active pruub 182.314682007s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:47:26 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[8.11( v 37'12 (0'0,37'12] local-lis/les=54/56 n=0 ec=54/36 lis/c=54/54 les/c/f=56/56/0 sis=59 pruub=8.039947510s) [2] r=-1 lpr=59 pi=[54,59)/1 crt=37'12 lcod 0'0 mlcod 0'0 active pruub 182.314712524s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:47:26 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[8.11( v 37'12 (0'0,37'12] local-lis/les=54/56 n=0 ec=54/36 lis/c=54/54 les/c/f=56/56/0 sis=59 pruub=8.039935112s) [2] r=-1 lpr=59 pi=[54,59)/1 crt=37'12 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 182.314712524s@ mbc={}] state<Start>: transitioning to Stray
Oct  8 05:47:26 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[11.1( empty local-lis/les=57/58 n=0 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=10.066735268s) [0] r=-1 lpr=59 pi=[57,59)/1 crt=0'0 mlcod 0'0 active pruub 184.341583252s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:47:26 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[11.1( empty local-lis/les=57/58 n=0 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=10.066510201s) [0] r=-1 lpr=59 pi=[57,59)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 184.341583252s@ mbc={}] state<Start>: transitioning to Stray
Oct  8 05:47:26 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[11.12( empty local-lis/les=57/58 n=0 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=10.066233635s) [0] r=-1 lpr=59 pi=[57,59)/1 crt=0'0 mlcod 0'0 active pruub 184.341293335s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:47:26 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[8.2( v 37'12 (0'0,37'12] local-lis/les=54/56 n=1 ec=54/36 lis/c=54/54 les/c/f=56/56/0 sis=59 pruub=8.039722443s) [2] r=-1 lpr=59 pi=[54,59)/1 crt=37'12 lcod 0'0 mlcod 0'0 active pruub 182.314804077s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:47:26 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[8.2( v 37'12 (0'0,37'12] local-lis/les=54/56 n=1 ec=54/36 lis/c=54/54 les/c/f=56/56/0 sis=59 pruub=8.039710045s) [2] r=-1 lpr=59 pi=[54,59)/1 crt=37'12 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 182.314804077s@ mbc={}] state<Start>: transitioning to Stray
Oct  8 05:47:26 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[11.12( empty local-lis/les=57/58 n=0 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=10.066205025s) [0] r=-1 lpr=59 pi=[57,59)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 184.341293335s@ mbc={}] state<Start>: transitioning to Stray
Oct  8 05:47:26 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[8.3( v 37'12 (0'0,37'12] local-lis/les=54/56 n=1 ec=54/36 lis/c=54/54 les/c/f=56/56/0 sis=59 pruub=8.040011406s) [2] r=-1 lpr=59 pi=[54,59)/1 crt=37'12 lcod 0'0 mlcod 0'0 active pruub 182.315200806s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:47:26 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[8.3( v 37'12 (0'0,37'12] local-lis/les=54/56 n=1 ec=54/36 lis/c=54/54 les/c/f=56/56/0 sis=59 pruub=8.039994240s) [2] r=-1 lpr=59 pi=[54,59)/1 crt=37'12 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 182.315200806s@ mbc={}] state<Start>: transitioning to Stray
Oct  8 05:47:26 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[8.f( v 37'12 (0'0,37'12] local-lis/les=54/56 n=0 ec=54/36 lis/c=54/54 les/c/f=56/56/0 sis=59 pruub=8.039584160s) [2] r=-1 lpr=59 pi=[54,59)/1 crt=37'12 lcod 0'0 mlcod 0'0 active pruub 182.314865112s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:47:26 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[8.f( v 37'12 (0'0,37'12] local-lis/les=54/56 n=0 ec=54/36 lis/c=54/54 les/c/f=56/56/0 sis=59 pruub=8.039558411s) [2] r=-1 lpr=59 pi=[54,59)/1 crt=37'12 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 182.314865112s@ mbc={}] state<Start>: transitioning to Stray
Oct  8 05:47:26 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[8.15( v 37'12 (0'0,37'12] local-lis/les=54/56 n=0 ec=54/36 lis/c=54/54 les/c/f=56/56/0 sis=59 pruub=8.039474487s) [2] r=-1 lpr=59 pi=[54,59)/1 crt=37'12 lcod 0'0 mlcod 0'0 active pruub 182.314620972s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:47:26 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[8.10( v 37'12 (0'0,37'12] local-lis/les=54/56 n=0 ec=54/36 lis/c=54/54 les/c/f=56/56/0 sis=59 pruub=8.039942741s) [0] r=-1 lpr=59 pi=[54,59)/1 crt=37'12 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 182.314682007s@ mbc={}] state<Start>: transitioning to Stray
Oct  8 05:47:26 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[8.15( v 37'12 (0'0,37'12] local-lis/les=54/56 n=0 ec=54/36 lis/c=54/54 les/c/f=56/56/0 sis=59 pruub=8.039208412s) [2] r=-1 lpr=59 pi=[54,59)/1 crt=37'12 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 182.314620972s@ mbc={}] state<Start>: transitioning to Stray
Oct  8 05:47:26 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[8.8( v 37'12 (0'0,37'12] local-lis/les=54/56 n=0 ec=54/36 lis/c=54/54 les/c/f=56/56/0 sis=59 pruub=8.039422035s) [0] r=-1 lpr=59 pi=[54,59)/1 crt=37'12 lcod 0'0 mlcod 0'0 active pruub 182.314910889s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:47:26 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[8.8( v 37'12 (0'0,37'12] local-lis/les=54/56 n=0 ec=54/36 lis/c=54/54 les/c/f=56/56/0 sis=59 pruub=8.039410591s) [0] r=-1 lpr=59 pi=[54,59)/1 crt=37'12 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 182.314910889s@ mbc={}] state<Start>: transitioning to Stray
Oct  8 05:47:26 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[11.a( empty local-lis/les=57/58 n=0 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=10.065825462s) [2] r=-1 lpr=59 pi=[57,59)/1 crt=0'0 mlcod 0'0 active pruub 184.341430664s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:47:26 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[11.a( empty local-lis/les=57/58 n=0 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=10.065814972s) [2] r=-1 lpr=59 pi=[57,59)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 184.341430664s@ mbc={}] state<Start>: transitioning to Stray
Oct  8 05:47:26 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[8.a( v 37'12 (0'0,37'12] local-lis/les=54/56 n=0 ec=54/36 lis/c=54/54 les/c/f=56/56/0 sis=59 pruub=8.039337158s) [2] r=-1 lpr=59 pi=[54,59)/1 crt=37'12 lcod 0'0 mlcod 0'0 active pruub 182.315063477s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:47:26 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[8.a( v 37'12 (0'0,37'12] local-lis/les=54/56 n=0 ec=54/36 lis/c=54/54 les/c/f=56/56/0 sis=59 pruub=8.039321899s) [2] r=-1 lpr=59 pi=[54,59)/1 crt=37'12 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 182.315063477s@ mbc={}] state<Start>: transitioning to Stray
Oct  8 05:47:26 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[8.9( v 37'12 (0'0,37'12] local-lis/les=54/56 n=0 ec=54/36 lis/c=54/54 les/c/f=56/56/0 sis=59 pruub=8.039178848s) [2] r=-1 lpr=59 pi=[54,59)/1 crt=37'12 lcod 0'0 mlcod 0'0 active pruub 182.314941406s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:47:26 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[8.9( v 37'12 (0'0,37'12] local-lis/les=54/56 n=0 ec=54/36 lis/c=54/54 les/c/f=56/56/0 sis=59 pruub=8.039118767s) [2] r=-1 lpr=59 pi=[54,59)/1 crt=37'12 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 182.314941406s@ mbc={}] state<Start>: transitioning to Stray
Oct  8 05:47:26 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[8.d( v 37'12 (0'0,37'12] local-lis/les=54/56 n=0 ec=54/36 lis/c=54/54 les/c/f=56/56/0 sis=59 pruub=8.039288521s) [2] r=-1 lpr=59 pi=[54,59)/1 crt=37'12 lcod 0'0 mlcod 0'0 active pruub 182.315170288s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:47:26 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[8.d( v 37'12 (0'0,37'12] local-lis/les=54/56 n=0 ec=54/36 lis/c=54/54 les/c/f=56/56/0 sis=59 pruub=8.039273262s) [2] r=-1 lpr=59 pi=[54,59)/1 crt=37'12 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 182.315170288s@ mbc={}] state<Start>: transitioning to Stray
Oct  8 05:47:26 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[11.e( empty local-lis/les=57/58 n=0 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=10.065698624s) [2] r=-1 lpr=59 pi=[57,59)/1 crt=0'0 mlcod 0'0 active pruub 184.341613770s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:47:26 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[11.e( empty local-lis/les=57/58 n=0 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=10.065679550s) [2] r=-1 lpr=59 pi=[57,59)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 184.341613770s@ mbc={}] state<Start>: transitioning to Stray
Oct  8 05:47:26 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[11.f( empty local-lis/les=57/58 n=0 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=10.065616608s) [0] r=-1 lpr=59 pi=[57,59)/1 crt=0'0 mlcod 0'0 active pruub 184.341629028s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:47:26 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[11.f( empty local-lis/les=57/58 n=0 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=10.065602303s) [0] r=-1 lpr=59 pi=[57,59)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 184.341629028s@ mbc={}] state<Start>: transitioning to Stray
Oct  8 05:47:26 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[8.c( v 37'12 (0'0,37'12] local-lis/les=54/56 n=0 ec=54/36 lis/c=54/54 les/c/f=56/56/0 sis=59 pruub=8.039060593s) [2] r=-1 lpr=59 pi=[54,59)/1 crt=37'12 lcod 0'0 mlcod 0'0 active pruub 182.315246582s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:47:26 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[8.c( v 37'12 (0'0,37'12] local-lis/les=54/56 n=0 ec=54/36 lis/c=54/54 les/c/f=56/56/0 sis=59 pruub=8.039015770s) [2] r=-1 lpr=59 pi=[54,59)/1 crt=37'12 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 182.315246582s@ mbc={}] state<Start>: transitioning to Stray
Oct  8 05:47:26 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[11.8( empty local-lis/les=57/58 n=0 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=10.065329552s) [2] r=-1 lpr=59 pi=[57,59)/1 crt=0'0 mlcod 0'0 active pruub 184.341644287s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:47:26 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[11.8( empty local-lis/les=57/58 n=0 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=10.065310478s) [2] r=-1 lpr=59 pi=[57,59)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 184.341644287s@ mbc={}] state<Start>: transitioning to Stray
Oct  8 05:47:26 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[8.b( v 37'12 (0'0,37'12] local-lis/les=54/56 n=0 ec=54/36 lis/c=54/54 les/c/f=56/56/0 sis=59 pruub=8.038849831s) [2] r=-1 lpr=59 pi=[54,59)/1 crt=37'12 lcod 0'0 mlcod 0'0 active pruub 182.315292358s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:47:26 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[8.b( v 37'12 (0'0,37'12] local-lis/les=54/56 n=0 ec=54/36 lis/c=54/54 les/c/f=56/56/0 sis=59 pruub=8.038825035s) [2] r=-1 lpr=59 pi=[54,59)/1 crt=37'12 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 182.315292358s@ mbc={}] state<Start>: transitioning to Stray
Oct  8 05:47:26 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[11.3( empty local-lis/les=57/58 n=0 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=10.065237045s) [2] r=-1 lpr=59 pi=[57,59)/1 crt=0'0 mlcod 0'0 active pruub 184.341781616s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:47:26 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[11.3( empty local-lis/les=57/58 n=0 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=10.065219879s) [2] r=-1 lpr=59 pi=[57,59)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 184.341781616s@ mbc={}] state<Start>: transitioning to Stray
Oct  8 05:47:26 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[11.4( empty local-lis/les=57/58 n=0 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=10.067255020s) [0] r=-1 lpr=59 pi=[57,59)/1 crt=0'0 mlcod 0'0 active pruub 184.343902588s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:47:26 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[8.6( v 37'12 (0'0,37'12] local-lis/les=54/56 n=1 ec=54/36 lis/c=54/54 les/c/f=56/56/0 sis=59 pruub=8.040740967s) [2] r=-1 lpr=59 pi=[54,59)/1 crt=37'12 lcod 0'0 mlcod 0'0 active pruub 182.317459106s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:47:26 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[8.6( v 37'12 (0'0,37'12] local-lis/les=54/56 n=1 ec=54/36 lis/c=54/54 les/c/f=56/56/0 sis=59 pruub=8.040719032s) [2] r=-1 lpr=59 pi=[54,59)/1 crt=37'12 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 182.317459106s@ mbc={}] state<Start>: transitioning to Stray
Oct  8 05:47:26 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[11.5( empty local-lis/les=57/58 n=0 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=10.067422867s) [0] r=-1 lpr=59 pi=[57,59)/1 crt=0'0 mlcod 0'0 active pruub 184.344146729s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:47:26 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[11.4( empty local-lis/les=57/58 n=0 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=10.067235947s) [0] r=-1 lpr=59 pi=[57,59)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 184.343902588s@ mbc={}] state<Start>: transitioning to Stray
Oct  8 05:47:26 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[11.5( empty local-lis/les=57/58 n=0 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=10.067331314s) [0] r=-1 lpr=59 pi=[57,59)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 184.344146729s@ mbc={}] state<Start>: transitioning to Stray
Oct  8 05:47:26 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[8.5( v 37'12 (0'0,37'12] local-lis/les=54/56 n=1 ec=54/36 lis/c=54/54 les/c/f=56/56/0 sis=59 pruub=8.040680885s) [2] r=-1 lpr=59 pi=[54,59)/1 crt=37'12 lcod 0'0 mlcod 0'0 active pruub 182.317520142s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:47:26 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[8.5( v 37'12 (0'0,37'12] local-lis/les=54/56 n=1 ec=54/36 lis/c=54/54 les/c/f=56/56/0 sis=59 pruub=8.040670395s) [2] r=-1 lpr=59 pi=[54,59)/1 crt=37'12 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 182.317520142s@ mbc={}] state<Start>: transitioning to Stray
Oct  8 05:47:26 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[8.4( v 37'12 (0'0,37'12] local-lis/les=54/56 n=1 ec=54/36 lis/c=54/54 les/c/f=56/56/0 sis=59 pruub=8.040611267s) [0] r=-1 lpr=59 pi=[54,59)/1 crt=37'12 lcod 0'0 mlcod 0'0 active pruub 182.317596436s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:47:26 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[8.4( v 37'12 (0'0,37'12] local-lis/les=54/56 n=1 ec=54/36 lis/c=54/54 les/c/f=56/56/0 sis=59 pruub=8.040597916s) [0] r=-1 lpr=59 pi=[54,59)/1 crt=37'12 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 182.317596436s@ mbc={}] state<Start>: transitioning to Stray
Oct  8 05:47:26 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[11.7( empty local-lis/les=57/58 n=0 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=10.067056656s) [0] r=-1 lpr=59 pi=[57,59)/1 crt=0'0 mlcod 0'0 active pruub 184.344146729s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:47:26 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[11.19( empty local-lis/les=57/58 n=0 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=10.067019463s) [2] r=-1 lpr=59 pi=[57,59)/1 crt=0'0 mlcod 0'0 active pruub 184.344161987s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:47:26 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[11.19( empty local-lis/les=57/58 n=0 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=10.066965103s) [2] r=-1 lpr=59 pi=[57,59)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 184.344161987s@ mbc={}] state<Start>: transitioning to Stray
Oct  8 05:47:26 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[8.1b( v 37'12 (0'0,37'12] local-lis/les=54/56 n=0 ec=54/36 lis/c=54/54 les/c/f=56/56/0 sis=59 pruub=8.040418625s) [0] r=-1 lpr=59 pi=[54,59)/1 crt=37'12 lcod 0'0 mlcod 0'0 active pruub 182.317642212s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:47:26 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[11.1a( empty local-lis/les=57/58 n=0 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=10.066916466s) [0] r=-1 lpr=59 pi=[57,59)/1 crt=0'0 mlcod 0'0 active pruub 184.344223022s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:47:26 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[8.1b( v 37'12 (0'0,37'12] local-lis/les=54/56 n=0 ec=54/36 lis/c=54/54 les/c/f=56/56/0 sis=59 pruub=8.040397644s) [0] r=-1 lpr=59 pi=[54,59)/1 crt=37'12 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 182.317642212s@ mbc={}] state<Start>: transitioning to Stray
Oct  8 05:47:26 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[11.1a( empty local-lis/les=57/58 n=0 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=10.066900253s) [0] r=-1 lpr=59 pi=[57,59)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 184.344223022s@ mbc={}] state<Start>: transitioning to Stray
Oct  8 05:47:26 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[8.19( v 37'12 (0'0,37'12] local-lis/les=54/56 n=0 ec=54/36 lis/c=54/54 les/c/f=56/56/0 sis=59 pruub=8.040371895s) [0] r=-1 lpr=59 pi=[54,59)/1 crt=37'12 lcod 0'0 mlcod 0'0 active pruub 182.317779541s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:47:26 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[8.19( v 37'12 (0'0,37'12] local-lis/les=54/56 n=0 ec=54/36 lis/c=54/54 les/c/f=56/56/0 sis=59 pruub=8.040351868s) [0] r=-1 lpr=59 pi=[54,59)/1 crt=37'12 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 182.317779541s@ mbc={}] state<Start>: transitioning to Stray
Oct  8 05:47:26 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[11.1b( empty local-lis/les=57/58 n=0 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=10.067070961s) [0] r=-1 lpr=59 pi=[57,59)/1 crt=0'0 mlcod 0'0 active pruub 184.344528198s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:47:26 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[11.1b( empty local-lis/les=57/58 n=0 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=10.067058563s) [0] r=-1 lpr=59 pi=[57,59)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 184.344528198s@ mbc={}] state<Start>: transitioning to Stray
Oct  8 05:47:26 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[8.18( v 37'12 (0'0,37'12] local-lis/les=54/56 n=0 ec=54/36 lis/c=54/54 les/c/f=56/56/0 sis=59 pruub=8.040277481s) [0] r=-1 lpr=59 pi=[54,59)/1 crt=37'12 lcod 0'0 mlcod 0'0 active pruub 182.317794800s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:47:26 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[8.18( v 37'12 (0'0,37'12] local-lis/les=54/56 n=0 ec=54/36 lis/c=54/54 les/c/f=56/56/0 sis=59 pruub=8.040252686s) [0] r=-1 lpr=59 pi=[54,59)/1 crt=37'12 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 182.317794800s@ mbc={}] state<Start>: transitioning to Stray
Oct  8 05:47:26 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[11.1c( empty local-lis/les=57/58 n=0 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=10.066783905s) [0] r=-1 lpr=59 pi=[57,59)/1 crt=0'0 mlcod 0'0 active pruub 184.344360352s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:47:26 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[8.1f( v 37'12 (0'0,37'12] local-lis/les=54/56 n=0 ec=54/36 lis/c=54/54 les/c/f=56/56/0 sis=59 pruub=8.040222168s) [2] r=-1 lpr=59 pi=[54,59)/1 crt=37'12 lcod 0'0 mlcod 0'0 active pruub 182.317825317s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:47:26 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[11.1c( empty local-lis/les=57/58 n=0 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=10.066760063s) [0] r=-1 lpr=59 pi=[57,59)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 184.344360352s@ mbc={}] state<Start>: transitioning to Stray
Oct  8 05:47:26 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[8.1f( v 37'12 (0'0,37'12] local-lis/les=54/56 n=0 ec=54/36 lis/c=54/54 les/c/f=56/56/0 sis=59 pruub=8.040188789s) [2] r=-1 lpr=59 pi=[54,59)/1 crt=37'12 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 182.317825317s@ mbc={}] state<Start>: transitioning to Stray
Oct  8 05:47:26 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[11.7( empty local-lis/les=57/58 n=0 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=10.067039490s) [0] r=-1 lpr=59 pi=[57,59)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 184.344146729s@ mbc={}] state<Start>: transitioning to Stray
Oct  8 05:47:26 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[11.1e( empty local-lis/les=57/58 n=0 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=10.066552162s) [0] r=-1 lpr=59 pi=[57,59)/1 crt=0'0 mlcod 0'0 active pruub 184.344390869s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:47:26 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[11.1d( empty local-lis/les=57/58 n=0 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=10.066520691s) [0] r=-1 lpr=59 pi=[57,59)/1 crt=0'0 mlcod 0'0 active pruub 184.344360352s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:47:26 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[11.1e( empty local-lis/les=57/58 n=0 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=10.066532135s) [0] r=-1 lpr=59 pi=[57,59)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 184.344390869s@ mbc={}] state<Start>: transitioning to Stray
Oct  8 05:47:26 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[11.1d( empty local-lis/les=57/58 n=0 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=10.066502571s) [0] r=-1 lpr=59 pi=[57,59)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 184.344360352s@ mbc={}] state<Start>: transitioning to Stray
Oct  8 05:47:26 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[8.1c( v 37'12 (0'0,37'12] local-lis/les=54/56 n=0 ec=54/36 lis/c=54/54 les/c/f=56/56/0 sis=59 pruub=8.040002823s) [2] r=-1 lpr=59 pi=[54,59)/1 crt=37'12 lcod 0'0 mlcod 0'0 active pruub 182.317947388s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:47:26 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[8.1c( v 37'12 (0'0,37'12] local-lis/les=54/56 n=0 ec=54/36 lis/c=54/54 les/c/f=56/56/0 sis=59 pruub=8.039972305s) [2] r=-1 lpr=59 pi=[54,59)/1 crt=37'12 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 182.317947388s@ mbc={}] state<Start>: transitioning to Stray
Oct  8 05:47:26 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:26.206341713Z level=info msg="Executing migration" id="Sync dashboard and folder table"
Oct  8 05:47:26 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[8.12( v 37'12 (0'0,37'12] local-lis/les=54/56 n=0 ec=54/36 lis/c=54/54 les/c/f=56/56/0 sis=59 pruub=8.039451599s) [0] r=-1 lpr=59 pi=[54,59)/1 crt=37'12 lcod 0'0 mlcod 0'0 active pruub 182.318054199s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:47:26 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 59 pg[8.12( v 37'12 (0'0,37'12] local-lis/les=54/56 n=0 ec=54/36 lis/c=54/54 les/c/f=56/56/0 sis=59 pruub=8.039312363s) [0] r=-1 lpr=59 pi=[54,59)/1 crt=37'12 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 182.318054199s@ mbc={}] state<Start>: transitioning to Stray
Oct  8 05:47:26 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:26.207255241Z level=info msg="Migration successfully executed" id="Sync dashboard and folder table" duration=916.098µs
Oct  8 05:47:26 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:26.208877973Z level=info msg="Executing migration" id="Remove ghost folders from the folder table"
Oct  8 05:47:26 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:26.209245955Z level=info msg="Migration successfully executed" id="Remove ghost folders from the folder table" duration=369.032µs
Oct  8 05:47:26 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:26.212174707Z level=info msg="Executing migration" id="Remove unique index UQE_folder_uid_org_id"
Oct  8 05:47:26 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:26.213568561Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_uid_org_id" duration=1.395384ms
Oct  8 05:47:26 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:26.216491873Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_uid"
Oct  8 05:47:26 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:26.217632889Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_uid" duration=1.140476ms
Oct  8 05:47:26 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:26.248079909Z level=info msg="Executing migration" id="Remove unique index UQE_folder_title_parent_uid_org_id"
Oct  8 05:47:26 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:26.249956078Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_title_parent_uid_org_id" duration=1.879289ms
Oct  8 05:47:26 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:26.254361217Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_parent_uid_title"
Oct  8 05:47:26 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:26.25572885Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_parent_uid_title" duration=1.367633ms
Oct  8 05:47:26 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:26.258186228Z level=info msg="Executing migration" id="Remove index IDX_folder_parent_uid_org_id"
Oct  8 05:47:26 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:26.259345055Z level=info msg="Migration successfully executed" id="Remove index IDX_folder_parent_uid_org_id" duration=1.154947ms
Oct  8 05:47:26 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:26.262331939Z level=info msg="Executing migration" id="create anon_device table"
Oct  8 05:47:26 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:26.263190966Z level=info msg="Migration successfully executed" id="create anon_device table" duration=859.137µs
Oct  8 05:47:26 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:26.264945712Z level=info msg="Executing migration" id="add unique index anon_device.device_id"
Oct  8 05:47:26 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:26.265965743Z level=info msg="Migration successfully executed" id="add unique index anon_device.device_id" duration=1.018321ms
Oct  8 05:47:26 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:26.268288036Z level=info msg="Executing migration" id="add index anon_device.updated_at"
Oct  8 05:47:26 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:26.269212246Z level=info msg="Migration successfully executed" id="add index anon_device.updated_at" duration=924.25µs
Oct  8 05:47:26 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:26.271985303Z level=info msg="Executing migration" id="create signing_key table"
Oct  8 05:47:26 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:26.272880952Z level=info msg="Migration successfully executed" id="create signing_key table" duration=896.719µs
Oct  8 05:47:26 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:26.275350379Z level=info msg="Executing migration" id="add unique index signing_key.key_id"
Oct  8 05:47:26 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:26.276331581Z level=info msg="Migration successfully executed" id="add unique index signing_key.key_id" duration=984.282µs
Oct  8 05:47:26 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:26.278386355Z level=info msg="Executing migration" id="set legacy alert migration status in kvstore"
Oct  8 05:47:26 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:26.279597263Z level=info msg="Migration successfully executed" id="set legacy alert migration status in kvstore" duration=1.211108ms
Oct  8 05:47:26 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:26.283055012Z level=info msg="Executing migration" id="migrate record of created folders during legacy migration to kvstore"
Oct  8 05:47:26 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:26.283408864Z level=info msg="Migration successfully executed" id="migrate record of created folders during legacy migration to kvstore" duration=354.781µs
Oct  8 05:47:26 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:26.285578452Z level=info msg="Executing migration" id="Add folder_uid for dashboard"
Oct  8 05:47:26 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:26.292873002Z level=info msg="Migration successfully executed" id="Add folder_uid for dashboard" duration=7.29369ms
Oct  8 05:47:26 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:26.295251167Z level=info msg="Executing migration" id="Populate dashboard folder_uid column"
Oct  8 05:47:26 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:26.295893628Z level=info msg="Migration successfully executed" id="Populate dashboard folder_uid column" duration=643.011µs
Oct  8 05:47:26 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:26.297891091Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title"
Oct  8 05:47:26 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:26.29882406Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title" duration=934.86µs
Oct  8 05:47:26 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:26.30202831Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_id_title"
Oct  8 05:47:26 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:26.303132226Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_id_title" duration=1.103596ms
Oct  8 05:47:26 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:26.304968713Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_uid_title"
Oct  8 05:47:26 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:26.306004106Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_uid_title" duration=1.035003ms
Oct  8 05:47:26 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:26.310128647Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder"
Oct  8 05:47:26 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:26.310975693Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder" duration=846.696µs
Oct  8 05:47:26 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:26.314883907Z level=info msg="Executing migration" id="Restore index for dashboard_org_id_folder_id_title"
Oct  8 05:47:26 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:26.315713613Z level=info msg="Migration successfully executed" id="Restore index for dashboard_org_id_folder_id_title" duration=829.156µs
Oct  8 05:47:26 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:26.321505935Z level=info msg="Executing migration" id="create sso_setting table"
Oct  8 05:47:26 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:26.322354952Z level=info msg="Migration successfully executed" id="create sso_setting table" duration=846.807µs
Oct  8 05:47:26 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:26.324153089Z level=info msg="Executing migration" id="copy kvstore migration status to each org"
Oct  8 05:47:26 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:26.324710426Z level=info msg="Migration successfully executed" id="copy kvstore migration status to each org" duration=557.847µs
Oct  8 05:47:26 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:26.326150812Z level=info msg="Executing migration" id="add back entry for orgid=0 migrated status"
Oct  8 05:47:26 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:26.326372189Z level=info msg="Migration successfully executed" id="add back entry for orgid=0 migrated status" duration=220.387µs
Oct  8 05:47:26 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:26.328855357Z level=info msg="Executing migration" id="alter kv_store.value to longtext"
Oct  8 05:47:26 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:26.328898389Z level=info msg="Migration successfully executed" id="alter kv_store.value to longtext" duration=43.292µs
Oct  8 05:47:26 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:26.332345838Z level=info msg="Executing migration" id="add notification_settings column to alert_rule table"
Oct  8 05:47:26 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:26.33847747Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule table" duration=6.131082ms
Oct  8 05:47:26 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:26.342853399Z level=info msg="Executing migration" id="add notification_settings column to alert_rule_version table"
Oct  8 05:47:26 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:26.349083346Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule_version table" duration=6.228916ms
Oct  8 05:47:26 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:26.35050022Z level=info msg="Executing migration" id="removing scope from alert.instances:read action migration"
Oct  8 05:47:26 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:26.350756548Z level=info msg="Migration successfully executed" id="removing scope from alert.instances:read action migration" duration=256.128µs
Oct  8 05:47:26 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=migrator t=2025-10-08T09:47:26.35273069Z level=info msg="migrations completed" performed=547 skipped=0 duration=2.986961619s
Oct  8 05:47:26 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=sqlstore t=2025-10-08T09:47:26.354074573Z level=info msg="Created default organization"
Oct  8 05:47:26 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=secrets t=2025-10-08T09:47:26.356152108Z level=info msg="Envelope encryption state" enabled=true currentprovider=secretKey.v1
Oct  8 05:47:26 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=plugin.store t=2025-10-08T09:47:26.387807417Z level=info msg="Loading plugins..."
Oct  8 05:47:26 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 11.15 scrub starts
Oct  8 05:47:26 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 11.15 scrub ok
Oct  8 05:47:26 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=local.finder t=2025-10-08T09:47:26.484574479Z level=warn msg="Skipping finding plugins as directory does not exist" path=/usr/share/grafana/plugins-bundled
Oct  8 05:47:26 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=plugin.store t=2025-10-08T09:47:26.48460475Z level=info msg="Plugins loaded" count=55 duration=96.800693ms
Oct  8 05:47:26 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=query_data t=2025-10-08T09:47:26.487072277Z level=info msg="Query Service initialization"
Oct  8 05:47:26 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=live.push_http t=2025-10-08T09:47:26.490690242Z level=info msg="Live Push Gateway initialization"
Oct  8 05:47:26 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=ngalert.migration t=2025-10-08T09:47:26.494508423Z level=info msg=Starting
Oct  8 05:47:26 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=ngalert.migration t=2025-10-08T09:47:26.494970307Z level=info msg="Applying transition" currentType=Legacy desiredType=UnifiedAlerting cleanOnDowngrade=false cleanOnUpgrade=false
Oct  8 05:47:26 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=ngalert.migration orgID=1 t=2025-10-08T09:47:26.495505404Z level=info msg="Migrating alerts for organisation"
Oct  8 05:47:26 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=ngalert.migration orgID=1 t=2025-10-08T09:47:26.496360871Z level=info msg="Alerts found to migrate" alerts=0
Oct  8 05:47:26 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=ngalert.migration t=2025-10-08T09:47:26.498626242Z level=info msg="Completed alerting migration"
Oct  8 05:47:26 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=ngalert.state.manager t=2025-10-08T09:47:26.521273527Z level=info msg="Running in alternative execution of Error/NoData mode"
Oct  8 05:47:26 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=infra.usagestats.collector t=2025-10-08T09:47:26.524170008Z level=info msg="registering usage stat providers" usageStatsProvidersLen=2
Oct  8 05:47:26 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=provisioning.datasources t=2025-10-08T09:47:26.525592623Z level=info msg="inserting datasource from configuration" name=Loki uid=P8E80F9AEF21F6940
Oct  8 05:47:26 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=provisioning.alerting t=2025-10-08T09:47:26.542124315Z level=info msg="starting to provision alerting"
Oct  8 05:47:26 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=provisioning.alerting t=2025-10-08T09:47:26.542146295Z level=info msg="finished to provision alerting"
Oct  8 05:47:26 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=grafanaStorageLogger t=2025-10-08T09:47:26.54230398Z level=info msg="Storage starting"
Oct  8 05:47:26 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=ngalert.state.manager t=2025-10-08T09:47:26.542441985Z level=info msg="Warming state cache for startup"
Oct  8 05:47:26 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=ngalert.multiorg.alertmanager t=2025-10-08T09:47:26.542730143Z level=info msg="Starting MultiOrg Alertmanager"
Oct  8 05:47:26 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=http.server t=2025-10-08T09:47:26.54547192Z level=info msg="HTTP Server TLS settings" MinTLSVersion=TLS1.2 configuredciphers=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA
Oct  8 05:47:26 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=http.server t=2025-10-08T09:47:26.545774179Z level=info msg="HTTP Server Listen" address=192.168.122.100:3000 protocol=https subUrl= socket=
Oct  8 05:47:26 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=ngalert.state.manager t=2025-10-08T09:47:26.571026476Z level=info msg="State cache has been initialized" states=0 duration=28.583521ms
Oct  8 05:47:26 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=ngalert.scheduler t=2025-10-08T09:47:26.571080917Z level=info msg="Starting scheduler" tickInterval=10s maxAttempts=1
Oct  8 05:47:26 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=ticker t=2025-10-08T09:47:26.571129409Z level=info msg=starting first_tick=2025-10-08T09:47:30Z
Oct  8 05:47:26 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=provisioning.dashboard t=2025-10-08T09:47:26.602572781Z level=info msg="starting to provision dashboards"
Oct  8 05:47:26 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=plugins.update.checker t=2025-10-08T09:47:26.633367782Z level=info msg="Update check succeeded" duration=90.012019ms
Oct  8 05:47:26 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=grafana.update.checker t=2025-10-08T09:47:26.635334944Z level=info msg="Update check succeeded" duration=92.234189ms
Oct  8 05:47:26 np0005475493 ceph-mon[73572]: Deploying daemon haproxy.rgw.default.compute-2.frbwni on compute-2
Oct  8 05:47:26 np0005475493 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pgp_num_actual", "val": "32"}]': finished
Oct  8 05:47:26 np0005475493 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Oct  8 05:47:26 np0005475493 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Oct  8 05:47:26 np0005475493 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Oct  8 05:47:26 np0005475493 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Oct  8 05:47:26 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=provisioning.dashboard t=2025-10-08T09:47:26.828867099Z level=info msg="finished to provision dashboards"
Oct  8 05:47:26 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:47:26 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f66280025c0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:47:26 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:47:26 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:47:26 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:47:26.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:47:27 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=grafana-apiserver t=2025-10-08T09:47:27.015629461Z level=info msg="Adding GroupVersion playlist.grafana.app v0alpha1 to ResourceManager"
Oct  8 05:47:27 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=grafana-apiserver t=2025-10-08T09:47:27.016029303Z level=info msg="Adding GroupVersion featuretoggle.grafana.app v0alpha1 to ResourceManager"
Oct  8 05:47:27 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:47:27 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:47:27 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:47:27.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:47:27 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct  8 05:47:27 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:47:27 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct  8 05:47:27 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e59 do_prune osdmap full prune enabled
Oct  8 05:47:27 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:47:27 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v52: 353 pgs: 353 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Oct  8 05:47:27 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"} v 0)
Oct  8 05:47:27 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]: dispatch
Oct  8 05:47:27 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0)
Oct  8 05:47:27 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e60 e60: 3 total, 3 up, 3 in
Oct  8 05:47:27 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e60: 3 total, 3 up, 3 in
Oct  8 05:47:27 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 60 pg[12.10( v 58'65 lc 51'45 (0'0,58'65] local-lis/les=59/60 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=59) [1] r=0 lpr=59 pi=[57,59)/1 crt=58'65 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:47:27 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:47:27 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6614003f10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:47:27 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 60 pg[10.15( v 58'57 lc 58'56 (0'0,58'57] local-lis/les=59/60 n=1 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=59) [1] r=0 lpr=59 pi=[56,59)/1 crt=58'57 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:47:27 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 60 pg[10.14( v 58'57 lc 58'56 (0'0,58'57] local-lis/les=59/60 n=1 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=59) [1] r=0 lpr=59 pi=[56,59)/1 crt=58'57 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:47:27 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 60 pg[10.13( v 41'48 (0'0,41'48] local-lis/les=59/60 n=0 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=59) [1] r=0 lpr=59 pi=[56,59)/1 crt=41'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:47:27 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 60 pg[12.6( v 51'62 lc 51'44 (0'0,51'62] local-lis/les=59/60 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=59) [1] r=0 lpr=59 pi=[57,59)/1 crt=51'62 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:47:27 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 60 pg[12.b( v 51'62 (0'0,51'62] local-lis/les=59/60 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=59) [1] r=0 lpr=59 pi=[57,59)/1 crt=51'62 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:47:27 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 60 pg[12.12( v 51'62 (0'0,51'62] local-lis/les=59/60 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=59) [1] r=0 lpr=59 pi=[57,59)/1 crt=51'62 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:47:27 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 60 pg[12.c( v 51'62 (0'0,51'62] local-lis/les=59/60 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=59) [1] r=0 lpr=59 pi=[57,59)/1 crt=51'62 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:47:27 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 60 pg[10.8( v 41'48 (0'0,41'48] local-lis/les=59/60 n=1 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=59) [1] r=0 lpr=59 pi=[56,59)/1 crt=41'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:47:27 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 60 pg[12.e( v 51'62 (0'0,51'62] local-lis/les=59/60 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=59) [1] r=0 lpr=59 pi=[57,59)/1 crt=51'62 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:47:27 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 60 pg[12.8( v 51'62 (0'0,51'62] local-lis/les=59/60 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=59) [1] r=0 lpr=59 pi=[57,59)/1 crt=51'62 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:47:27 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 60 pg[12.a( v 51'62 lc 0'0 (0'0,51'62] local-lis/les=59/60 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=59) [1] r=0 lpr=59 pi=[57,59)/1 crt=51'62 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:47:27 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 60 pg[10.2( v 41'48 (0'0,41'48] local-lis/les=59/60 n=0 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=59) [1] r=0 lpr=59 pi=[56,59)/1 crt=41'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:47:27 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 60 pg[10.5( v 41'48 (0'0,41'48] local-lis/les=59/60 n=1 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=59) [1] r=0 lpr=59 pi=[56,59)/1 crt=41'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:47:27 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 60 pg[10.19( v 41'48 (0'0,41'48] local-lis/les=59/60 n=0 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=59) [1] r=0 lpr=59 pi=[56,59)/1 crt=41'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:47:27 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 60 pg[10.18( v 41'48 (0'0,41'48] local-lis/les=59/60 n=0 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=59) [1] r=0 lpr=59 pi=[56,59)/1 crt=41'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:47:27 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 60 pg[10.1b( v 41'48 (0'0,41'48] local-lis/les=59/60 n=0 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=59) [1] r=0 lpr=59 pi=[56,59)/1 crt=41'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:47:27 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 60 pg[12.1c( v 51'62 (0'0,51'62] local-lis/les=59/60 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=59) [1] r=0 lpr=59 pi=[57,59)/1 crt=51'62 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:47:27 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 60 pg[12.19( v 51'62 (0'0,51'62] local-lis/les=59/60 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=59) [1] r=0 lpr=59 pi=[57,59)/1 crt=51'62 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:47:27 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:47:27 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ingress.rgw.default/keepalived_password}] v 0)
Oct  8 05:47:27 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:47:27 np0005475493 ceph-mgr[73869]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Oct  8 05:47:27 np0005475493 ceph-mgr[73869]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Oct  8 05:47:27 np0005475493 ceph-mgr[73869]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Oct  8 05:47:27 np0005475493 ceph-mgr[73869]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Oct  8 05:47:27 np0005475493 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Deploying daemon keepalived.rgw.default.compute-0.bphuep on compute-0
Oct  8 05:47:27 np0005475493 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Deploying daemon keepalived.rgw.default.compute-0.bphuep on compute-0
Oct  8 05:47:27 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:47:27 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f66000016a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:47:27 np0005475493 podman[98640]: 2025-10-08 09:47:27.865772437 +0000 UTC m=+0.067581753 container create 05fdabed716e35df2f8ffc4a9ab98d0571bb28422988baebe1a49a627aec1836 (image=quay.io/ceph/keepalived:2.2.4, name=elated_bartik, io.openshift.tags=Ceph keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, version=2.2.4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.buildah.version=1.28.2, io.openshift.expose-services=, vcs-type=git, summary=Provides keepalived on RHEL 9 for Ceph., distribution-scope=public, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Keepalived on RHEL 9, release=1793, build-date=2023-02-22T09:23:20, vendor=Red Hat, Inc., description=keepalived for Ceph, com.redhat.component=keepalived-container)
Oct  8 05:47:27 np0005475493 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:47:27 np0005475493 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:47:27 np0005475493 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]: dispatch
Oct  8 05:47:27 np0005475493 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:47:27 np0005475493 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:47:27 np0005475493 podman[98640]: 2025-10-08 09:47:27.82085408 +0000 UTC m=+0.022663426 image pull 4a3a1ff181d97c6dcfa9138ad76eb99fa2c1e840298461d5a7a56133bc05b9a2 quay.io/ceph/keepalived:2.2.4
Oct  8 05:47:27 np0005475493 systemd[1]: Started libpod-conmon-05fdabed716e35df2f8ffc4a9ab98d0571bb28422988baebe1a49a627aec1836.scope.
Oct  8 05:47:27 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:47:28 np0005475493 podman[98640]: 2025-10-08 09:47:28.028623324 +0000 UTC m=+0.230432740 container init 05fdabed716e35df2f8ffc4a9ab98d0571bb28422988baebe1a49a627aec1836 (image=quay.io/ceph/keepalived:2.2.4, name=elated_bartik, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, name=keepalived, summary=Provides keepalived on RHEL 9 for Ceph., distribution-scope=public, com.redhat.component=keepalived-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.buildah.version=1.28.2, build-date=2023-02-22T09:23:20, io.openshift.expose-services=, io.k8s.display-name=Keepalived on RHEL 9, vcs-type=git, architecture=x86_64, io.openshift.tags=Ceph keepalived, version=2.2.4, description=keepalived for Ceph, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1793, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc.)
Oct  8 05:47:28 np0005475493 podman[98640]: 2025-10-08 09:47:28.04496084 +0000 UTC m=+0.246770166 container start 05fdabed716e35df2f8ffc4a9ab98d0571bb28422988baebe1a49a627aec1836 (image=quay.io/ceph/keepalived:2.2.4, name=elated_bartik, description=keepalived for Ceph, io.openshift.tags=Ceph keepalived, io.k8s.display-name=Keepalived on RHEL 9, release=1793, vcs-type=git, name=keepalived, summary=Provides keepalived on RHEL 9 for Ceph., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, version=2.2.4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vendor=Red Hat, Inc., io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, architecture=x86_64, io.buildah.version=1.28.2, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=keepalived-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2023-02-22T09:23:20)
Oct  8 05:47:28 np0005475493 elated_bartik[98656]: 0 0
Oct  8 05:47:28 np0005475493 systemd[1]: libpod-05fdabed716e35df2f8ffc4a9ab98d0571bb28422988baebe1a49a627aec1836.scope: Deactivated successfully.
Oct  8 05:47:28 np0005475493 podman[98640]: 2025-10-08 09:47:28.093177281 +0000 UTC m=+0.294986677 container attach 05fdabed716e35df2f8ffc4a9ab98d0571bb28422988baebe1a49a627aec1836 (image=quay.io/ceph/keepalived:2.2.4, name=elated_bartik, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, release=1793, vcs-type=git, io.openshift.tags=Ceph keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, build-date=2023-02-22T09:23:20, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, com.redhat.component=keepalived-container, description=keepalived for Ceph, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=keepalived, version=2.2.4, io.buildah.version=1.28.2, io.openshift.expose-services=, io.k8s.display-name=Keepalived on RHEL 9, summary=Provides keepalived on RHEL 9 for Ceph., architecture=x86_64)
Oct  8 05:47:28 np0005475493 podman[98640]: 2025-10-08 09:47:28.093978805 +0000 UTC m=+0.295788161 container died 05fdabed716e35df2f8ffc4a9ab98d0571bb28422988baebe1a49a627aec1836 (image=quay.io/ceph/keepalived:2.2.4, name=elated_bartik, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides keepalived on RHEL 9 for Ceph., io.openshift.tags=Ceph keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, version=2.2.4, io.buildah.version=1.28.2, vcs-type=git, com.redhat.component=keepalived-container, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.openshift.expose-services=, name=keepalived, build-date=2023-02-22T09:23:20, vendor=Red Hat, Inc., io.k8s.display-name=Keepalived on RHEL 9, release=1793, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, description=keepalived for Ceph)
Oct  8 05:47:28 np0005475493 systemd[1]: var-lib-containers-storage-overlay-3ee6b2075cb72eba111cf8b5dbede92b9c61f969f4a181378fd1ac67fa70d18f-merged.mount: Deactivated successfully.
Oct  8 05:47:28 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e60 do_prune osdmap full prune enabled
Oct  8 05:47:28 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Oct  8 05:47:28 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e61 e61: 3 total, 3 up, 3 in
Oct  8 05:47:28 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e61: 3 total, 3 up, 3 in
Oct  8 05:47:28 np0005475493 ceph-mgr[73869]: [progress INFO root] Writing back 23 completed events
Oct  8 05:47:28 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct  8 05:47:28 np0005475493 podman[98640]: 2025-10-08 09:47:28.332306743 +0000 UTC m=+0.534116059 container remove 05fdabed716e35df2f8ffc4a9ab98d0571bb28422988baebe1a49a627aec1836 (image=quay.io/ceph/keepalived:2.2.4, name=elated_bartik, io.openshift.expose-services=, vendor=Red Hat, Inc., io.buildah.version=1.28.2, version=2.2.4, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=Ceph keepalived, summary=Provides keepalived on RHEL 9 for Ceph., distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vcs-type=git, name=keepalived, release=1793, architecture=x86_64, com.redhat.component=keepalived-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2023-02-22T09:23:20, description=keepalived for Ceph, io.k8s.display-name=Keepalived on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Oct  8 05:47:28 np0005475493 systemd[1]: libpod-conmon-05fdabed716e35df2f8ffc4a9ab98d0571bb28422988baebe1a49a627aec1836.scope: Deactivated successfully.
Oct  8 05:47:28 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e61 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  8 05:47:28 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:47:28 np0005475493 ceph-mgr[73869]: [progress INFO root] Completed event b9fe5884-05d2-4569-bb9a-538e8e55db00 (Global Recovery Event) in 10 seconds
Oct  8 05:47:28 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 12.a scrub starts
Oct  8 05:47:28 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 12.a scrub ok
Oct  8 05:47:28 np0005475493 systemd[1]: Reloading.
Oct  8 05:47:28 np0005475493 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  8 05:47:28 np0005475493 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  8 05:47:28 np0005475493 ceph-mon[73572]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Oct  8 05:47:28 np0005475493 ceph-mon[73572]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Oct  8 05:47:28 np0005475493 ceph-mon[73572]: Deploying daemon keepalived.rgw.default.compute-0.bphuep on compute-0
Oct  8 05:47:28 np0005475493 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Oct  8 05:47:28 np0005475493 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:47:28 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:47:28 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f80018b0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:47:28 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:47:28 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:47:28 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:47:28.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:47:28 np0005475493 systemd[1]: Reloading.
Oct  8 05:47:29 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:47:29 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:47:29 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:47:29.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:47:29 np0005475493 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  8 05:47:29 np0005475493 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  8 05:47:29 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v55: 353 pgs: 353 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Oct  8 05:47:29 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"} v 0)
Oct  8 05:47:29 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Oct  8 05:47:29 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:47:29 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f66280025c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:47:29 np0005475493 systemd[1]: Starting Ceph keepalived.rgw.default.compute-0.bphuep for 787292cc-8154-50c4-9e00-e9be3e817149...
Oct  8 05:47:29 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 9.11 scrub starts
Oct  8 05:47:29 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 9.11 scrub ok
Oct  8 05:47:29 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:47:29 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f66280025c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:47:29 np0005475493 podman[98809]: 2025-10-08 09:47:29.540705561 +0000 UTC m=+0.021203061 image pull 4a3a1ff181d97c6dcfa9138ad76eb99fa2c1e840298461d5a7a56133bc05b9a2 quay.io/ceph/keepalived:2.2.4
Oct  8 05:47:29 np0005475493 podman[98809]: 2025-10-08 09:47:29.641320174 +0000 UTC m=+0.121817644 container create ad8a1348c81c698896af7b0b783bb40335d664a7a01f20c114dc63c98a072845 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-keepalived-rgw-default-compute-0-bphuep, vcs-type=git, summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.component=keepalived-container, vendor=Red Hat, Inc., release=1793, io.buildah.version=1.28.2, name=keepalived, build-date=2023-02-22T09:23:20, description=keepalived for Ceph, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=Ceph keepalived, version=2.2.4, distribution-scope=public, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, architecture=x86_64)
Oct  8 05:47:29 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b80a31699ad10f5fa3ddd7e83a9f7352a11464bb70509c10da2b40cb8d83f11c/merged/etc/keepalived/keepalived.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 05:47:29 np0005475493 podman[98809]: 2025-10-08 09:47:29.812789082 +0000 UTC m=+0.293286583 container init ad8a1348c81c698896af7b0b783bb40335d664a7a01f20c114dc63c98a072845 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-keepalived-rgw-default-compute-0-bphuep, io.openshift.tags=Ceph keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.buildah.version=1.28.2, architecture=x86_64, build-date=2023-02-22T09:23:20, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=keepalived-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-type=git, summary=Provides keepalived on RHEL 9 for Ceph., version=2.2.4, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., release=1793, name=keepalived, description=keepalived for Ceph, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.expose-services=)
Oct  8 05:47:29 np0005475493 podman[98809]: 2025-10-08 09:47:29.817965187 +0000 UTC m=+0.298462657 container start ad8a1348c81c698896af7b0b783bb40335d664a7a01f20c114dc63c98a072845 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-keepalived-rgw-default-compute-0-bphuep, summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.component=keepalived-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, architecture=x86_64, distribution-scope=public, io.k8s.display-name=Keepalived on RHEL 9, name=keepalived, build-date=2023-02-22T09:23:20, description=keepalived for Ceph, version=2.2.4, vcs-type=git, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=Ceph keepalived, vendor=Red Hat, Inc., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1793, io.buildah.version=1.28.2, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Oct  8 05:47:29 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-keepalived-rgw-default-compute-0-bphuep[98824]: Wed Oct  8 09:47:29 2025: Starting Keepalived v2.2.4 (08/21,2021)
Oct  8 05:47:29 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-keepalived-rgw-default-compute-0-bphuep[98824]: Wed Oct  8 09:47:29 2025: Running on Linux 5.14.0-620.el9.x86_64 #1 SMP PREEMPT_DYNAMIC Fri Sep 26 01:13:23 UTC 2025 (built for Linux 5.14.0)
Oct  8 05:47:29 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-keepalived-rgw-default-compute-0-bphuep[98824]: Wed Oct  8 09:47:29 2025: Command line: '/usr/sbin/keepalived' '-n' '-l' '-f' '/etc/keepalived/keepalived.conf'
Oct  8 05:47:29 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-keepalived-rgw-default-compute-0-bphuep[98824]: Wed Oct  8 09:47:29 2025: Configuration file /etc/keepalived/keepalived.conf
Oct  8 05:47:29 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-keepalived-rgw-default-compute-0-bphuep[98824]: Wed Oct  8 09:47:29 2025: Failed to bind to process monitoring socket - errno 98 - Address already in use
Oct  8 05:47:29 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-keepalived-rgw-default-compute-0-bphuep[98824]: Wed Oct  8 09:47:29 2025: NOTICE: setting config option max_auto_priority should result in better keepalived performance
Oct  8 05:47:29 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-keepalived-rgw-default-compute-0-bphuep[98824]: Wed Oct  8 09:47:29 2025: Starting VRRP child process, pid=4
Oct  8 05:47:29 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-keepalived-rgw-default-compute-0-bphuep[98824]: Wed Oct  8 09:47:29 2025: Startup complete
Oct  8 05:47:29 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-keepalived-nfs-cephfs-compute-0-ekerbw[96948]: Wed Oct  8 09:47:29 2025: (VI_0) Entering BACKUP STATE
Oct  8 05:47:29 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-keepalived-rgw-default-compute-0-bphuep[98824]: Wed Oct  8 09:47:29 2025: (VI_0) Entering BACKUP STATE (init)
Oct  8 05:47:29 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-keepalived-rgw-default-compute-0-bphuep[98824]: Wed Oct  8 09:47:29 2025: VRRP_Script(check_backend) succeeded
Oct  8 05:47:29 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e61 do_prune osdmap full prune enabled
Oct  8 05:47:30 np0005475493 bash[98809]: ad8a1348c81c698896af7b0b783bb40335d664a7a01f20c114dc63c98a072845
Oct  8 05:47:30 np0005475493 systemd[1]: Started Ceph keepalived.rgw.default.compute-0.bphuep for 787292cc-8154-50c4-9e00-e9be3e817149.
Oct  8 05:47:30 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Oct  8 05:47:30 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e62 e62: 3 total, 3 up, 3 in
Oct  8 05:47:30 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e62: 3 total, 3 up, 3 in
Oct  8 05:47:30 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 62 pg[9.3( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=6 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=62 pruub=12.025028229s) [2] r=-1 lpr=62 pi=[54,62)/1 crt=45'1018 lcod 0'0 mlcod 0'0 active pruub 190.314880371s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:47:30 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 62 pg[9.3( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=6 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=62 pruub=12.024970055s) [2] r=-1 lpr=62 pi=[54,62)/1 crt=45'1018 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 190.314880371s@ mbc={}] state<Start>: transitioning to Stray
Oct  8 05:47:30 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 62 pg[9.b( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=6 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=62 pruub=12.025154114s) [2] r=-1 lpr=62 pi=[54,62)/1 crt=45'1018 lcod 0'0 mlcod 0'0 active pruub 190.315155029s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:47:30 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 62 pg[9.b( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=6 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=62 pruub=12.025109291s) [2] r=-1 lpr=62 pi=[54,62)/1 crt=45'1018 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 190.315155029s@ mbc={}] state<Start>: transitioning to Stray
Oct  8 05:47:30 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 62 pg[9.17( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=3 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=62 pruub=12.024371147s) [2] r=-1 lpr=62 pi=[54,62)/1 crt=45'1018 lcod 0'0 mlcod 0'0 active pruub 190.314498901s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:47:30 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 62 pg[9.f( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=6 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=62 pruub=12.025084496s) [2] r=-1 lpr=62 pi=[54,62)/1 crt=45'1018 lcod 0'0 mlcod 0'0 active pruub 190.315231323s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:47:30 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 62 pg[9.f( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=6 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=62 pruub=12.025041580s) [2] r=-1 lpr=62 pi=[54,62)/1 crt=45'1018 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 190.315231323s@ mbc={}] state<Start>: transitioning to Stray
Oct  8 05:47:30 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 62 pg[9.17( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=3 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=62 pruub=12.024296761s) [2] r=-1 lpr=62 pi=[54,62)/1 crt=45'1018 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 190.314498901s@ mbc={}] state<Start>: transitioning to Stray
Oct  8 05:47:30 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 62 pg[9.7( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=6 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=62 pruub=12.026891708s) [2] r=-1 lpr=62 pi=[54,62)/1 crt=45'1018 lcod 0'0 mlcod 0'0 active pruub 190.317626953s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:47:30 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 62 pg[9.7( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=6 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=62 pruub=12.026872635s) [2] r=-1 lpr=62 pi=[54,62)/1 crt=45'1018 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 190.317626953s@ mbc={}] state<Start>: transitioning to Stray
Oct  8 05:47:30 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 62 pg[9.1b( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=5 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=62 pruub=12.026808739s) [2] r=-1 lpr=62 pi=[54,62)/1 crt=45'1018 lcod 0'0 mlcod 0'0 active pruub 190.317749023s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:47:30 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 62 pg[9.1b( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=5 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=62 pruub=12.026784897s) [2] r=-1 lpr=62 pi=[54,62)/1 crt=45'1018 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 190.317749023s@ mbc={}] state<Start>: transitioning to Stray
Oct  8 05:47:30 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 62 pg[9.1f( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=5 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=62 pruub=12.026291847s) [2] r=-1 lpr=62 pi=[54,62)/1 crt=45'1018 lcod 0'0 mlcod 0'0 active pruub 190.318008423s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:47:30 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 62 pg[9.13( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=5 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=62 pruub=12.026341438s) [2] r=-1 lpr=62 pi=[54,62)/1 crt=45'1018 lcod 0'0 mlcod 0'0 active pruub 190.318145752s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:47:30 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 62 pg[9.13( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=5 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=62 pruub=12.026314735s) [2] r=-1 lpr=62 pi=[54,62)/1 crt=45'1018 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 190.318145752s@ mbc={}] state<Start>: transitioning to Stray
Oct  8 05:47:30 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 62 pg[9.1f( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=5 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=62 pruub=12.026110649s) [2] r=-1 lpr=62 pi=[54,62)/1 crt=45'1018 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 190.318008423s@ mbc={}] state<Start>: transitioning to Stray
Oct  8 05:47:30 np0005475493 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Oct  8 05:47:30 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  8 05:47:30 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:47:30 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  8 05:47:30 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:47:30 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0)
Oct  8 05:47:30 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:47:30 np0005475493 ceph-mgr[73869]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Oct  8 05:47:30 np0005475493 ceph-mgr[73869]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Oct  8 05:47:30 np0005475493 ceph-mgr[73869]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Oct  8 05:47:30 np0005475493 ceph-mgr[73869]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Oct  8 05:47:30 np0005475493 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Deploying daemon keepalived.rgw.default.compute-2.jvgfkf on compute-2
Oct  8 05:47:30 np0005475493 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Deploying daemon keepalived.rgw.default.compute-2.jvgfkf on compute-2
Oct  8 05:47:30 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 9.10 scrub starts
Oct  8 05:47:30 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 9.10 scrub ok
Oct  8 05:47:30 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-keepalived-nfs-cephfs-compute-0-ekerbw[96948]: Wed Oct  8 09:47:30 2025: (VI_0) Entering MASTER STATE
Oct  8 05:47:30 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:47:30 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f66000016a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:47:30 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:47:30 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 05:47:30 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:47:30.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 05:47:31 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:47:31 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 05:47:31 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:47:31.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 05:47:31 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v57: 353 pgs: 353 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 324 B/s, 0 keys/s, 2 objects/s recovering
Oct  8 05:47:31 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"} v 0)
Oct  8 05:47:31 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]: dispatch
Oct  8 05:47:31 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e62 do_prune osdmap full prune enabled
Oct  8 05:47:31 np0005475493 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Oct  8 05:47:31 np0005475493 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:47:31 np0005475493 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:47:31 np0005475493 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:47:31 np0005475493 ceph-mon[73572]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Oct  8 05:47:31 np0005475493 ceph-mon[73572]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Oct  8 05:47:31 np0005475493 ceph-mon[73572]: Deploying daemon keepalived.rgw.default.compute-2.jvgfkf on compute-2
Oct  8 05:47:31 np0005475493 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]: dispatch
Oct  8 05:47:31 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:47:31 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f80018b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:47:31 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Oct  8 05:47:31 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e63 e63: 3 total, 3 up, 3 in
Oct  8 05:47:31 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e63: 3 total, 3 up, 3 in
Oct  8 05:47:31 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 63 pg[9.1f( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=5 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=63) [2]/[1] r=0 lpr=63 pi=[54,63)/1 crt=45'1018 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:47:31 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 63 pg[9.f( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=6 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=63) [2]/[1] r=0 lpr=63 pi=[54,63)/1 crt=45'1018 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:47:31 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 63 pg[9.1f( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=5 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=63) [2]/[1] r=0 lpr=63 pi=[54,63)/1 crt=45'1018 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  8 05:47:31 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 63 pg[9.1b( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=5 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=63) [2]/[1] r=0 lpr=63 pi=[54,63)/1 crt=45'1018 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:47:31 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 63 pg[9.13( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=5 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=63) [2]/[1] r=0 lpr=63 pi=[54,63)/1 crt=45'1018 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:47:31 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 63 pg[9.f( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=6 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=63) [2]/[1] r=0 lpr=63 pi=[54,63)/1 crt=45'1018 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  8 05:47:31 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 63 pg[9.1b( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=5 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=63) [2]/[1] r=0 lpr=63 pi=[54,63)/1 crt=45'1018 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  8 05:47:31 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 63 pg[9.13( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=5 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=63) [2]/[1] r=0 lpr=63 pi=[54,63)/1 crt=45'1018 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  8 05:47:31 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 63 pg[9.b( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=6 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=63) [2]/[1] r=0 lpr=63 pi=[54,63)/1 crt=45'1018 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:47:31 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 63 pg[9.3( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=6 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=63) [2]/[1] r=0 lpr=63 pi=[54,63)/1 crt=45'1018 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:47:31 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 63 pg[9.b( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=6 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=63) [2]/[1] r=0 lpr=63 pi=[54,63)/1 crt=45'1018 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  8 05:47:31 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 63 pg[9.3( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=6 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=63) [2]/[1] r=0 lpr=63 pi=[54,63)/1 crt=45'1018 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  8 05:47:31 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 63 pg[9.17( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=3 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=63) [2]/[1] r=0 lpr=63 pi=[54,63)/1 crt=45'1018 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:47:31 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 63 pg[9.17( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=3 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=63) [2]/[1] r=0 lpr=63 pi=[54,63)/1 crt=45'1018 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  8 05:47:31 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 63 pg[9.7( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=6 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=63) [2]/[1] r=0 lpr=63 pi=[54,63)/1 crt=45'1018 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:47:31 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 63 pg[9.7( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=6 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=63) [2]/[1] r=0 lpr=63 pi=[54,63)/1 crt=45'1018 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  8 05:47:31 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 9.2 scrub starts
Oct  8 05:47:31 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 9.2 scrub ok
Oct  8 05:47:31 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:47:31 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6614003f10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:47:32 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e63 do_prune osdmap full prune enabled
Oct  8 05:47:32 np0005475493 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Oct  8 05:47:32 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct  8 05:47:32 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e64 e64: 3 total, 3 up, 3 in
Oct  8 05:47:32 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e64: 3 total, 3 up, 3 in
Oct  8 05:47:32 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:47:32 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 64 pg[9.7( v 45'1018 (0'0,45'1018] local-lis/les=63/64 n=6 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=63) [2]/[1] async=[2] r=0 lpr=63 pi=[54,63)/1 crt=45'1018 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:47:32 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 64 pg[9.b( v 45'1018 (0'0,45'1018] local-lis/les=63/64 n=6 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=63) [2]/[1] async=[2] r=0 lpr=63 pi=[54,63)/1 crt=45'1018 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:47:32 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 64 pg[9.1b( v 45'1018 (0'0,45'1018] local-lis/les=63/64 n=5 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=63) [2]/[1] async=[2] r=0 lpr=63 pi=[54,63)/1 crt=45'1018 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:47:32 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 64 pg[9.17( v 45'1018 (0'0,45'1018] local-lis/les=63/64 n=3 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=63) [2]/[1] async=[2] r=0 lpr=63 pi=[54,63)/1 crt=45'1018 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:47:32 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 64 pg[9.3( v 45'1018 (0'0,45'1018] local-lis/les=63/64 n=6 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=63) [2]/[1] async=[2] r=0 lpr=63 pi=[54,63)/1 crt=45'1018 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:47:32 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 64 pg[9.1f( v 45'1018 (0'0,45'1018] local-lis/les=63/64 n=5 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=63) [2]/[1] async=[2] r=0 lpr=63 pi=[54,63)/1 crt=45'1018 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:47:32 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct  8 05:47:32 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 64 pg[9.f( v 45'1018 (0'0,45'1018] local-lis/les=63/64 n=6 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=63) [2]/[1] async=[2] r=0 lpr=63 pi=[54,63)/1 crt=45'1018 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:47:32 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 64 pg[9.13( v 45'1018 (0'0,45'1018] local-lis/les=63/64 n=5 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=63) [2]/[1] async=[2] r=0 lpr=63 pi=[54,63)/1 crt=45'1018 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:47:32 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:47:32 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0)
Oct  8 05:47:32 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:47:32 np0005475493 ceph-mgr[73869]: [progress INFO root] complete: finished ev eb90faac-447e-4af6-82aa-528626b39460 (Updating ingress.rgw.default deployment (+4 -> 4))
Oct  8 05:47:32 np0005475493 ceph-mgr[73869]: [progress INFO root] Completed event eb90faac-447e-4af6-82aa-528626b39460 (Updating ingress.rgw.default deployment (+4 -> 4)) in 9 seconds
Oct  8 05:47:32 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0)
Oct  8 05:47:32 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:47:32 np0005475493 ceph-mgr[73869]: [progress INFO root] update: starting ev 0bf79cd8-eb11-4f4f-80b2-14468a3c828d (Updating prometheus deployment (+1 -> 1))
Oct  8 05:47:32 np0005475493 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Deploying daemon prometheus.compute-0 on compute-0
Oct  8 05:47:32 np0005475493 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Deploying daemon prometheus.compute-0 on compute-0
Oct  8 05:47:32 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:47:32 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f66280025c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:47:32 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:47:32 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:47:32 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:47:32.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:47:33 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:47:33 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:47:33 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:47:33.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:47:33 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v60: 353 pgs: 353 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 333 B/s, 0 keys/s, 3 objects/s recovering
Oct  8 05:47:33 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"} v 0)
Oct  8 05:47:33 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]: dispatch
Oct  8 05:47:33 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:47:33 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6600002b10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:47:33 np0005475493 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:47:33 np0005475493 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:47:33 np0005475493 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:47:33 np0005475493 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:47:33 np0005475493 ceph-mon[73572]: Deploying daemon prometheus.compute-0 on compute-0
Oct  8 05:47:33 np0005475493 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]: dispatch
Oct  8 05:47:33 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e64 do_prune osdmap full prune enabled
Oct  8 05:47:33 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Oct  8 05:47:33 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e65 e65: 3 total, 3 up, 3 in
Oct  8 05:47:33 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e65: 3 total, 3 up, 3 in
Oct  8 05:47:33 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 65 pg[9.15( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=4 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=65 pruub=8.873750687s) [2] r=-1 lpr=65 pi=[54,65)/1 crt=45'1018 lcod 0'0 mlcod 0'0 active pruub 190.311401367s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:47:33 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 65 pg[9.15( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=4 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=65 pruub=8.873694420s) [2] r=-1 lpr=65 pi=[54,65)/1 crt=45'1018 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 190.311401367s@ mbc={}] state<Start>: transitioning to Stray
Oct  8 05:47:33 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 65 pg[9.17( v 45'1018 (0'0,45'1018] local-lis/les=63/64 n=3 ec=54/38 lis/c=63/54 les/c/f=64/56/0 sis=65 pruub=14.941551208s) [2] async=[2] r=-1 lpr=65 pi=[54,65)/1 crt=45'1018 lcod 0'0 mlcod 0'0 active pruub 196.379364014s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:47:33 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 65 pg[9.3( v 45'1018 (0'0,45'1018] local-lis/les=63/64 n=6 ec=54/38 lis/c=63/54 les/c/f=64/56/0 sis=65 pruub=14.941587448s) [2] async=[2] r=-1 lpr=65 pi=[54,65)/1 crt=45'1018 lcod 0'0 mlcod 0'0 active pruub 196.379425049s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:47:33 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 65 pg[9.17( v 45'1018 (0'0,45'1018] local-lis/les=63/64 n=3 ec=54/38 lis/c=63/54 les/c/f=64/56/0 sis=65 pruub=14.941474915s) [2] r=-1 lpr=65 pi=[54,65)/1 crt=45'1018 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 196.379364014s@ mbc={}] state<Start>: transitioning to Stray
Oct  8 05:47:33 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 65 pg[9.3( v 45'1018 (0'0,45'1018] local-lis/les=63/64 n=6 ec=54/38 lis/c=63/54 les/c/f=64/56/0 sis=65 pruub=14.941536903s) [2] r=-1 lpr=65 pi=[54,65)/1 crt=45'1018 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 196.379425049s@ mbc={}] state<Start>: transitioning to Stray
Oct  8 05:47:33 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 65 pg[9.b( v 45'1018 (0'0,45'1018] local-lis/les=63/64 n=6 ec=54/38 lis/c=63/54 les/c/f=64/56/0 sis=65 pruub=14.940853119s) [2] async=[2] r=-1 lpr=65 pi=[54,65)/1 crt=45'1018 lcod 0'0 mlcod 0'0 active pruub 196.379257202s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:47:33 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 65 pg[9.f( v 45'1018 (0'0,45'1018] local-lis/les=63/64 n=6 ec=54/38 lis/c=63/54 les/c/f=64/56/0 sis=65 pruub=14.944669724s) [2] async=[2] r=-1 lpr=65 pi=[54,65)/1 crt=45'1018 lcod 0'0 mlcod 0'0 active pruub 196.383163452s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:47:33 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 65 pg[9.b( v 45'1018 (0'0,45'1018] local-lis/les=63/64 n=6 ec=54/38 lis/c=63/54 les/c/f=64/56/0 sis=65 pruub=14.940762520s) [2] r=-1 lpr=65 pi=[54,65)/1 crt=45'1018 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 196.379257202s@ mbc={}] state<Start>: transitioning to Stray
Oct  8 05:47:33 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 65 pg[9.f( v 45'1018 (0'0,45'1018] local-lis/les=63/64 n=6 ec=54/38 lis/c=63/54 les/c/f=64/56/0 sis=65 pruub=14.944624901s) [2] r=-1 lpr=65 pi=[54,65)/1 crt=45'1018 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 196.383163452s@ mbc={}] state<Start>: transitioning to Stray
Oct  8 05:47:33 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 65 pg[9.d( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=6 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=65 pruub=8.876586914s) [2] r=-1 lpr=65 pi=[54,65)/1 crt=45'1018 lcod 0'0 mlcod 0'0 active pruub 190.315261841s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:47:33 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 65 pg[9.d( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=6 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=65 pruub=8.876568794s) [2] r=-1 lpr=65 pi=[54,65)/1 crt=45'1018 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 190.315261841s@ mbc={}] state<Start>: transitioning to Stray
Oct  8 05:47:33 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 65 pg[9.7( v 45'1018 (0'0,45'1018] local-lis/les=63/64 n=6 ec=54/38 lis/c=63/54 les/c/f=64/56/0 sis=65 pruub=14.940307617s) [2] async=[2] r=-1 lpr=65 pi=[54,65)/1 crt=45'1018 lcod 0'0 mlcod 0'0 active pruub 196.379257202s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:47:33 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 65 pg[9.7( v 45'1018 (0'0,45'1018] local-lis/les=63/64 n=6 ec=54/38 lis/c=63/54 les/c/f=64/56/0 sis=65 pruub=14.940267563s) [2] r=-1 lpr=65 pi=[54,65)/1 crt=45'1018 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 196.379257202s@ mbc={}] state<Start>: transitioning to Stray
Oct  8 05:47:33 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 65 pg[9.1b( v 45'1018 (0'0,45'1018] local-lis/les=63/64 n=5 ec=54/38 lis/c=63/54 les/c/f=64/56/0 sis=65 pruub=14.940160751s) [2] async=[2] r=-1 lpr=65 pi=[54,65)/1 crt=45'1018 lcod 0'0 mlcod 0'0 active pruub 196.379333496s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:47:33 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 65 pg[9.5( v 58'1021 (0'0,58'1021] local-lis/les=54/56 n=6 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=65 pruub=8.878764153s) [2] r=-1 lpr=65 pi=[54,65)/1 crt=45'1018 lcod 58'1020 mlcod 58'1020 active pruub 190.317901611s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:47:33 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 65 pg[9.1b( v 45'1018 (0'0,45'1018] local-lis/les=63/64 n=5 ec=54/38 lis/c=63/54 les/c/f=64/56/0 sis=65 pruub=14.940132141s) [2] r=-1 lpr=65 pi=[54,65)/1 crt=45'1018 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 196.379333496s@ mbc={}] state<Start>: transitioning to Stray
Oct  8 05:47:33 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 65 pg[9.5( v 58'1021 (0'0,58'1021] local-lis/les=54/56 n=6 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=65 pruub=8.878691673s) [2] r=-1 lpr=65 pi=[54,65)/1 crt=45'1018 lcod 58'1020 mlcod 0'0 unknown NOTIFY pruub 190.317901611s@ mbc={}] state<Start>: transitioning to Stray
Oct  8 05:47:33 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 65 pg[9.1f( v 45'1018 (0'0,45'1018] local-lis/les=63/64 n=5 ec=54/38 lis/c=63/54 les/c/f=64/56/0 sis=65 pruub=14.939930916s) [2] async=[2] r=-1 lpr=65 pi=[54,65)/1 crt=45'1018 lcod 0'0 mlcod 0'0 active pruub 196.379470825s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:47:33 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 65 pg[9.1d( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=5 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=65 pruub=8.878444672s) [2] r=-1 lpr=65 pi=[54,65)/1 crt=45'1018 lcod 0'0 mlcod 0'0 active pruub 190.318038940s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:47:33 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 65 pg[9.1f( v 45'1018 (0'0,45'1018] local-lis/les=63/64 n=5 ec=54/38 lis/c=63/54 les/c/f=64/56/0 sis=65 pruub=14.939888000s) [2] r=-1 lpr=65 pi=[54,65)/1 crt=45'1018 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 196.379470825s@ mbc={}] state<Start>: transitioning to Stray
Oct  8 05:47:33 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 65 pg[9.1d( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=5 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=65 pruub=8.878426552s) [2] r=-1 lpr=65 pi=[54,65)/1 crt=45'1018 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 190.318038940s@ mbc={}] state<Start>: transitioning to Stray
Oct  8 05:47:33 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 65 pg[9.13( v 45'1018 (0'0,45'1018] local-lis/les=63/64 n=5 ec=54/38 lis/c=63/54 les/c/f=64/56/0 sis=65 pruub=14.943156242s) [2] async=[2] r=-1 lpr=65 pi=[54,65)/1 crt=45'1018 lcod 0'0 mlcod 0'0 active pruub 196.383178711s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:47:33 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 65 pg[9.13( v 45'1018 (0'0,45'1018] local-lis/les=63/64 n=5 ec=54/38 lis/c=63/54 les/c/f=64/56/0 sis=65 pruub=14.943113327s) [2] r=-1 lpr=65 pi=[54,65)/1 crt=45'1018 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 196.383178711s@ mbc={}] state<Start>: transitioning to Stray
Oct  8 05:47:33 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  8 05:47:33 np0005475493 ceph-mon[73572]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #18. Immutable memtables: 0.
Oct  8 05:47:33 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-09:47:33.389910) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  8 05:47:33 np0005475493 ceph-mon[73572]: rocksdb: [db/flush_job.cc:856] [default] [JOB 3] Flushing memtable with next log file: 18
Oct  8 05:47:33 np0005475493 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759916853390021, "job": 3, "event": "flush_started", "num_memtables": 1, "num_entries": 7355, "num_deletes": 261, "total_data_size": 13706102, "memory_usage": 13998832, "flush_reason": "Manual Compaction"}
Oct  8 05:47:33 np0005475493 ceph-mon[73572]: rocksdb: [db/flush_job.cc:885] [default] [JOB 3] Level-0 flush table #19: started
Oct  8 05:47:33 np0005475493 ceph-mgr[73869]: [progress INFO root] Writing back 25 completed events
Oct  8 05:47:33 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct  8 05:47:33 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 11.0 scrub starts
Oct  8 05:47:33 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 11.0 scrub ok
Oct  8 05:47:33 np0005475493 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759916853452211, "cf_name": "default", "job": 3, "event": "table_file_creation", "file_number": 19, "file_size": 12234194, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 143, "largest_seqno": 7493, "table_properties": {"data_size": 12207538, "index_size": 16941, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8645, "raw_key_size": 83309, "raw_average_key_size": 24, "raw_value_size": 12141638, "raw_average_value_size": 3529, "num_data_blocks": 745, "num_entries": 3440, "num_filter_entries": 3440, "num_deletions": 261, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759916583, "oldest_key_time": 1759916583, "file_creation_time": 1759916853, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fe81d9b-468a-4413-adf1-4e4bd83383d4", "db_session_id": "KN4HYS7VUCE6V85JIQOU", "orig_file_number": 19, "seqno_to_time_mapping": "N/A"}}
Oct  8 05:47:33 np0005475493 ceph-mon[73572]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 3] Flush lasted 62367 microseconds, and 23865 cpu microseconds.
Oct  8 05:47:33 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-09:47:33.452277) [db/flush_job.cc:967] [default] [JOB 3] Level-0 flush table #19: 12234194 bytes OK
Oct  8 05:47:33 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-09:47:33.452301) [db/memtable_list.cc:519] [default] Level-0 commit table #19 started
Oct  8 05:47:33 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-09:47:33.454013) [db/memtable_list.cc:722] [default] Level-0 commit table #19: memtable #1 done
Oct  8 05:47:33 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-09:47:33.454056) EVENT_LOG_v1 {"time_micros": 1759916853454027, "job": 3, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [3, 0, 0, 0, 0, 0, 0], "immutable_memtables": 0}
Oct  8 05:47:33 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-09:47:33.454079) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: files[3 0 0 0 0 0 0] max score 0.75
Oct  8 05:47:33 np0005475493 ceph-mon[73572]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 3] Try to delete WAL files size 13672968, prev total WAL file size 13684429, number of live WAL files 2.
Oct  8 05:47:33 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-keepalived-rgw-default-compute-0-bphuep[98824]: Wed Oct  8 09:47:33 2025: (VI_0) Entering MASTER STATE
Oct  8 05:47:33 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:47:33 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6600002b10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:47:33 np0005475493 ceph-mon[73572]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000014.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  8 05:47:33 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-09:47:33.460106) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B760030' seq:72057594037927935, type:22 .. '6B7600323631' seq:0, type:0; will stop at (end)
Oct  8 05:47:33 np0005475493 ceph-mon[73572]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 4] Compacting 3@0 files to L6, score -1.00
Oct  8 05:47:33 np0005475493 ceph-mon[73572]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 3 Base level 0, inputs: [19(11MB) 13(58KB) 8(1944B)]
Oct  8 05:47:33 np0005475493 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759916853460194, "job": 4, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [19, 13, 8], "score": -1, "input_data_size": 12295765, "oldest_snapshot_seqno": -1}
Oct  8 05:47:33 np0005475493 ceph-mon[73572]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 4] Generated table #20: 3253 keys, 12277890 bytes, temperature: kUnknown
Oct  8 05:47:33 np0005475493 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759916853517308, "cf_name": "default", "job": 4, "event": "table_file_creation", "file_number": 20, "file_size": 12277890, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12251433, "index_size": 17195, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8197, "raw_key_size": 82126, "raw_average_key_size": 25, "raw_value_size": 12187076, "raw_average_value_size": 3746, "num_data_blocks": 756, "num_entries": 3253, "num_filter_entries": 3253, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759916581, "oldest_key_time": 0, "file_creation_time": 1759916853, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fe81d9b-468a-4413-adf1-4e4bd83383d4", "db_session_id": "KN4HYS7VUCE6V85JIQOU", "orig_file_number": 20, "seqno_to_time_mapping": "N/A"}}
Oct  8 05:47:33 np0005475493 ceph-mon[73572]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  8 05:47:33 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-09:47:33.517689) [db/compaction/compaction_job.cc:1663] [default] [JOB 4] Compacted 3@0 files to L6 => 12277890 bytes
Oct  8 05:47:33 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-09:47:33.519259) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 214.6 rd, 214.3 wr, level 6, files in(3, 0) out(1 +0 blob) MB in(11.7, 0.0 +0.0 blob) out(11.7 +0.0 blob), read-write-amplify(2.0) write-amplify(1.0) OK, records in: 3550, records dropped: 297 output_compression: NoCompression
Oct  8 05:47:33 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-09:47:33.519290) EVENT_LOG_v1 {"time_micros": 1759916853519277, "job": 4, "event": "compaction_finished", "compaction_time_micros": 57296, "compaction_time_cpu_micros": 23922, "output_level": 6, "num_output_files": 1, "total_output_size": 12277890, "num_input_records": 3550, "num_output_records": 3253, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  8 05:47:33 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:47:33 np0005475493 ceph-mon[73572]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000019.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  8 05:47:33 np0005475493 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759916853522051, "job": 4, "event": "table_file_deletion", "file_number": 19}
Oct  8 05:47:33 np0005475493 ceph-mon[73572]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000013.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  8 05:47:33 np0005475493 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759916853522124, "job": 4, "event": "table_file_deletion", "file_number": 13}
Oct  8 05:47:33 np0005475493 ceph-mon[73572]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000008.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  8 05:47:33 np0005475493 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759916853522164, "job": 4, "event": "table_file_deletion", "file_number": 8}
Oct  8 05:47:33 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-09:47:33.459994) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  8 05:47:34 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e65 do_prune osdmap full prune enabled
Oct  8 05:47:34 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 11.c deep-scrub starts
Oct  8 05:47:34 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 11.c deep-scrub ok
Oct  8 05:47:34 np0005475493 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Oct  8 05:47:34 np0005475493 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:47:34 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e66 e66: 3 total, 3 up, 3 in
Oct  8 05:47:34 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e66: 3 total, 3 up, 3 in
Oct  8 05:47:34 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 66 pg[9.1d( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=5 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=66) [2]/[1] r=0 lpr=66 pi=[54,66)/1 crt=45'1018 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:47:34 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 66 pg[9.5( v 58'1021 (0'0,58'1021] local-lis/les=54/56 n=6 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=66) [2]/[1] r=0 lpr=66 pi=[54,66)/1 crt=45'1018 lcod 58'1020 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:47:34 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 66 pg[9.d( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=6 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=66) [2]/[1] r=0 lpr=66 pi=[54,66)/1 crt=45'1018 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:47:34 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 66 pg[9.1d( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=5 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=66) [2]/[1] r=0 lpr=66 pi=[54,66)/1 crt=45'1018 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  8 05:47:34 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 66 pg[9.d( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=6 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=66) [2]/[1] r=0 lpr=66 pi=[54,66)/1 crt=45'1018 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  8 05:47:34 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 66 pg[9.5( v 58'1021 (0'0,58'1021] local-lis/les=54/56 n=6 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=66) [2]/[1] r=0 lpr=66 pi=[54,66)/1 crt=45'1018 lcod 58'1020 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  8 05:47:34 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 66 pg[9.15( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=4 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=66) [2]/[1] r=0 lpr=66 pi=[54,66)/1 crt=45'1018 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:47:34 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 66 pg[9.15( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=4 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=66) [2]/[1] r=0 lpr=66 pi=[54,66)/1 crt=45'1018 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  8 05:47:34 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:47:34 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6614003f10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:47:34 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:47:34 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:47:34 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:47:34.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:47:35 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:47:35 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:47:35 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:47:35.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:47:35 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v63: 353 pgs: 4 unknown, 8 peering, 341 active+clean; 455 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 298 B/s, 8 objects/s recovering
Oct  8 05:47:35 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:47:35 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f66280025c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:47:35 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 11.9 scrub starts
Oct  8 05:47:35 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 11.9 scrub ok
Oct  8 05:47:35 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:47:35 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6600002b10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:47:35 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e66 do_prune osdmap full prune enabled
Oct  8 05:47:35 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e67 e67: 3 total, 3 up, 3 in
Oct  8 05:47:35 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e67: 3 total, 3 up, 3 in
Oct  8 05:47:35 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 67 pg[9.5( v 58'1021 (0'0,58'1021] local-lis/les=66/67 n=6 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=66) [2]/[1] async=[2] r=0 lpr=66 pi=[54,66)/1 crt=58'1021 lcod 58'1020 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:47:35 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 67 pg[9.1d( v 45'1018 (0'0,45'1018] local-lis/les=66/67 n=5 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=66) [2]/[1] async=[2] r=0 lpr=66 pi=[54,66)/1 crt=45'1018 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:47:35 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 67 pg[9.d( v 45'1018 (0'0,45'1018] local-lis/les=66/67 n=6 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=66) [2]/[1] async=[2] r=0 lpr=66 pi=[54,66)/1 crt=45'1018 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:47:35 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 67 pg[9.15( v 45'1018 (0'0,45'1018] local-lis/les=66/67 n=4 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=66) [2]/[1] async=[2] r=0 lpr=66 pi=[54,66)/1 crt=45'1018 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:47:36 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 11.d scrub starts
Oct  8 05:47:36 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 11.d scrub ok
Oct  8 05:47:36 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:47:36 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6600002b10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:47:36 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:47:36 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:47:36 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:47:36.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:47:36 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e67 do_prune osdmap full prune enabled
Oct  8 05:47:37 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:47:37 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:47:37 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:47:37.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:47:37 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v65: 353 pgs: 4 unknown, 8 peering, 341 active+clean; 455 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 242 B/s, 7 objects/s recovering
Oct  8 05:47:37 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:47:37 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6600002b10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:47:37 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:47:37 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6618000fa0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:47:37 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 8.e scrub starts
Oct  8 05:47:37 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 8.e scrub ok
Oct  8 05:47:37 np0005475493 podman[98928]: 2025-10-08 09:47:37.941674318 +0000 UTC m=+4.769362345 volume create d9c3f3155264a057bad85aacca6dba5a24da2b46751f524b7f0f66122813512f
Oct  8 05:47:37 np0005475493 podman[98928]: 2025-10-08 09:47:37.958631223 +0000 UTC m=+4.786319250 container create 9b218eb69e3f3397c5284a63453576f35e4fbb04d0228937215154c1654c8bf9 (image=quay.io/prometheus/prometheus:v2.51.0, name=musing_leakey, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  8 05:47:38 np0005475493 podman[98928]: 2025-10-08 09:47:37.902708019 +0000 UTC m=+4.730396076 image pull 1d3b7f56885b6dd623f1785be963aa9c195f86bc256ea454e8d02a7980b79c53 quay.io/prometheus/prometheus:v2.51.0
Oct  8 05:47:38 np0005475493 systemd[1]: Started libpod-conmon-9b218eb69e3f3397c5284a63453576f35e4fbb04d0228937215154c1654c8bf9.scope.
Oct  8 05:47:38 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:47:38 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/483be5c6a46c1953c2021770e099d5d487a3cc7c6eaed1a3a6e18f212d999fc8/merged/prometheus supports timestamps until 2038 (0x7fffffff)
Oct  8 05:47:38 np0005475493 podman[98928]: 2025-10-08 09:47:38.075945883 +0000 UTC m=+4.903634000 container init 9b218eb69e3f3397c5284a63453576f35e4fbb04d0228937215154c1654c8bf9 (image=quay.io/prometheus/prometheus:v2.51.0, name=musing_leakey, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  8 05:47:38 np0005475493 podman[98928]: 2025-10-08 09:47:38.085229016 +0000 UTC m=+4.912917063 container start 9b218eb69e3f3397c5284a63453576f35e4fbb04d0228937215154c1654c8bf9 (image=quay.io/prometheus/prometheus:v2.51.0, name=musing_leakey, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  8 05:47:38 np0005475493 musing_leakey[99194]: 65534 65534
Oct  8 05:47:38 np0005475493 systemd[1]: libpod-9b218eb69e3f3397c5284a63453576f35e4fbb04d0228937215154c1654c8bf9.scope: Deactivated successfully.
Oct  8 05:47:38 np0005475493 podman[98928]: 2025-10-08 09:47:38.091728711 +0000 UTC m=+4.919416758 container attach 9b218eb69e3f3397c5284a63453576f35e4fbb04d0228937215154c1654c8bf9 (image=quay.io/prometheus/prometheus:v2.51.0, name=musing_leakey, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  8 05:47:38 np0005475493 podman[98928]: 2025-10-08 09:47:38.092844606 +0000 UTC m=+4.920532663 container died 9b218eb69e3f3397c5284a63453576f35e4fbb04d0228937215154c1654c8bf9 (image=quay.io/prometheus/prometheus:v2.51.0, name=musing_leakey, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  8 05:47:38 np0005475493 systemd[1]: var-lib-containers-storage-overlay-483be5c6a46c1953c2021770e099d5d487a3cc7c6eaed1a3a6e18f212d999fc8-merged.mount: Deactivated successfully.
Oct  8 05:47:38 np0005475493 podman[98928]: 2025-10-08 09:47:38.227659919 +0000 UTC m=+5.055347976 container remove 9b218eb69e3f3397c5284a63453576f35e4fbb04d0228937215154c1654c8bf9 (image=quay.io/prometheus/prometheus:v2.51.0, name=musing_leakey, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  8 05:47:38 np0005475493 podman[98928]: 2025-10-08 09:47:38.238640535 +0000 UTC m=+5.066328602 volume remove d9c3f3155264a057bad85aacca6dba5a24da2b46751f524b7f0f66122813512f
Oct  8 05:47:38 np0005475493 systemd[1]: libpod-conmon-9b218eb69e3f3397c5284a63453576f35e4fbb04d0228937215154c1654c8bf9.scope: Deactivated successfully.
Oct  8 05:47:38 np0005475493 podman[99212]: 2025-10-08 09:47:38.317938896 +0000 UTC m=+0.048079147 volume create 5e2e5af56f30451cf6c0d29d44cedcf6e4f8d101525902a32a9eae828ffd2aaa
Oct  8 05:47:38 np0005475493 podman[99212]: 2025-10-08 09:47:38.334920392 +0000 UTC m=+0.065060683 container create 7dff5171b4bed265fca9dbe14bc07abd33025c287672fc3ad1fee0d8fe3ea412 (image=quay.io/prometheus/prometheus:v2.51.0, name=goofy_jepsen, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  8 05:47:38 np0005475493 podman[99212]: 2025-10-08 09:47:38.292919307 +0000 UTC m=+0.023059578 image pull 1d3b7f56885b6dd623f1785be963aa9c195f86bc256ea454e8d02a7980b79c53 quay.io/prometheus/prometheus:v2.51.0
Oct  8 05:47:38 np0005475493 systemd[1]: Started libpod-conmon-7dff5171b4bed265fca9dbe14bc07abd33025c287672fc3ad1fee0d8fe3ea412.scope.
Oct  8 05:47:38 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:47:38 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d06565b100d91f421383b86c306b258d55e5cc76b05a1cec2b0421cb1f9a2601/merged/prometheus supports timestamps until 2038 (0x7fffffff)
Oct  8 05:47:38 np0005475493 podman[99212]: 2025-10-08 09:47:38.44548873 +0000 UTC m=+0.175629081 container init 7dff5171b4bed265fca9dbe14bc07abd33025c287672fc3ad1fee0d8fe3ea412 (image=quay.io/prometheus/prometheus:v2.51.0, name=goofy_jepsen, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  8 05:47:38 np0005475493 podman[99212]: 2025-10-08 09:47:38.450787337 +0000 UTC m=+0.180927628 container start 7dff5171b4bed265fca9dbe14bc07abd33025c287672fc3ad1fee0d8fe3ea412 (image=quay.io/prometheus/prometheus:v2.51.0, name=goofy_jepsen, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  8 05:47:38 np0005475493 goofy_jepsen[99228]: 65534 65534
Oct  8 05:47:38 np0005475493 systemd[1]: libpod-7dff5171b4bed265fca9dbe14bc07abd33025c287672fc3ad1fee0d8fe3ea412.scope: Deactivated successfully.
Oct  8 05:47:38 np0005475493 podman[99212]: 2025-10-08 09:47:38.459356177 +0000 UTC m=+0.189496458 container attach 7dff5171b4bed265fca9dbe14bc07abd33025c287672fc3ad1fee0d8fe3ea412 (image=quay.io/prometheus/prometheus:v2.51.0, name=goofy_jepsen, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  8 05:47:38 np0005475493 podman[99212]: 2025-10-08 09:47:38.459856903 +0000 UTC m=+0.189997204 container died 7dff5171b4bed265fca9dbe14bc07abd33025c287672fc3ad1fee0d8fe3ea412 (image=quay.io/prometheus/prometheus:v2.51.0, name=goofy_jepsen, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  8 05:47:38 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 11.b deep-scrub starts
Oct  8 05:47:38 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 11.b deep-scrub ok
Oct  8 05:47:38 np0005475493 ceph-mgr[73869]: [progress WARNING root] Starting Global Recovery Event,12 pgs not in active + clean state
Oct  8 05:47:38 np0005475493 systemd[1]: var-lib-containers-storage-overlay-d06565b100d91f421383b86c306b258d55e5cc76b05a1cec2b0421cb1f9a2601-merged.mount: Deactivated successfully.
Oct  8 05:47:38 np0005475493 podman[99212]: 2025-10-08 09:47:38.612692434 +0000 UTC m=+0.342832685 container remove 7dff5171b4bed265fca9dbe14bc07abd33025c287672fc3ad1fee0d8fe3ea412 (image=quay.io/prometheus/prometheus:v2.51.0, name=goofy_jepsen, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  8 05:47:38 np0005475493 podman[99212]: 2025-10-08 09:47:38.647273725 +0000 UTC m=+0.377413976 volume remove 5e2e5af56f30451cf6c0d29d44cedcf6e4f8d101525902a32a9eae828ffd2aaa
Oct  8 05:47:38 np0005475493 systemd[1]: libpod-conmon-7dff5171b4bed265fca9dbe14bc07abd33025c287672fc3ad1fee0d8fe3ea412.scope: Deactivated successfully.
Oct  8 05:47:38 np0005475493 systemd[1]: Reloading.
Oct  8 05:47:38 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:47:38 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6614003f10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:47:38 np0005475493 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  8 05:47:38 np0005475493 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  8 05:47:38 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:47:38 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 05:47:38 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:47:38.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 05:47:39 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:47:39 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:47:39 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:47:39.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:47:39 np0005475493 systemd[1]: Reloading.
Oct  8 05:47:39 np0005475493 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  8 05:47:39 np0005475493 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  8 05:47:39 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v66: 353 pgs: 4 unknown, 8 peering, 341 active+clean; 455 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 198 B/s, 5 objects/s recovering
Oct  8 05:47:39 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:47:39 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f66280025c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:47:39 np0005475493 systemd[1]: packagekit.service: Deactivated successfully.
Oct  8 05:47:39 np0005475493 systemd[1]: Starting Ceph prometheus.compute-0 for 787292cc-8154-50c4-9e00-e9be3e817149...
Oct  8 05:47:39 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:47:39 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6600002b10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:47:39 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 11.2 scrub starts
Oct  8 05:47:39 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e68 e68: 3 total, 3 up, 3 in
Oct  8 05:47:39 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 11.2 scrub ok
Oct  8 05:47:39 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e68: 3 total, 3 up, 3 in
Oct  8 05:47:39 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 68 pg[9.15( v 45'1018 (0'0,45'1018] local-lis/les=66/67 n=4 ec=54/38 lis/c=66/54 les/c/f=67/56/0 sis=68 pruub=12.058046341s) [2] async=[2] r=-1 lpr=68 pi=[54,68)/1 crt=45'1018 lcod 0'0 mlcod 0'0 active pruub 199.701385498s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:47:39 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 68 pg[9.1d( v 45'1018 (0'0,45'1018] local-lis/les=66/67 n=5 ec=54/38 lis/c=66/54 les/c/f=67/56/0 sis=68 pruub=12.057833672s) [2] async=[2] r=-1 lpr=68 pi=[54,68)/1 crt=45'1018 lcod 0'0 mlcod 0'0 active pruub 199.701171875s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:47:39 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 68 pg[9.d( v 45'1018 (0'0,45'1018] local-lis/les=66/67 n=6 ec=54/38 lis/c=66/54 les/c/f=67/56/0 sis=68 pruub=12.057991982s) [2] async=[2] r=-1 lpr=68 pi=[54,68)/1 crt=45'1018 lcod 0'0 mlcod 0'0 active pruub 199.701385498s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:47:39 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 68 pg[9.15( v 45'1018 (0'0,45'1018] local-lis/les=66/67 n=4 ec=54/38 lis/c=66/54 les/c/f=67/56/0 sis=68 pruub=12.057976723s) [2] r=-1 lpr=68 pi=[54,68)/1 crt=45'1018 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 199.701385498s@ mbc={}] state<Start>: transitioning to Stray
Oct  8 05:47:39 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 68 pg[9.1d( v 45'1018 (0'0,45'1018] local-lis/les=66/67 n=5 ec=54/38 lis/c=66/54 les/c/f=67/56/0 sis=68 pruub=12.057407379s) [2] r=-1 lpr=68 pi=[54,68)/1 crt=45'1018 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 199.701171875s@ mbc={}] state<Start>: transitioning to Stray
Oct  8 05:47:39 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 68 pg[9.5( v 67'1027 (0'0,67'1027] local-lis/les=66/67 n=6 ec=54/38 lis/c=66/54 les/c/f=67/56/0 sis=68 pruub=12.026637077s) [2] async=[2] r=-1 lpr=68 pi=[54,68)/1 crt=67'1024 lcod 67'1026 mlcod 67'1026 active pruub 199.670516968s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:47:39 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 68 pg[9.5( v 67'1027 (0'0,67'1027] local-lis/les=66/67 n=6 ec=54/38 lis/c=66/54 les/c/f=67/56/0 sis=68 pruub=12.026576042s) [2] r=-1 lpr=68 pi=[54,68)/1 crt=67'1024 lcod 67'1026 mlcod 0'0 unknown NOTIFY pruub 199.670516968s@ mbc={}] state<Start>: transitioning to Stray
Oct  8 05:47:39 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 68 pg[9.d( v 45'1018 (0'0,45'1018] local-lis/les=66/67 n=6 ec=54/38 lis/c=66/54 les/c/f=67/56/0 sis=68 pruub=12.057183266s) [2] r=-1 lpr=68 pi=[54,68)/1 crt=45'1018 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 199.701385498s@ mbc={}] state<Start>: transitioning to Stray
Oct  8 05:47:39 np0005475493 podman[99374]: 2025-10-08 09:47:39.655594301 +0000 UTC m=+0.067294874 container create 50d7285a7766fc915688c3cc737e186f9ad2230d7b0b7e759880375ad996bb6c (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  8 05:47:39 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6bd1fadf28913cbc0057245ad8febd4d04a90075db3637e26764bf6babfd02e1/merged/prometheus supports timestamps until 2038 (0x7fffffff)
Oct  8 05:47:39 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6bd1fadf28913cbc0057245ad8febd4d04a90075db3637e26764bf6babfd02e1/merged/etc/prometheus supports timestamps until 2038 (0x7fffffff)
Oct  8 05:47:39 np0005475493 podman[99374]: 2025-10-08 09:47:39.609312551 +0000 UTC m=+0.021013104 image pull 1d3b7f56885b6dd623f1785be963aa9c195f86bc256ea454e8d02a7980b79c53 quay.io/prometheus/prometheus:v2.51.0
Oct  8 05:47:39 np0005475493 podman[99374]: 2025-10-08 09:47:39.745126035 +0000 UTC m=+0.156826588 container init 50d7285a7766fc915688c3cc737e186f9ad2230d7b0b7e759880375ad996bb6c (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  8 05:47:39 np0005475493 podman[99374]: 2025-10-08 09:47:39.754337755 +0000 UTC m=+0.166038288 container start 50d7285a7766fc915688c3cc737e186f9ad2230d7b0b7e759880375ad996bb6c (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  8 05:47:39 np0005475493 bash[99374]: 50d7285a7766fc915688c3cc737e186f9ad2230d7b0b7e759880375ad996bb6c
Oct  8 05:47:39 np0005475493 systemd[1]: Started Ceph prometheus.compute-0 for 787292cc-8154-50c4-9e00-e9be3e817149.
Oct  8 05:47:39 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-prometheus-compute-0[99390]: ts=2025-10-08T09:47:39.786Z caller=main.go:617 level=info msg="Starting Prometheus Server" mode=server version="(version=2.51.0, branch=HEAD, revision=c05c15512acb675e3f6cd662a6727854e93fc024)"
Oct  8 05:47:39 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-prometheus-compute-0[99390]: ts=2025-10-08T09:47:39.786Z caller=main.go:622 level=info build_context="(go=go1.22.1, platform=linux/amd64, user=root@b5723e458358, date=20240319-10:54:45, tags=netgo,builtinassets,stringlabels)"
Oct  8 05:47:39 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-prometheus-compute-0[99390]: ts=2025-10-08T09:47:39.786Z caller=main.go:623 level=info host_details="(Linux 5.14.0-620.el9.x86_64 #1 SMP PREEMPT_DYNAMIC Fri Sep 26 01:13:23 UTC 2025 x86_64 compute-0 (none))"
Oct  8 05:47:39 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-prometheus-compute-0[99390]: ts=2025-10-08T09:47:39.786Z caller=main.go:624 level=info fd_limits="(soft=1048576, hard=1048576)"
Oct  8 05:47:39 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-prometheus-compute-0[99390]: ts=2025-10-08T09:47:39.786Z caller=main.go:625 level=info vm_limits="(soft=unlimited, hard=unlimited)"
Oct  8 05:47:39 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-prometheus-compute-0[99390]: ts=2025-10-08T09:47:39.788Z caller=web.go:568 level=info component=web msg="Start listening for connections" address=192.168.122.100:9095
Oct  8 05:47:39 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-prometheus-compute-0[99390]: ts=2025-10-08T09:47:39.789Z caller=main.go:1129 level=info msg="Starting TSDB ..."
Oct  8 05:47:39 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-prometheus-compute-0[99390]: ts=2025-10-08T09:47:39.793Z caller=tls_config.go:313 level=info component=web msg="Listening on" address=192.168.122.100:9095
Oct  8 05:47:39 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-prometheus-compute-0[99390]: ts=2025-10-08T09:47:39.793Z caller=tls_config.go:316 level=info component=web msg="TLS is disabled." http2=false address=192.168.122.100:9095
Oct  8 05:47:39 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-prometheus-compute-0[99390]: ts=2025-10-08T09:47:39.795Z caller=head.go:616 level=info component=tsdb msg="Replaying on-disk memory mappable chunks if any"
Oct  8 05:47:39 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-prometheus-compute-0[99390]: ts=2025-10-08T09:47:39.795Z caller=head.go:698 level=info component=tsdb msg="On-disk memory mappable chunks replay completed" duration=3.18µs
Oct  8 05:47:39 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-prometheus-compute-0[99390]: ts=2025-10-08T09:47:39.795Z caller=head.go:706 level=info component=tsdb msg="Replaying WAL, this may take a while"
Oct  8 05:47:39 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-prometheus-compute-0[99390]: ts=2025-10-08T09:47:39.797Z caller=head.go:778 level=info component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0
Oct  8 05:47:39 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-prometheus-compute-0[99390]: ts=2025-10-08T09:47:39.797Z caller=head.go:815 level=info component=tsdb msg="WAL replay completed" checkpoint_replay_duration=265.717µs wal_replay_duration=1.500438ms wbl_replay_duration=280ns total_replay_duration=1.796176ms
Oct  8 05:47:39 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-prometheus-compute-0[99390]: ts=2025-10-08T09:47:39.801Z caller=main.go:1150 level=info fs_type=XFS_SUPER_MAGIC
Oct  8 05:47:39 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-prometheus-compute-0[99390]: ts=2025-10-08T09:47:39.802Z caller=main.go:1153 level=info msg="TSDB started"
Oct  8 05:47:39 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-prometheus-compute-0[99390]: ts=2025-10-08T09:47:39.802Z caller=main.go:1335 level=info msg="Loading configuration file" filename=/etc/prometheus/prometheus.yml
Oct  8 05:47:39 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-prometheus-compute-0[99390]: ts=2025-10-08T09:47:39.841Z caller=main.go:1372 level=info msg="Completed loading of configuration file" filename=/etc/prometheus/prometheus.yml totalDuration=39.00772ms db_storage=1.88µs remote_storage=2.58µs web_handler=1.02µs query_engine=1.75µs scrape=5.201734ms scrape_sd=235.288µs notify=17.8µs notify_sd=17.011µs rules=31.974138ms tracing=10.92µs
Oct  8 05:47:39 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-prometheus-compute-0[99390]: ts=2025-10-08T09:47:39.841Z caller=main.go:1114 level=info msg="Server is ready to receive web requests."
Oct  8 05:47:39 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-prometheus-compute-0[99390]: ts=2025-10-08T09:47:39.841Z caller=manager.go:163 level=info component="rule manager" msg="Starting rule manager..."
Oct  8 05:47:39 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  8 05:47:39 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:47:39 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  8 05:47:39 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:47:39 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.prometheus}] v 0)
Oct  8 05:47:39 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:47:39 np0005475493 ceph-mgr[73869]: [progress INFO root] complete: finished ev 0bf79cd8-eb11-4f4f-80b2-14468a3c828d (Updating prometheus deployment (+1 -> 1))
Oct  8 05:47:39 np0005475493 ceph-mgr[73869]: [progress INFO root] Completed event 0bf79cd8-eb11-4f4f-80b2-14468a3c828d (Updating prometheus deployment (+1 -> 1)) in 8 seconds
Oct  8 05:47:39 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module enable", "module": "prometheus"} v 0)
Oct  8 05:47:39 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "mgr module enable", "module": "prometheus"}]: dispatch
Oct  8 05:47:40 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 8.1 scrub starts
Oct  8 05:47:40 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 8.1 scrub ok
Oct  8 05:47:40 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e68 do_prune osdmap full prune enabled
Oct  8 05:47:40 np0005475493 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:47:40 np0005475493 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:47:40 np0005475493 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' 
Oct  8 05:47:40 np0005475493 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "mgr module enable", "module": "prometheus"}]: dispatch
Oct  8 05:47:40 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e69 e69: 3 total, 3 up, 3 in
Oct  8 05:47:40 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e69: 3 total, 3 up, 3 in
Oct  8 05:47:40 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:47:40 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6600002b10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:47:40 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:47:40 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:47:40 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:47:40.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:47:40 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "mgr module enable", "module": "prometheus"}]': finished
Oct  8 05:47:40 np0005475493 ceph-mgr[73869]: mgr handle_mgr_map respawning because set of enabled modules changed!
Oct  8 05:47:40 np0005475493 ceph-mgr[73869]: mgr respawn  e: '/usr/bin/ceph-mgr'
Oct  8 05:47:40 np0005475493 ceph-mgr[73869]: mgr respawn  0: '/usr/bin/ceph-mgr'
Oct  8 05:47:40 np0005475493 ceph-mgr[73869]: mgr respawn  1: '-n'
Oct  8 05:47:40 np0005475493 ceph-mgr[73869]: mgr respawn  2: 'mgr.compute-0.ixicfj'
Oct  8 05:47:40 np0005475493 ceph-mgr[73869]: mgr respawn  3: '-f'
Oct  8 05:47:40 np0005475493 ceph-mgr[73869]: mgr respawn  4: '--setuser'
Oct  8 05:47:40 np0005475493 ceph-mgr[73869]: mgr respawn  5: 'ceph'
Oct  8 05:47:40 np0005475493 ceph-mgr[73869]: mgr respawn  6: '--setgroup'
Oct  8 05:47:40 np0005475493 ceph-mgr[73869]: mgr respawn  7: 'ceph'
Oct  8 05:47:40 np0005475493 ceph-mgr[73869]: mgr respawn  8: '--default-log-to-file=false'
Oct  8 05:47:40 np0005475493 ceph-mgr[73869]: mgr respawn  9: '--default-log-to-journald=true'
Oct  8 05:47:40 np0005475493 ceph-mgr[73869]: mgr respawn  10: '--default-log-to-stderr=false'
Oct  8 05:47:40 np0005475493 ceph-mgr[73869]: mgr respawn respawning with exe /usr/bin/ceph-mgr
Oct  8 05:47:40 np0005475493 ceph-mgr[73869]: mgr respawn  exe_path /proc/self/exe
Oct  8 05:47:40 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : mgrmap e27: compute-0.ixicfj(active, since 87s), standbys: compute-2.mtagwx, compute-1.swlvov
Oct  8 05:47:41 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:47:41 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:47:41 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:47:41.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:47:41 np0005475493 systemd[1]: session-36.scope: Deactivated successfully.
Oct  8 05:47:41 np0005475493 systemd[1]: session-36.scope: Consumed 45.137s CPU time.
Oct  8 05:47:41 np0005475493 systemd-logind[798]: Session 36 logged out. Waiting for processes to exit.
Oct  8 05:47:41 np0005475493 systemd-logind[798]: Removed session 36.
Oct  8 05:47:41 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ignoring --setuser ceph since I am not root
Oct  8 05:47:41 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ignoring --setgroup ceph since I am not root
Oct  8 05:47:41 np0005475493 ceph-mgr[73869]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Oct  8 05:47:41 np0005475493 ceph-mgr[73869]: pidfile_write: ignore empty --pid-file
Oct  8 05:47:41 np0005475493 ceph-mgr[73869]: mgr[py] Loading python module 'alerts'
Oct  8 05:47:41 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:47:41.213+0000 7fa16c208140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct  8 05:47:41 np0005475493 ceph-mgr[73869]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct  8 05:47:41 np0005475493 ceph-mgr[73869]: mgr[py] Loading python module 'balancer'
Oct  8 05:47:41 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:47:41 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6618001aa0 fd 14 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:47:41 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:47:41.288+0000 7fa16c208140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct  8 05:47:41 np0005475493 ceph-mgr[73869]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct  8 05:47:41 np0005475493 ceph-mgr[73869]: mgr[py] Loading python module 'cephadm'
Oct  8 05:47:41 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:47:41 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6618001aa0 fd 14 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:47:41 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 8.0 scrub starts
Oct  8 05:47:41 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 8.0 scrub ok
Oct  8 05:47:41 np0005475493 ceph-mon[73572]: from='mgr.14475 192.168.122.100:0/42411428' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "mgr module enable", "module": "prometheus"}]': finished
Oct  8 05:47:42 np0005475493 ceph-mgr[73869]: mgr[py] Loading python module 'crash'
Oct  8 05:47:42 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:47:42.117+0000 7fa16c208140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Oct  8 05:47:42 np0005475493 ceph-mgr[73869]: mgr[py] Module crash has missing NOTIFY_TYPES member
Oct  8 05:47:42 np0005475493 ceph-mgr[73869]: mgr[py] Loading python module 'dashboard'
Oct  8 05:47:42 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 8.7 scrub starts
Oct  8 05:47:42 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 8.7 scrub ok
Oct  8 05:47:42 np0005475493 ceph-mgr[73869]: mgr[py] Loading python module 'devicehealth'
Oct  8 05:47:42 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:47:42.791+0000 7fa16c208140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct  8 05:47:42 np0005475493 ceph-mgr[73869]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct  8 05:47:42 np0005475493 ceph-mgr[73869]: mgr[py] Loading python module 'diskprediction_local'
Oct  8 05:47:42 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:47:42 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f66280025c0 fd 14 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:47:42 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:47:42 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:47:42 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:47:42.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:47:42 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Oct  8 05:47:42 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Oct  8 05:47:42 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]:  from numpy import show_config as show_numpy_config
Oct  8 05:47:42 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:47:42.957+0000 7fa16c208140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct  8 05:47:42 np0005475493 ceph-mgr[73869]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct  8 05:47:42 np0005475493 ceph-mgr[73869]: mgr[py] Loading python module 'influx'
Oct  8 05:47:43 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:47:43.033+0000 7fa16c208140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Oct  8 05:47:43 np0005475493 ceph-mgr[73869]: mgr[py] Module influx has missing NOTIFY_TYPES member
Oct  8 05:47:43 np0005475493 ceph-mgr[73869]: mgr[py] Loading python module 'insights'
Oct  8 05:47:43 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:47:43 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:47:43 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:47:43.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:47:43 np0005475493 ceph-mgr[73869]: mgr[py] Loading python module 'iostat'
Oct  8 05:47:43 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:47:43.183+0000 7fa16c208140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct  8 05:47:43 np0005475493 ceph-mgr[73869]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct  8 05:47:43 np0005475493 ceph-mgr[73869]: mgr[py] Loading python module 'k8sevents'
Oct  8 05:47:43 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:47:43 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6600003c10 fd 14 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:47:43 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e69 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  8 05:47:43 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:47:43 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6618001aa0 fd 14 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:47:43 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 11.6 scrub starts
Oct  8 05:47:43 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 11.6 scrub ok
Oct  8 05:47:43 np0005475493 ceph-mgr[73869]: mgr[py] Loading python module 'localpool'
Oct  8 05:47:43 np0005475493 ceph-mgr[73869]: mgr[py] Loading python module 'mds_autoscaler'
Oct  8 05:47:43 np0005475493 ceph-mgr[73869]: mgr[py] Loading python module 'mirroring'
Oct  8 05:47:43 np0005475493 ceph-mgr[73869]: mgr[py] Loading python module 'nfs'
Oct  8 05:47:44 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:47:44.212+0000 7fa16c208140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct  8 05:47:44 np0005475493 ceph-mgr[73869]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct  8 05:47:44 np0005475493 ceph-mgr[73869]: mgr[py] Loading python module 'orchestrator'
Oct  8 05:47:44 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:47:44.439+0000 7fa16c208140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct  8 05:47:44 np0005475493 ceph-mgr[73869]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct  8 05:47:44 np0005475493 ceph-mgr[73869]: mgr[py] Loading python module 'osd_perf_query'
Oct  8 05:47:44 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:47:44.522+0000 7fa16c208140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct  8 05:47:44 np0005475493 ceph-mgr[73869]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct  8 05:47:44 np0005475493 ceph-mgr[73869]: mgr[py] Loading python module 'osd_support'
Oct  8 05:47:44 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 11.18 scrub starts
Oct  8 05:47:44 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 11.18 scrub ok
Oct  8 05:47:44 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:47:44.602+0000 7fa16c208140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct  8 05:47:44 np0005475493 ceph-mgr[73869]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct  8 05:47:44 np0005475493 ceph-mgr[73869]: mgr[py] Loading python module 'pg_autoscaler'
Oct  8 05:47:44 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:47:44.692+0000 7fa16c208140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct  8 05:47:44 np0005475493 ceph-mgr[73869]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct  8 05:47:44 np0005475493 ceph-mgr[73869]: mgr[py] Loading python module 'progress'
Oct  8 05:47:44 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:47:44.764+0000 7fa16c208140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Oct  8 05:47:44 np0005475493 ceph-mgr[73869]: mgr[py] Module progress has missing NOTIFY_TYPES member
Oct  8 05:47:44 np0005475493 ceph-mgr[73869]: mgr[py] Loading python module 'prometheus'
Oct  8 05:47:44 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:47:44 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6618001aa0 fd 14 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:47:44 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:47:44 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:47:44 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:47:44.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:47:45 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:47:45 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:47:45 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:47:45.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:47:45 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:47:45.099+0000 7fa16c208140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct  8 05:47:45 np0005475493 ceph-mgr[73869]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct  8 05:47:45 np0005475493 ceph-mgr[73869]: mgr[py] Loading python module 'rbd_support'
Oct  8 05:47:45 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:47:45.205+0000 7fa16c208140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct  8 05:47:45 np0005475493 ceph-mgr[73869]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct  8 05:47:45 np0005475493 ceph-mgr[73869]: mgr[py] Loading python module 'restful'
Oct  8 05:47:45 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:47:45 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f66280025c0 fd 14 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:47:45 np0005475493 ceph-mgr[73869]: mgr[py] Loading python module 'rgw'
Oct  8 05:47:45 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:47:45 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6600003c10 fd 14 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:47:45 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 8.1a scrub starts
Oct  8 05:47:45 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 8.1a scrub ok
Oct  8 05:47:45 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:47:45.640+0000 7fa16c208140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct  8 05:47:45 np0005475493 ceph-mgr[73869]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct  8 05:47:45 np0005475493 ceph-mgr[73869]: mgr[py] Loading python module 'rook'
Oct  8 05:47:46 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:47:46.199+0000 7fa16c208140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Oct  8 05:47:46 np0005475493 ceph-mgr[73869]: mgr[py] Module rook has missing NOTIFY_TYPES member
Oct  8 05:47:46 np0005475493 ceph-mgr[73869]: mgr[py] Loading python module 'selftest'
Oct  8 05:47:46 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:47:46.267+0000 7fa16c208140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct  8 05:47:46 np0005475493 ceph-mgr[73869]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct  8 05:47:46 np0005475493 ceph-mgr[73869]: mgr[py] Loading python module 'snap_schedule'
Oct  8 05:47:46 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:47:46.353+0000 7fa16c208140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Oct  8 05:47:46 np0005475493 ceph-mgr[73869]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Oct  8 05:47:46 np0005475493 ceph-mgr[73869]: mgr[py] Loading python module 'stats'
Oct  8 05:47:46 np0005475493 ceph-mgr[73869]: mgr[py] Loading python module 'status'
Oct  8 05:47:46 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:47:46.523+0000 7fa16c208140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Oct  8 05:47:46 np0005475493 ceph-mgr[73869]: mgr[py] Module status has missing NOTIFY_TYPES member
Oct  8 05:47:46 np0005475493 ceph-mgr[73869]: mgr[py] Loading python module 'telegraf'
Oct  8 05:47:46 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 8.1e scrub starts
Oct  8 05:47:46 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 8.1e scrub ok
Oct  8 05:47:46 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:47:46.598+0000 7fa16c208140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct  8 05:47:46 np0005475493 ceph-mgr[73869]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct  8 05:47:46 np0005475493 ceph-mgr[73869]: mgr[py] Loading python module 'telemetry'
Oct  8 05:47:46 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:47:46.765+0000 7fa16c208140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct  8 05:47:46 np0005475493 ceph-mgr[73869]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct  8 05:47:46 np0005475493 ceph-mgr[73869]: mgr[py] Loading python module 'test_orchestrator'
Oct  8 05:47:46 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:47:46 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6618001aa0 fd 14 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:47:46 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:47:46 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000031s ======
Oct  8 05:47:46 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:47:46.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Oct  8 05:47:46 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:47:46.998+0000 7fa16c208140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct  8 05:47:46 np0005475493 ceph-mgr[73869]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct  8 05:47:46 np0005475493 ceph-mgr[73869]: mgr[py] Loading python module 'volumes'
Oct  8 05:47:47 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:47:47 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:47:47 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:47:47.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:47:47 np0005475493 systemd-logind[798]: New session 38 of user zuul.
Oct  8 05:47:47 np0005475493 systemd[1]: Started Session 38 of User zuul.
Oct  8 05:47:47 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.swlvov restarted
Oct  8 05:47:47 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.swlvov started
Oct  8 05:47:47 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:47:47 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6618001aa0 fd 14 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:47:47 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:47:47.271+0000 7fa16c208140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct  8 05:47:47 np0005475493 ceph-mgr[73869]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct  8 05:47:47 np0005475493 ceph-mgr[73869]: mgr[py] Loading python module 'zabbix'
Oct  8 05:47:47 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:47:47.345+0000 7fa16c208140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct  8 05:47:47 np0005475493 ceph-mgr[73869]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct  8 05:47:47 np0005475493 ceph-mon[73572]: log_channel(cluster) log [INF] : Active manager daemon compute-0.ixicfj restarted
Oct  8 05:47:47 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e69 do_prune osdmap full prune enabled
Oct  8 05:47:47 np0005475493 ceph-mon[73572]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.ixicfj
Oct  8 05:47:47 np0005475493 ceph-mgr[73869]: ms_deliver_dispatch: unhandled message 0x55d6aa6db860 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Oct  8 05:47:47 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:47:47 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f66280025c0 fd 14 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:47:47 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e70 e70: 3 total, 3 up, 3 in
Oct  8 05:47:47 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e70: 3 total, 3 up, 3 in
Oct  8 05:47:47 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : mgrmap e28: compute-0.ixicfj(active, starting, since 0.161204s), standbys: compute-2.mtagwx, compute-1.swlvov
Oct  8 05:47:47 np0005475493 ceph-mgr[73869]: mgr handle_mgr_map Activating!
Oct  8 05:47:47 np0005475493 ceph-mgr[73869]: mgr handle_mgr_map I am now activating
Oct  8 05:47:47 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 8.1d scrub starts
Oct  8 05:47:47 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Oct  8 05:47:47 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Oct  8 05:47:47 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Oct  8 05:47:47 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct  8 05:47:47 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Oct  8 05:47:47 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Oct  8 05:47:47 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-2.wfaozr"} v 0)
Oct  8 05:47:47 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-2.wfaozr"}]: dispatch
Oct  8 05:47:47 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).mds e9 all = 0
Oct  8 05:47:47 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-0.lphril"} v 0)
Oct  8 05:47:47 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-0.lphril"}]: dispatch
Oct  8 05:47:47 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).mds e9 all = 0
Oct  8 05:47:47 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-1.bumazt"} v 0)
Oct  8 05:47:47 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-1.bumazt"}]: dispatch
Oct  8 05:47:47 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).mds e9 all = 0
Oct  8 05:47:47 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 8.1d scrub ok
Oct  8 05:47:47 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.ixicfj", "id": "compute-0.ixicfj"} v 0)
Oct  8 05:47:47 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "mgr metadata", "who": "compute-0.ixicfj", "id": "compute-0.ixicfj"}]: dispatch
Oct  8 05:47:47 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-2.mtagwx", "id": "compute-2.mtagwx"} v 0)
Oct  8 05:47:47 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "mgr metadata", "who": "compute-2.mtagwx", "id": "compute-2.mtagwx"}]: dispatch
Oct  8 05:47:47 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-1.swlvov", "id": "compute-1.swlvov"} v 0)
Oct  8 05:47:47 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "mgr metadata", "who": "compute-1.swlvov", "id": "compute-1.swlvov"}]: dispatch
Oct  8 05:47:47 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Oct  8 05:47:47 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct  8 05:47:47 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Oct  8 05:47:47 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct  8 05:47:47 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Oct  8 05:47:47 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct  8 05:47:47 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata"} v 0)
Oct  8 05:47:47 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "mds metadata"}]: dispatch
Oct  8 05:47:47 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).mds e9 all = 1
Oct  8 05:47:47 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata"} v 0)
Oct  8 05:47:47 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd metadata"}]: dispatch
Oct  8 05:47:47 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata"} v 0)
Oct  8 05:47:47 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "mon metadata"}]: dispatch
Oct  8 05:47:47 np0005475493 ceph-mgr[73869]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  8 05:47:47 np0005475493 ceph-mgr[73869]: mgr load Constructed class from module: balancer
Oct  8 05:47:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] Starting
Oct  8 05:47:47 np0005475493 ceph-mon[73572]: log_channel(cluster) log [INF] : Manager daemon compute-0.ixicfj is now available
Oct  8 05:47:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] Optimize plan auto_2025-10-08_09:47:47
Oct  8 05:47:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  8 05:47:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] Some PGs (1.000000) are unknown; try again later
Oct  8 05:47:47 np0005475493 ceph-mgr[73869]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  8 05:47:47 np0005475493 ceph-mgr[73869]: mgr load Constructed class from module: cephadm
Oct  8 05:47:47 np0005475493 ceph-mgr[73869]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  8 05:47:47 np0005475493 ceph-mgr[73869]: mgr load Constructed class from module: crash
Oct  8 05:47:47 np0005475493 ceph-mgr[73869]: [dashboard DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  8 05:47:47 np0005475493 ceph-mgr[73869]: mgr load Constructed class from module: dashboard
Oct  8 05:47:47 np0005475493 ceph-mgr[73869]: [dashboard INFO access_control] Loading user roles DB version=2
Oct  8 05:47:47 np0005475493 ceph-mgr[73869]: [dashboard INFO sso] Loading SSO DB version=1
Oct  8 05:47:47 np0005475493 ceph-mgr[73869]: [dashboard INFO root] server: ssl=no host=192.168.122.100 port=8443
Oct  8 05:47:47 np0005475493 ceph-mgr[73869]: [dashboard INFO root] Configured CherryPy, starting engine...
Oct  8 05:47:47 np0005475493 ceph-mgr[73869]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  8 05:47:47 np0005475493 ceph-mgr[73869]: mgr load Constructed class from module: devicehealth
Oct  8 05:47:47 np0005475493 ceph-mgr[73869]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  8 05:47:47 np0005475493 ceph-mgr[73869]: mgr load Constructed class from module: iostat
Oct  8 05:47:47 np0005475493 ceph-mgr[73869]: [devicehealth INFO root] Starting
Oct  8 05:47:47 np0005475493 ceph-mgr[73869]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  8 05:47:47 np0005475493 ceph-mgr[73869]: mgr load Constructed class from module: nfs
Oct  8 05:47:47 np0005475493 ceph-mgr[73869]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  8 05:47:47 np0005475493 ceph-mgr[73869]: mgr load Constructed class from module: orchestrator
Oct  8 05:47:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  8 05:47:47 np0005475493 ceph-mgr[73869]: mgr load Constructed class from module: pg_autoscaler
Oct  8 05:47:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] _maybe_adjust
Oct  8 05:47:47 np0005475493 ceph-mgr[73869]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  8 05:47:47 np0005475493 ceph-mgr[73869]: mgr load Constructed class from module: progress
Oct  8 05:47:47 np0005475493 ceph-mgr[73869]: [prometheus DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  8 05:47:47 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/prometheus/health_history}] v 0)
Oct  8 05:47:47 np0005475493 ceph-mgr[73869]: [progress INFO root] Loading...
Oct  8 05:47:47 np0005475493 ceph-mgr[73869]: [progress INFO root] Loaded [<progress.module.GhostEvent object at 0x7fa0ebd25460>, <progress.module.GhostEvent object at 0x7fa0ebd25490>, <progress.module.GhostEvent object at 0x7fa0ebd25b80>, <progress.module.GhostEvent object at 0x7fa0ebd25be0>, <progress.module.GhostEvent object at 0x7fa0ebd25c70>, <progress.module.GhostEvent object at 0x7fa0ebd25ca0>, <progress.module.GhostEvent object at 0x7fa0ebd25c40>, <progress.module.GhostEvent object at 0x7fa0ebd25cd0>, <progress.module.GhostEvent object at 0x7fa0ebd25d30>, <progress.module.GhostEvent object at 0x7fa0ebd25d90>, <progress.module.GhostEvent object at 0x7fa0ebd25dc0>, <progress.module.GhostEvent object at 0x7fa0ebd25df0>, <progress.module.GhostEvent object at 0x7fa0ebd25e20>, <progress.module.GhostEvent object at 0x7fa0ebd25ee0>, <progress.module.GhostEvent object at 0x7fa0ebd25e50>, <progress.module.GhostEvent object at 0x7fa0ebd25e80>, <progress.module.GhostEvent object at 0x7fa0ebd25d60>, <progress.module.GhostEvent object at 0x7fa0ebd25d00>, <progress.module.GhostEvent object at 0x7fa0ebd25eb0>, <progress.module.GhostEvent object at 0x7fa0ebd25f10>, <progress.module.GhostEvent object at 0x7fa0ebd25f40>, <progress.module.GhostEvent object at 0x7fa0ebd25f70>, <progress.module.GhostEvent object at 0x7fa0ebd25fa0>, <progress.module.GhostEvent object at 0x7fa0ebd25fd0>, <progress.module.GhostEvent object at 0x7fa0ebd32040>] historic events
Oct  8 05:47:47 np0005475493 ceph-mgr[73869]: [progress INFO root] Loaded OSDMap, ready.
Oct  8 05:47:47 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.mtagwx restarted
Oct  8 05:47:47 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.mtagwx started
Oct  8 05:47:47 np0005475493 ceph-mon[73572]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #21. Immutable memtables: 0.
Oct  8 05:47:47 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-09:47:47.789231) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  8 05:47:47 np0005475493 ceph-mon[73572]: rocksdb: [db/flush_job.cc:856] [default] [JOB 5] Flushing memtable with next log file: 21
Oct  8 05:47:47 np0005475493 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759916867789333, "job": 5, "event": "flush_started", "num_memtables": 1, "num_entries": 524, "num_deletes": 251, "total_data_size": 944358, "memory_usage": 955552, "flush_reason": "Manual Compaction"}
Oct  8 05:47:47 np0005475493 ceph-mon[73572]: rocksdb: [db/flush_job.cc:885] [default] [JOB 5] Level-0 flush table #22: started
Oct  8 05:47:47 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:47:47 np0005475493 ceph-mgr[73869]: mgr load Constructed class from module: prometheus
Oct  8 05:47:47 np0005475493 ceph-mgr[73869]: [prometheus INFO root] server_addr: :: server_port: 9283
Oct  8 05:47:47 np0005475493 ceph-mgr[73869]: [prometheus INFO root] Cache enabled
Oct  8 05:47:47 np0005475493 ceph-mgr[73869]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  8 05:47:47 np0005475493 ceph-mgr[73869]: [prometheus INFO root] starting metric collection thread
Oct  8 05:47:47 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  8 05:47:47 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  8 05:47:47 np0005475493 ceph-mgr[73869]: [prometheus INFO root] Starting engine...
Oct  8 05:47:47 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.error] [08/Oct/2025:09:47:47] ENGINE Bus STARTING
Oct  8 05:47:47 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: [08/Oct/2025:09:47:47] ENGINE Bus STARTING
Oct  8 05:47:47 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: CherryPy Checker:
Oct  8 05:47:47 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: The Application mounted at '' has an empty config.
Oct  8 05:47:47 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 
Oct  8 05:47:47 np0005475493 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759916867816265, "cf_name": "default", "job": 5, "event": "table_file_creation", "file_number": 22, "file_size": 939999, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 7494, "largest_seqno": 8017, "table_properties": {"data_size": 936979, "index_size": 864, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 8456, "raw_average_key_size": 20, "raw_value_size": 930271, "raw_average_value_size": 2220, "num_data_blocks": 37, "num_entries": 419, "num_filter_entries": 419, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759916853, "oldest_key_time": 1759916853, "file_creation_time": 1759916867, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fe81d9b-468a-4413-adf1-4e4bd83383d4", "db_session_id": "KN4HYS7VUCE6V85JIQOU", "orig_file_number": 22, "seqno_to_time_mapping": "N/A"}}
Oct  8 05:47:47 np0005475493 ceph-mon[73572]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 5] Flush lasted 27093 microseconds, and 4668 cpu microseconds.
Oct  8 05:47:47 np0005475493 ceph-mon[73572]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  8 05:47:47 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-09:47:47.816336) [db/flush_job.cc:967] [default] [JOB 5] Level-0 flush table #22: 939999 bytes OK
Oct  8 05:47:47 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-09:47:47.816372) [db/memtable_list.cc:519] [default] Level-0 commit table #22 started
Oct  8 05:47:47 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-09:47:47.820257) [db/memtable_list.cc:722] [default] Level-0 commit table #22: memtable #1 done
Oct  8 05:47:47 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-09:47:47.820275) EVENT_LOG_v1 {"time_micros": 1759916867820269, "job": 5, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  8 05:47:47 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-09:47:47.820303) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  8 05:47:47 np0005475493 ceph-mon[73572]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 5] Try to delete WAL files size 941087, prev total WAL file size 941087, number of live WAL files 2.
Oct  8 05:47:47 np0005475493 ceph-mon[73572]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000018.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  8 05:47:47 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-09:47:47.820915) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730030' seq:72057594037927935, type:22 .. '7061786F7300323532' seq:0, type:0; will stop at (end)
Oct  8 05:47:47 np0005475493 ceph-mon[73572]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 6] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  8 05:47:47 np0005475493 ceph-mon[73572]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 5 Base level 0, inputs: [22(917KB)], [20(11MB)]
Oct  8 05:47:47 np0005475493 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759916867821147, "job": 6, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [22], "files_L6": [20], "score": -1, "input_data_size": 13217889, "oldest_snapshot_seqno": -1}
Oct  8 05:47:47 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] recovery thread starting
Oct  8 05:47:47 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] starting setup
Oct  8 05:47:47 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.ixicfj/mirror_snapshot_schedule"} v 0)
Oct  8 05:47:47 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.ixicfj/mirror_snapshot_schedule"}]: dispatch
Oct  8 05:47:47 np0005475493 ceph-mgr[73869]: mgr load Constructed class from module: rbd_support
Oct  8 05:47:47 np0005475493 ceph-mgr[73869]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  8 05:47:47 np0005475493 ceph-mgr[73869]: mgr load Constructed class from module: restful
Oct  8 05:47:47 np0005475493 ceph-mgr[73869]: [restful INFO root] server_addr: :: server_port: 8003
Oct  8 05:47:47 np0005475493 ceph-mgr[73869]: [restful WARNING root] server not running: no certificate configured
Oct  8 05:47:47 np0005475493 ceph-mgr[73869]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  8 05:47:47 np0005475493 ceph-mgr[73869]: mgr load Constructed class from module: status
Oct  8 05:47:47 np0005475493 ceph-mgr[73869]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  8 05:47:47 np0005475493 ceph-mgr[73869]: mgr load Constructed class from module: telemetry
Oct  8 05:47:47 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  8 05:47:47 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  8 05:47:47 np0005475493 ceph-mgr[73869]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  8 05:47:47 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  8 05:47:47 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  8 05:47:47 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  8 05:47:47 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Oct  8 05:47:47 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] PerfHandler: starting
Oct  8 05:47:47 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_task_task: vms, start_after=
Oct  8 05:47:47 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_task_task: volumes, start_after=
Oct  8 05:47:47 np0005475493 ceph-mgr[73869]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Oct  8 05:47:47 np0005475493 ceph-mgr[73869]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Oct  8 05:47:47 np0005475493 ceph-mgr[73869]: mgr load Constructed class from module: volumes
Oct  8 05:47:47 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:47:47.920+0000 7fa0ced58640 -1 client.0 error registering admin socket command: (17) File exists
Oct  8 05:47:47 np0005475493 ceph-mgr[73869]: client.0 error registering admin socket command: (17) File exists
Oct  8 05:47:47 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:47:47.920+0000 7fa0d4d64640 -1 client.0 error registering admin socket command: (17) File exists
Oct  8 05:47:47 np0005475493 ceph-mgr[73869]: client.0 error registering admin socket command: (17) File exists
Oct  8 05:47:47 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:47:47.920+0000 7fa0d4d64640 -1 client.0 error registering admin socket command: (17) File exists
Oct  8 05:47:47 np0005475493 ceph-mgr[73869]: client.0 error registering admin socket command: (17) File exists
Oct  8 05:47:47 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:47:47.920+0000 7fa0d4d64640 -1 client.0 error registering admin socket command: (17) File exists
Oct  8 05:47:47 np0005475493 ceph-mgr[73869]: client.0 error registering admin socket command: (17) File exists
Oct  8 05:47:47 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:47:47.920+0000 7fa0d4d64640 -1 client.0 error registering admin socket command: (17) File exists
Oct  8 05:47:47 np0005475493 ceph-mgr[73869]: client.0 error registering admin socket command: (17) File exists
Oct  8 05:47:47 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T09:47:47.920+0000 7fa0d4d64640 -1 client.0 error registering admin socket command: (17) File exists
Oct  8 05:47:47 np0005475493 ceph-mgr[73869]: client.0 error registering admin socket command: (17) File exists
Oct  8 05:47:47 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_task_task: backups, start_after=
Oct  8 05:47:47 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_task_task: images, start_after=
Oct  8 05:47:47 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFS -> /api/cephfs
Oct  8 05:47:47 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFsUi -> /ui-api/cephfs
Oct  8 05:47:47 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolume -> /api/cephfs/subvolume
Oct  8 05:47:47 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolumeGroups -> /api/cephfs/subvolume/group
Oct  8 05:47:47 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolumeSnapshots -> /api/cephfs/subvolume/snapshot
Oct  8 05:47:47 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFsSnapshotClone -> /api/cephfs/subvolume/snapshot/clone
Oct  8 05:47:47 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSnapshotSchedule -> /api/cephfs/snapshot/schedule
Oct  8 05:47:47 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: IscsiUi -> /ui-api/iscsi
Oct  8 05:47:47 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Iscsi -> /api/iscsi
Oct  8 05:47:47 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: IscsiTarget -> /api/iscsi/target
Oct  8 05:47:47 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaCluster -> /api/nfs-ganesha/cluster
Oct  8 05:47:47 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaExports -> /api/nfs-ganesha/export
Oct  8 05:47:47 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaUi -> /ui-api/nfs-ganesha
Oct  8 05:47:47 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Orchestrator -> /ui-api/orchestrator
Oct  8 05:47:47 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Service -> /api/service
Oct  8 05:47:47 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroring -> /api/block/mirroring
Oct  8 05:47:47 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringSummary -> /api/block/mirroring/summary
Oct  8 05:47:47 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolMode -> /api/block/mirroring/pool
Oct  8 05:47:47 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolBootstrap -> /api/block/mirroring/pool/{pool_name}/bootstrap
Oct  8 05:47:47 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolPeer -> /api/block/mirroring/pool/{pool_name}/peer
Oct  8 05:47:47 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringStatus -> /ui-api/block/mirroring
Oct  8 05:47:47 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Pool -> /api/pool
Oct  8 05:47:47 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PoolUi -> /ui-api/pool
Oct  8 05:47:47 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RBDPool -> /api/pool
Oct  8 05:47:47 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Rbd -> /api/block/image
Oct  8 05:47:47 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdStatus -> /ui-api/block/rbd
Oct  8 05:47:47 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdSnapshot -> /api/block/image/{image_spec}/snap
Oct  8 05:47:47 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdTrash -> /api/block/image/trash
Oct  8 05:47:47 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdNamespace -> /api/block/pool/{pool_name}/namespace
Oct  8 05:47:47 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Rgw -> /ui-api/rgw
Oct  8 05:47:47 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwMultisiteStatus -> /ui-api/rgw/multisite
Oct  8 05:47:47 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwMultisiteController -> /api/rgw/multisite
Oct  8 05:47:47 np0005475493 ceph-mon[73572]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 6] Generated table #23: 3148 keys, 12002416 bytes, temperature: kUnknown
Oct  8 05:47:47 np0005475493 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759916867946561, "cf_name": "default", "job": 6, "event": "table_file_creation", "file_number": 23, "file_size": 12002416, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11977584, "index_size": 15891, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 7877, "raw_key_size": 81561, "raw_average_key_size": 25, "raw_value_size": 11915700, "raw_average_value_size": 3785, "num_data_blocks": 691, "num_entries": 3148, "num_filter_entries": 3148, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759916581, "oldest_key_time": 0, "file_creation_time": 1759916867, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fe81d9b-468a-4413-adf1-4e4bd83383d4", "db_session_id": "KN4HYS7VUCE6V85JIQOU", "orig_file_number": 23, "seqno_to_time_mapping": "N/A"}}
Oct  8 05:47:47 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwDaemon -> /api/rgw/daemon
Oct  8 05:47:47 np0005475493 ceph-mon[73572]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  8 05:47:47 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwSite -> /api/rgw/site
Oct  8 05:47:47 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwBucket -> /api/rgw/bucket
Oct  8 05:47:47 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwBucketUi -> /ui-api/rgw/bucket
Oct  8 05:47:47 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwUser -> /api/rgw/user
Oct  8 05:47:47 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: rgwroles_CRUDClass -> /api/rgw/roles
Oct  8 05:47:47 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: rgwroles_CRUDClassMetadata -> /ui-api/rgw/roles
Oct  8 05:47:47 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwRealm -> /api/rgw/realm
Oct  8 05:47:47 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwZonegroup -> /api/rgw/zonegroup
Oct  8 05:47:47 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwZone -> /api/rgw/zone
Oct  8 05:47:47 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-09:47:47.946819) [db/compaction/compaction_job.cc:1663] [default] [JOB 6] Compacted 1@0 + 1@6 files to L6 => 12002416 bytes
Oct  8 05:47:47 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-09:47:47.954943) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 105.4 rd, 95.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 11.7 +0.0 blob) out(11.4 +0.0 blob), read-write-amplify(26.8) write-amplify(12.8) OK, records in: 3672, records dropped: 524 output_compression: NoCompression
Oct  8 05:47:47 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-09:47:47.955027) EVENT_LOG_v1 {"time_micros": 1759916867954988, "job": 6, "event": "compaction_finished", "compaction_time_micros": 125452, "compaction_time_cpu_micros": 28598, "output_level": 6, "num_output_files": 1, "total_output_size": 12002416, "num_input_records": 3672, "num_output_records": 3148, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  8 05:47:47 np0005475493 ceph-mon[73572]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000022.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  8 05:47:47 np0005475493 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759916867955531, "job": 6, "event": "table_file_deletion", "file_number": 22}
Oct  8 05:47:47 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Auth -> /api/auth
Oct  8 05:47:47 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: clusteruser_CRUDClass -> /api/cluster/user
Oct  8 05:47:47 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: clusteruser_CRUDClassMetadata -> /ui-api/cluster/user
Oct  8 05:47:47 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Cluster -> /api/cluster
Oct  8 05:47:47 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ClusterUpgrade -> /api/cluster/upgrade
Oct  8 05:47:47 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ClusterConfiguration -> /api/cluster_conf
Oct  8 05:47:47 np0005475493 ceph-mon[73572]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000020.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  8 05:47:47 np0005475493 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759916867958163, "job": 6, "event": "table_file_deletion", "file_number": 20}
Oct  8 05:47:47 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-09:47:47.820832) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  8 05:47:47 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-09:47:47.958344) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  8 05:47:47 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-09:47:47.958352) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  8 05:47:47 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-09:47:47.958353) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  8 05:47:47 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-09:47:47.958477) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  8 05:47:47 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-09:47:47.958487) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  8 05:47:47 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CrushRule -> /api/crush_rule
Oct  8 05:47:47 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CrushRuleUi -> /ui-api/crush_rule
Oct  8 05:47:47 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Daemon -> /api/daemon
Oct  8 05:47:47 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Docs -> /docs
Oct  8 05:47:47 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ErasureCodeProfile -> /api/erasure_code_profile
Oct  8 05:47:47 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ErasureCodeProfileUi -> /ui-api/erasure_code_profile
Oct  8 05:47:47 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackController -> /api/feedback
Oct  8 05:47:47 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackApiController -> /api/feedback/api_key
Oct  8 05:47:47 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackUiController -> /ui-api/feedback/api_key
Oct  8 05:47:47 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FrontendLogging -> /ui-api/logging
Oct  8 05:47:47 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Grafana -> /api/grafana
Oct  8 05:47:47 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Host -> /api/host
Oct  8 05:47:47 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: HostUi -> /ui-api/host
Oct  8 05:47:47 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Health -> /api/health
Oct  8 05:47:47 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: HomeController -> /
Oct  8 05:47:47 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] TaskHandler: starting
Oct  8 05:47:47 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.ixicfj/trash_purge_schedule"} v 0)
Oct  8 05:47:47 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.ixicfj/trash_purge_schedule"}]: dispatch
Oct  8 05:47:47 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: LangsController -> /ui-api/langs
Oct  8 05:47:47 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  8 05:47:47 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  8 05:47:47 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  8 05:47:47 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: LoginController -> /ui-api/login
Oct  8 05:47:47 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Logs -> /api/logs
Oct  8 05:47:47 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MgrModules -> /api/mgr/module
Oct  8 05:47:47 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Monitor -> /api/monitor
Oct  8 05:47:47 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFGateway -> /api/nvmeof/gateway
Oct  8 05:47:47 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFSpdk -> /api/nvmeof/spdk
Oct  8 05:47:47 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFSubsystem -> /api/nvmeof/subsystem
Oct  8 05:47:47 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFListener -> /api/nvmeof/subsystem/{nqn}/listener
Oct  8 05:47:47 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFNamespace -> /api/nvmeof/subsystem/{nqn}/namespace
Oct  8 05:47:47 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFHost -> /api/nvmeof/subsystem/{nqn}/host
Oct  8 05:47:47 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFConnection -> /api/nvmeof/subsystem/{nqn}/connection
Oct  8 05:47:47 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFTcpUI -> /ui-api/nvmeof
Oct  8 05:47:47 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Osd -> /api/osd
Oct  8 05:47:47 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdUi -> /ui-api/osd
Oct  8 05:47:47 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdFlagsController -> /api/osd/flags
Oct  8 05:47:47 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MdsPerfCounter -> /api/perf_counters/mds
Oct  8 05:47:47 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MonPerfCounter -> /api/perf_counters/mon
Oct  8 05:47:47 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdPerfCounter -> /api/perf_counters/osd
Oct  8 05:47:47 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwPerfCounter -> /api/perf_counters/rgw
Oct  8 05:47:47 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirrorPerfCounter -> /api/perf_counters/rbd-mirror
Oct  8 05:47:47 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MgrPerfCounter -> /api/perf_counters/mgr
Oct  8 05:47:47 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: TcmuRunnerPerfCounter -> /api/perf_counters/tcmu-runner
Oct  8 05:47:47 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PerfCounters -> /api/perf_counters
Oct  8 05:47:47 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusReceiver -> /api/prometheus_receiver
Oct  8 05:47:47 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Prometheus -> /api/prometheus
Oct  8 05:47:47 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusNotifications -> /api/prometheus/notifications
Oct  8 05:47:47 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusSettings -> /ui-api/prometheus
Oct  8 05:47:47 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Role -> /api/role
Oct  8 05:47:47 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Scope -> /ui-api/scope
Oct  8 05:47:47 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Saml2 -> /auth/saml2
Oct  8 05:47:47 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Settings -> /api/settings
Oct  8 05:47:47 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: StandardSettings -> /ui-api/standard_settings
Oct  8 05:47:47 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Summary -> /api/summary
Oct  8 05:47:47 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Task -> /api/task
Oct  8 05:47:47 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Telemetry -> /api/telemetry
Oct  8 05:47:47 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: User -> /api/user
Oct  8 05:47:47 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: UserPasswordPolicy -> /api/user
Oct  8 05:47:47 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: UserChangePassword -> /api/user/{username}
Oct  8 05:47:47 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeatureTogglesEndpoint -> /api/feature_toggles
Oct  8 05:47:47 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MessageOfTheDay -> /ui-api/motd
Oct  8 05:47:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  8 05:47:48 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: [08/Oct/2025:09:47:48] ENGINE Serving on http://:::9283
Oct  8 05:47:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  8 05:47:48 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.error] [08/Oct/2025:09:47:48] ENGINE Serving on http://:::9283
Oct  8 05:47:48 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: [08/Oct/2025:09:47:48] ENGINE Bus STARTED
Oct  8 05:47:48 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.error] [08/Oct/2025:09:47:48] ENGINE Bus STARTED
Oct  8 05:47:48 np0005475493 ceph-mgr[73869]: [prometheus INFO root] Engine started.
Oct  8 05:47:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Oct  8 05:47:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] setup complete
Oct  8 05:47:48 np0005475493 systemd-logind[798]: New session 39 of user ceph-admin.
Oct  8 05:47:48 np0005475493 systemd[1]: Started Session 39 of User ceph-admin.
Oct  8 05:47:48 np0005475493 ceph-mgr[73869]: [dashboard INFO dashboard.module] Engine started.
Oct  8 05:47:48 np0005475493 python3.9[99710]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  8 05:47:48 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e70 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  8 05:47:48 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 11.1f scrub starts
Oct  8 05:47:48 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 11.1f scrub ok
Oct  8 05:47:48 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : mgrmap e29: compute-0.ixicfj(active, since 1.22103s), standbys: compute-2.mtagwx, compute-1.swlvov
Oct  8 05:47:48 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v3: 353 pgs: 353 active+clean; 456 KiB data, 107 MiB used, 60 GiB / 60 GiB avail
Oct  8 05:47:48 np0005475493 ceph-mon[73572]: Active manager daemon compute-0.ixicfj restarted
Oct  8 05:47:48 np0005475493 ceph-mon[73572]: Activating manager daemon compute-0.ixicfj
Oct  8 05:47:48 np0005475493 ceph-mon[73572]: Manager daemon compute-0.ixicfj is now available
Oct  8 05:47:48 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:47:48 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.ixicfj/mirror_snapshot_schedule"}]: dispatch
Oct  8 05:47:48 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.ixicfj/trash_purge_schedule"}]: dispatch
Oct  8 05:47:48 np0005475493 podman[99954]: 2025-10-08 09:47:48.896263915 +0000 UTC m=+0.112956965 container exec 01c666addd8584f02eb7d29040d97e45d8cb7f834fd134029c1f7cca550aeb9d (image=quay.io/ceph/ceph:v19, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-mon-compute-0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Oct  8 05:47:48 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:47:48 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6600003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:47:48 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:47:48 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:47:48 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:47:48.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:47:49 np0005475493 podman[99954]: 2025-10-08 09:47:49.003328302 +0000 UTC m=+0.220021352 container exec_died 01c666addd8584f02eb7d29040d97e45d8cb7f834fd134029c1f7cca550aeb9d (image=quay.io/ceph/ceph:v19, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  8 05:47:49 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:47:49 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:47:49 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:47:49.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:47:49 np0005475493 ceph-mgr[73869]: [cephadm INFO cherrypy.error] [08/Oct/2025:09:47:49] ENGINE Bus STARTING
Oct  8 05:47:49 np0005475493 ceph-mgr[73869]: log_channel(cephadm) log [INF] : [08/Oct/2025:09:47:49] ENGINE Bus STARTING
Oct  8 05:47:49 np0005475493 ceph-mgr[73869]: [cephadm INFO cherrypy.error] [08/Oct/2025:09:47:49] ENGINE Serving on http://192.168.122.100:8765
Oct  8 05:47:49 np0005475493 ceph-mgr[73869]: log_channel(cephadm) log [INF] : [08/Oct/2025:09:47:49] ENGINE Serving on http://192.168.122.100:8765
Oct  8 05:47:49 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:47:49 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6614003f10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:47:49 np0005475493 ceph-mgr[73869]: [cephadm INFO cherrypy.error] [08/Oct/2025:09:47:49] ENGINE Serving on https://192.168.122.100:7150
Oct  8 05:47:49 np0005475493 ceph-mgr[73869]: log_channel(cephadm) log [INF] : [08/Oct/2025:09:47:49] ENGINE Serving on https://192.168.122.100:7150
Oct  8 05:47:49 np0005475493 ceph-mgr[73869]: [cephadm INFO cherrypy.error] [08/Oct/2025:09:47:49] ENGINE Bus STARTED
Oct  8 05:47:49 np0005475493 ceph-mgr[73869]: log_channel(cephadm) log [INF] : [08/Oct/2025:09:47:49] ENGINE Bus STARTED
Oct  8 05:47:49 np0005475493 ceph-mgr[73869]: [cephadm INFO cherrypy.error] [08/Oct/2025:09:47:49] ENGINE Client ('192.168.122.100', 43154) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Oct  8 05:47:49 np0005475493 ceph-mgr[73869]: log_channel(cephadm) log [INF] : [08/Oct/2025:09:47:49] ENGINE Client ('192.168.122.100', 43154) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Oct  8 05:47:49 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:47:49 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6618003770 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:47:49 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 11.10 scrub starts
Oct  8 05:47:49 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 11.10 scrub ok
Oct  8 05:47:49 np0005475493 podman[100141]: 2025-10-08 09:47:49.551521333 +0000 UTC m=+0.067864491 container exec 0dbea514cc83cec397480471ade0bebb407738deaf60dfa42fe4b53fba64588f (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  8 05:47:49 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v4: 353 pgs: 353 active+clean; 456 KiB data, 107 MiB used, 60 GiB / 60 GiB avail
Oct  8 05:47:49 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"} v 0)
Oct  8 05:47:49 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]: dispatch
Oct  8 05:47:49 np0005475493 podman[100166]: 2025-10-08 09:47:49.622223014 +0000 UTC m=+0.054991676 container exec_died 0dbea514cc83cec397480471ade0bebb407738deaf60dfa42fe4b53fba64588f (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  8 05:47:49 np0005475493 podman[100141]: 2025-10-08 09:47:49.633420117 +0000 UTC m=+0.149763285 container exec_died 0dbea514cc83cec397480471ade0bebb407738deaf60dfa42fe4b53fba64588f (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  8 05:47:49 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e70 do_prune osdmap full prune enabled
Oct  8 05:47:49 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Oct  8 05:47:49 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e71 e71: 3 total, 3 up, 3 in
Oct  8 05:47:49 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e71: 3 total, 3 up, 3 in
Oct  8 05:47:49 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 71 pg[9.16( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=4 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=71 pruub=8.475421906s) [0] r=-1 lpr=71 pi=[54,71)/1 crt=45'1018 lcod 0'0 mlcod 0'0 active pruub 206.314956665s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:47:49 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 71 pg[9.16( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=4 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=71 pruub=8.475279808s) [0] r=-1 lpr=71 pi=[54,71)/1 crt=45'1018 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 206.314956665s@ mbc={}] state<Start>: transitioning to Stray
Oct  8 05:47:49 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 71 pg[9.e( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=6 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=71 pruub=8.475381851s) [0] r=-1 lpr=71 pi=[54,71)/1 crt=45'1018 lcod 0'0 mlcod 0'0 active pruub 206.315155029s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:47:49 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 71 pg[9.e( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=6 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=71 pruub=8.475350380s) [0] r=-1 lpr=71 pi=[54,71)/1 crt=45'1018 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 206.315155029s@ mbc={}] state<Start>: transitioning to Stray
Oct  8 05:47:49 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 71 pg[9.6( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=6 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=71 pruub=8.477218628s) [0] r=-1 lpr=71 pi=[54,71)/1 crt=45'1018 lcod 0'0 mlcod 0'0 active pruub 206.317718506s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:47:49 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 71 pg[9.6( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=6 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=71 pruub=8.477195740s) [0] r=-1 lpr=71 pi=[54,71)/1 crt=45'1018 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 206.317718506s@ mbc={}] state<Start>: transitioning to Stray
Oct  8 05:47:49 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 71 pg[9.1e( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=5 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=71 pruub=8.477206230s) [0] r=-1 lpr=71 pi=[54,71)/1 crt=45'1018 lcod 0'0 mlcod 0'0 active pruub 206.318115234s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:47:49 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 71 pg[9.1e( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=5 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=71 pruub=8.477184296s) [0] r=-1 lpr=71 pi=[54,71)/1 crt=45'1018 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 206.318115234s@ mbc={}] state<Start>: transitioning to Stray
Oct  8 05:47:49 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]: dispatch
Oct  8 05:47:49 np0005475493 ceph-mgr[73869]: [devicehealth INFO root] Check health
Oct  8 05:47:49 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : mgrmap e30: compute-0.ixicfj(active, since 2s), standbys: compute-2.mtagwx, compute-1.swlvov
Oct  8 05:47:49 np0005475493 podman[100227]: 2025-10-08 09:47:49.921737362 +0000 UTC m=+0.073714547 container exec c5f79ead609d5155b918e60a1470581982785e4aac0e1c185a19093b2bdf84dc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  8 05:47:49 np0005475493 podman[100227]: 2025-10-08 09:47:49.942344411 +0000 UTC m=+0.094321596 container exec_died c5f79ead609d5155b918e60a1470581982785e4aac0e1c185a19093b2bdf84dc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  8 05:47:50 np0005475493 podman[100294]: 2025-10-08 09:47:50.184937604 +0000 UTC m=+0.071866099 container exec 1c113ffcb0d41c6337c256a4892f13e82bdea6ca8062b10324516aa631301ea5 (image=quay.io/ceph/haproxy:2.3, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-haproxy-nfs-cephfs-compute-0-cwhopp)
Oct  8 05:47:50 np0005475493 podman[100294]: 2025-10-08 09:47:50.194549517 +0000 UTC m=+0.081478032 container exec_died 1c113ffcb0d41c6337c256a4892f13e82bdea6ca8062b10324516aa631301ea5 (image=quay.io/ceph/haproxy:2.3, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-haproxy-nfs-cephfs-compute-0-cwhopp)
Oct  8 05:47:50 np0005475493 podman[100358]: 2025-10-08 09:47:50.442824208 +0000 UTC m=+0.061049417 container exec 5814788fb4b63196d097e0c60082efdca22825a3ba337ddec5c99170a1bfd89d (image=quay.io/ceph/keepalived:2.2.4, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-keepalived-nfs-cephfs-compute-0-ekerbw, distribution-scope=public, summary=Provides keepalived on RHEL 9 for Ceph., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-type=git, version=2.2.4, architecture=x86_64, name=keepalived, description=keepalived for Ceph, io.openshift.expose-services=, release=1793, build-date=2023-02-22T09:23:20, io.k8s.display-name=Keepalived on RHEL 9, io.buildah.version=1.28.2, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.component=keepalived-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=Ceph keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc.)
Oct  8 05:47:50 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 8.13 scrub starts
Oct  8 05:47:50 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 8.13 scrub ok
Oct  8 05:47:50 np0005475493 podman[100358]: 2025-10-08 09:47:50.482302883 +0000 UTC m=+0.100527992 container exec_died 5814788fb4b63196d097e0c60082efdca22825a3ba337ddec5c99170a1bfd89d (image=quay.io/ceph/keepalived:2.2.4, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-keepalived-nfs-cephfs-compute-0-ekerbw, io.openshift.expose-services=, release=1793, summary=Provides keepalived on RHEL 9 for Ceph., vcs-type=git, io.k8s.display-name=Keepalived on RHEL 9, name=keepalived, io.buildah.version=1.28.2, io.openshift.tags=Ceph keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, description=keepalived for Ceph, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, version=2.2.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=keepalived-container, architecture=x86_64, build-date=2023-02-22T09:23:20)
Oct  8 05:47:50 np0005475493 podman[100477]: 2025-10-08 09:47:50.723908784 +0000 UTC m=+0.064163814 container exec 8d342ba820880b4150da0520ca2c2bd15e2dae173214d906ae46d290c18c1a1e (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  8 05:47:50 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e71 do_prune osdmap full prune enabled
Oct  8 05:47:50 np0005475493 podman[100477]: 2025-10-08 09:47:50.768499491 +0000 UTC m=+0.108754531 container exec_died 8d342ba820880b4150da0520ca2c2bd15e2dae173214d906ae46d290c18c1a1e (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  8 05:47:50 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e72 e72: 3 total, 3 up, 3 in
Oct  8 05:47:50 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e72: 3 total, 3 up, 3 in
Oct  8 05:47:50 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 72 pg[9.e( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=6 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=72) [0]/[1] r=0 lpr=72 pi=[54,72)/1 crt=45'1018 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:47:50 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 72 pg[9.1e( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=5 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=72) [0]/[1] r=0 lpr=72 pi=[54,72)/1 crt=45'1018 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:47:50 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 72 pg[9.e( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=6 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=72) [0]/[1] r=0 lpr=72 pi=[54,72)/1 crt=45'1018 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  8 05:47:50 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 72 pg[9.6( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=6 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=72) [0]/[1] r=0 lpr=72 pi=[54,72)/1 crt=45'1018 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:47:50 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 72 pg[9.16( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=4 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=72) [0]/[1] r=0 lpr=72 pi=[54,72)/1 crt=45'1018 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:47:50 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 72 pg[9.6( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=6 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=72) [0]/[1] r=0 lpr=72 pi=[54,72)/1 crt=45'1018 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  8 05:47:50 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 72 pg[9.16( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=4 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=72) [0]/[1] r=0 lpr=72 pi=[54,72)/1 crt=45'1018 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  8 05:47:50 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 72 pg[9.1e( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=5 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=72) [0]/[1] r=0 lpr=72 pi=[54,72)/1 crt=45'1018 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  8 05:47:50 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:47:50 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f66280025c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:47:50 np0005475493 ceph-mon[73572]: [08/Oct/2025:09:47:49] ENGINE Bus STARTING
Oct  8 05:47:50 np0005475493 ceph-mon[73572]: [08/Oct/2025:09:47:49] ENGINE Serving on http://192.168.122.100:8765
Oct  8 05:47:50 np0005475493 ceph-mon[73572]: [08/Oct/2025:09:47:49] ENGINE Serving on https://192.168.122.100:7150
Oct  8 05:47:50 np0005475493 ceph-mon[73572]: [08/Oct/2025:09:47:49] ENGINE Bus STARTED
Oct  8 05:47:50 np0005475493 ceph-mon[73572]: [08/Oct/2025:09:47:49] ENGINE Client ('192.168.122.100', 43154) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Oct  8 05:47:50 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Oct  8 05:47:50 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:47:50 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:47:50 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:47:50.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:47:50 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct  8 05:47:51 np0005475493 podman[100616]: 2025-10-08 09:47:51.005358282 +0000 UTC m=+0.066130087 container exec 56b2a7b6ecfb4c45a674b20b3f1d60f53e5d4fac0af3b6685d98e6cab4ce59f5 (image=quay.io/ceph/grafana:10.4.0, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Oct  8 05:47:51 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:47:51 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct  8 05:47:51 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:47:51 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:47:51 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:47:51 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:47:51.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:47:51 np0005475493 python3.9[100636]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail#012pushd /var/tmp#012curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz#012pushd repo-setup-main#012python3 -m venv ./venv#012PBR_VERSION=0.0.0 ./venv/bin/pip install ./#012./venv/bin/repo-setup current-podified -b antelope#012popd#012rm -rf repo-setup-main#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  8 05:47:51 np0005475493 podman[100616]: 2025-10-08 09:47:51.186738444 +0000 UTC m=+0.247510219 container exec_died 56b2a7b6ecfb4c45a674b20b3f1d60f53e5d4fac0af3b6685d98e6cab4ce59f5 (image=quay.io/ceph/grafana:10.4.0, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Oct  8 05:47:51 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:47:51 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6600003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:47:51 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct  8 05:47:51 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 11.11 scrub starts
Oct  8 05:47:51 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:47:51 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct  8 05:47:51 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 11.11 scrub ok
Oct  8 05:47:51 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:47:51 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:47:51 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6614003f10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:47:51 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v7: 353 pgs: 353 active+clean; 456 KiB data, 107 MiB used, 60 GiB / 60 GiB avail
Oct  8 05:47:51 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"} v 0)
Oct  8 05:47:51 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]: dispatch
Oct  8 05:47:51 np0005475493 podman[100739]: 2025-10-08 09:47:51.609477269 +0000 UTC m=+0.076108552 container exec 50d7285a7766fc915688c3cc737e186f9ad2230d7b0b7e759880375ad996bb6c (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  8 05:47:51 np0005475493 podman[100739]: 2025-10-08 09:47:51.651468883 +0000 UTC m=+0.118100146 container exec_died 50d7285a7766fc915688c3cc737e186f9ad2230d7b0b7e759880375ad996bb6c (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  8 05:47:51 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  8 05:47:51 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:47:51 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  8 05:47:51 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e72 do_prune osdmap full prune enabled
Oct  8 05:47:51 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:47:51 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Oct  8 05:47:51 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e73 e73: 3 total, 3 up, 3 in
Oct  8 05:47:51 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e73: 3 total, 3 up, 3 in
Oct  8 05:47:51 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:47:51 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:47:51 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:47:51 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:47:51 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]: dispatch
Oct  8 05:47:51 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:47:51 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:47:51 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Oct  8 05:47:52 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : mgrmap e31: compute-0.ixicfj(active, since 4s), standbys: compute-2.mtagwx, compute-1.swlvov
Oct  8 05:47:52 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct  8 05:47:52 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:47:52 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct  8 05:47:52 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 73 pg[9.1e( v 45'1018 (0'0,45'1018] local-lis/les=72/73 n=5 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=72) [0]/[1] async=[0] r=0 lpr=72 pi=[54,72)/1 crt=45'1018 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:47:52 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 73 pg[9.e( v 45'1018 (0'0,45'1018] local-lis/les=72/73 n=6 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=72) [0]/[1] async=[0] r=0 lpr=72 pi=[54,72)/1 crt=45'1018 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:47:52 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 73 pg[9.6( v 45'1018 (0'0,45'1018] local-lis/les=72/73 n=6 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=72) [0]/[1] async=[0] r=0 lpr=72 pi=[54,72)/1 crt=45'1018 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:47:52 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 73 pg[9.16( v 45'1018 (0'0,45'1018] local-lis/les=72/73 n=4 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=72) [0]/[1] async=[0] r=0 lpr=72 pi=[54,72)/1 crt=45'1018 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:47:52 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:47:52 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0)
Oct  8 05:47:52 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Oct  8 05:47:52 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 10.13 deep-scrub starts
Oct  8 05:47:52 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 10.13 deep-scrub ok
Oct  8 05:47:52 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct  8 05:47:52 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:47:52 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct  8 05:47:52 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:47:52 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0)
Oct  8 05:47:52 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct  8 05:47:52 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  8 05:47:52 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:47:52 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  8 05:47:52 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:47:52 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Oct  8 05:47:52 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct  8 05:47:52 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  8 05:47:52 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  8 05:47:52 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct  8 05:47:52 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  8 05:47:52 np0005475493 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Oct  8 05:47:52 np0005475493 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Oct  8 05:47:52 np0005475493 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.conf
Oct  8 05:47:52 np0005475493 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.conf
Oct  8 05:47:52 np0005475493 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.conf
Oct  8 05:47:52 np0005475493 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.conf
Oct  8 05:47:52 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:47:52 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6618003770 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:47:52 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:47:52 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:47:52 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Oct  8 05:47:52 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:47:52 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:47:52 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct  8 05:47:52 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:47:52 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:47:52 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct  8 05:47:52 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  8 05:47:52 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:47:52 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:47:52 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:47:52.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:47:53 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:47:53 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:47:53 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:47:53.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:47:53 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e73 do_prune osdmap full prune enabled
Oct  8 05:47:53 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:47:53 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f66280025c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:47:53 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e74 e74: 3 total, 3 up, 3 in
Oct  8 05:47:53 np0005475493 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.conf
Oct  8 05:47:53 np0005475493 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.conf
Oct  8 05:47:53 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e74: 3 total, 3 up, 3 in
Oct  8 05:47:53 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 74 pg[9.16( v 45'1018 (0'0,45'1018] local-lis/les=72/73 n=4 ec=54/38 lis/c=72/54 les/c/f=73/56/0 sis=74 pruub=14.815571785s) [0] async=[0] r=-1 lpr=74 pi=[54,74)/1 crt=45'1018 lcod 0'0 mlcod 0'0 active pruub 216.221969604s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:47:53 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 74 pg[9.16( v 45'1018 (0'0,45'1018] local-lis/les=72/73 n=4 ec=54/38 lis/c=72/54 les/c/f=73/56/0 sis=74 pruub=14.815517426s) [0] r=-1 lpr=74 pi=[54,74)/1 crt=45'1018 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 216.221969604s@ mbc={}] state<Start>: transitioning to Stray
Oct  8 05:47:53 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 74 pg[9.6( v 45'1018 (0'0,45'1018] local-lis/les=72/73 n=6 ec=54/38 lis/c=72/54 les/c/f=73/56/0 sis=74 pruub=14.815158844s) [0] async=[0] r=-1 lpr=74 pi=[54,74)/1 crt=45'1018 lcod 0'0 mlcod 0'0 active pruub 216.221939087s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:47:53 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 74 pg[9.6( v 45'1018 (0'0,45'1018] local-lis/les=72/73 n=6 ec=54/38 lis/c=72/54 les/c/f=73/56/0 sis=74 pruub=14.815108299s) [0] r=-1 lpr=74 pi=[54,74)/1 crt=45'1018 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 216.221939087s@ mbc={}] state<Start>: transitioning to Stray
Oct  8 05:47:53 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 74 pg[9.1e( v 45'1018 (0'0,45'1018] local-lis/les=72/73 n=5 ec=54/38 lis/c=72/54 les/c/f=73/56/0 sis=74 pruub=14.810555458s) [0] async=[0] r=-1 lpr=74 pi=[54,74)/1 crt=45'1018 lcod 0'0 mlcod 0'0 active pruub 216.217407227s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:47:53 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 74 pg[9.1e( v 45'1018 (0'0,45'1018] local-lis/les=72/73 n=5 ec=54/38 lis/c=72/54 les/c/f=73/56/0 sis=74 pruub=14.810498238s) [0] r=-1 lpr=74 pi=[54,74)/1 crt=45'1018 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 216.217407227s@ mbc={}] state<Start>: transitioning to Stray
Oct  8 05:47:53 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 74 pg[9.e( v 45'1018 (0'0,45'1018] local-lis/les=72/73 n=6 ec=54/38 lis/c=72/54 les/c/f=73/56/0 sis=74 pruub=14.814806938s) [0] async=[0] r=-1 lpr=74 pi=[54,74)/1 crt=45'1018 lcod 0'0 mlcod 0'0 active pruub 216.221908569s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:47:53 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 74 pg[9.e( v 45'1018 (0'0,45'1018] local-lis/les=72/73 n=6 ec=54/38 lis/c=72/54 les/c/f=73/56/0 sis=74 pruub=14.814755440s) [0] r=-1 lpr=74 pi=[54,74)/1 crt=45'1018 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 216.221908569s@ mbc={}] state<Start>: transitioning to Stray
Oct  8 05:47:53 np0005475493 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.conf
Oct  8 05:47:53 np0005475493 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.conf
Oct  8 05:47:53 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e74 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  8 05:47:53 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 12.b scrub starts
Oct  8 05:47:53 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 12.b scrub ok
Oct  8 05:47:53 np0005475493 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.conf
Oct  8 05:47:53 np0005475493 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.conf
Oct  8 05:47:53 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:47:53 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6600003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:47:53 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v10: 353 pgs: 4 peering, 349 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 42 KiB/s rd, 0 B/s wr, 13 op/s; 54 B/s, 4 objects/s recovering
Oct  8 05:47:53 np0005475493 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Oct  8 05:47:53 np0005475493 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Oct  8 05:47:53 np0005475493 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Oct  8 05:47:53 np0005475493 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Oct  8 05:47:53 np0005475493 ceph-mon[73572]: Updating compute-0:/etc/ceph/ceph.conf
Oct  8 05:47:53 np0005475493 ceph-mon[73572]: Updating compute-1:/etc/ceph/ceph.conf
Oct  8 05:47:53 np0005475493 ceph-mon[73572]: Updating compute-2:/etc/ceph/ceph.conf
Oct  8 05:47:54 np0005475493 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Oct  8 05:47:54 np0005475493 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Oct  8 05:47:54 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e74 do_prune osdmap full prune enabled
Oct  8 05:47:54 np0005475493 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.client.admin.keyring
Oct  8 05:47:54 np0005475493 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.client.admin.keyring
Oct  8 05:47:54 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e75 e75: 3 total, 3 up, 3 in
Oct  8 05:47:54 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e75: 3 total, 3 up, 3 in
Oct  8 05:47:54 np0005475493 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.client.admin.keyring
Oct  8 05:47:54 np0005475493 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.client.admin.keyring
Oct  8 05:47:54 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 12.12 scrub starts
Oct  8 05:47:54 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 12.12 scrub ok
Oct  8 05:47:54 np0005475493 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.client.admin.keyring
Oct  8 05:47:54 np0005475493 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.client.admin.keyring
Oct  8 05:47:54 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  8 05:47:54 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:47:54 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct  8 05:47:54 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  8 05:47:54 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:47:54 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct  8 05:47:54 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:47:54 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:47:54 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:47:54 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6608001bd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:47:54 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:47:54 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:47:54 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:47:54.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:47:54 np0005475493 ceph-mon[73572]: Updating compute-1:/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.conf
Oct  8 05:47:54 np0005475493 ceph-mon[73572]: Updating compute-0:/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.conf
Oct  8 05:47:54 np0005475493 ceph-mon[73572]: Updating compute-2:/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.conf
Oct  8 05:47:54 np0005475493 ceph-mon[73572]: Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Oct  8 05:47:54 np0005475493 ceph-mon[73572]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Oct  8 05:47:54 np0005475493 ceph-mon[73572]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Oct  8 05:47:54 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:47:54 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:47:54 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:47:54 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:47:55 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:47:55 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:47:55 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:47:55.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:47:55 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:47:55 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6618003770 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:47:55 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct  8 05:47:55 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:47:55 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct  8 05:47:55 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:47:55 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct  8 05:47:55 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:47:55 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct  8 05:47:55 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:47:55 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct  8 05:47:55 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  8 05:47:55 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct  8 05:47:55 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  8 05:47:55 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  8 05:47:55 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  8 05:47:55 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 12.c scrub starts
Oct  8 05:47:55 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 12.c scrub ok
Oct  8 05:47:55 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:47:55 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f66280025c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:47:55 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v12: 353 pgs: 4 peering, 349 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 0 B/s wr, 11 op/s; 45 B/s, 4 objects/s recovering
Oct  8 05:47:55 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:09:47:55] "GET /metrics HTTP/1.1" 200 46587 "" "Prometheus/2.51.0"
Oct  8 05:47:55 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:09:47:55] "GET /metrics HTTP/1.1" 200 46587 "" "Prometheus/2.51.0"
Oct  8 05:47:55 np0005475493 podman[101932]: 2025-10-08 09:47:55.826721856 +0000 UTC m=+0.045905658 container create f7e8890caa1958847a9f7520ad3fc0383afb1f10a2e986f3e3cff07227d74be9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_mayer, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Oct  8 05:47:55 np0005475493 systemd[1]: Started libpod-conmon-f7e8890caa1958847a9f7520ad3fc0383afb1f10a2e986f3e3cff07227d74be9.scope.
Oct  8 05:47:55 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:47:55 np0005475493 podman[101932]: 2025-10-08 09:47:55.890076947 +0000 UTC m=+0.109260769 container init f7e8890caa1958847a9f7520ad3fc0383afb1f10a2e986f3e3cff07227d74be9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_mayer, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Oct  8 05:47:55 np0005475493 podman[101932]: 2025-10-08 09:47:55.800381177 +0000 UTC m=+0.019564989 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 05:47:55 np0005475493 podman[101932]: 2025-10-08 09:47:55.900688656 +0000 UTC m=+0.119872448 container start f7e8890caa1958847a9f7520ad3fc0383afb1f10a2e986f3e3cff07227d74be9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_mayer, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  8 05:47:55 np0005475493 priceless_mayer[101949]: 167 167
Oct  8 05:47:55 np0005475493 podman[101932]: 2025-10-08 09:47:55.905127179 +0000 UTC m=+0.124310981 container attach f7e8890caa1958847a9f7520ad3fc0383afb1f10a2e986f3e3cff07227d74be9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_mayer, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  8 05:47:55 np0005475493 systemd[1]: libpod-f7e8890caa1958847a9f7520ad3fc0383afb1f10a2e986f3e3cff07227d74be9.scope: Deactivated successfully.
Oct  8 05:47:55 np0005475493 podman[101932]: 2025-10-08 09:47:55.905690783 +0000 UTC m=+0.124874595 container died f7e8890caa1958847a9f7520ad3fc0383afb1f10a2e986f3e3cff07227d74be9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_mayer, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  8 05:47:55 np0005475493 systemd[1]: var-lib-containers-storage-overlay-79a4a504a07fa9c02ad08830b214ef315617fc4c2ee23688f067bad3aec25b07-merged.mount: Deactivated successfully.
Oct  8 05:47:55 np0005475493 podman[101932]: 2025-10-08 09:47:55.964261032 +0000 UTC m=+0.183444834 container remove f7e8890caa1958847a9f7520ad3fc0383afb1f10a2e986f3e3cff07227d74be9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_mayer, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Oct  8 05:47:55 np0005475493 systemd[1]: libpod-conmon-f7e8890caa1958847a9f7520ad3fc0383afb1f10a2e986f3e3cff07227d74be9.scope: Deactivated successfully.
Oct  8 05:47:55 np0005475493 ceph-mon[73572]: Updating compute-1:/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.client.admin.keyring
Oct  8 05:47:55 np0005475493 ceph-mon[73572]: Updating compute-0:/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.client.admin.keyring
Oct  8 05:47:55 np0005475493 ceph-mon[73572]: Updating compute-2:/var/lib/ceph/787292cc-8154-50c4-9e00-e9be3e817149/config/ceph.client.admin.keyring
Oct  8 05:47:55 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:47:55 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:47:55 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:47:55 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:47:55 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  8 05:47:56 np0005475493 podman[101973]: 2025-10-08 09:47:56.127556804 +0000 UTC m=+0.050840354 container create 467044f6c4f5f06d893d69ebbc94f370ec98bcd6232adaedf83e91347152ab11 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_grothendieck, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Oct  8 05:47:56 np0005475493 systemd[1]: Started libpod-conmon-467044f6c4f5f06d893d69ebbc94f370ec98bcd6232adaedf83e91347152ab11.scope.
Oct  8 05:47:56 np0005475493 podman[101973]: 2025-10-08 09:47:56.098547516 +0000 UTC m=+0.021831046 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 05:47:56 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:47:56 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14cfbb479dbfb60e4f48f5701dc0e799111f4417192c0db41b1ea8fa9a20c01c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  8 05:47:56 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14cfbb479dbfb60e4f48f5701dc0e799111f4417192c0db41b1ea8fa9a20c01c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 05:47:56 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14cfbb479dbfb60e4f48f5701dc0e799111f4417192c0db41b1ea8fa9a20c01c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 05:47:56 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14cfbb479dbfb60e4f48f5701dc0e799111f4417192c0db41b1ea8fa9a20c01c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  8 05:47:56 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14cfbb479dbfb60e4f48f5701dc0e799111f4417192c0db41b1ea8fa9a20c01c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  8 05:47:56 np0005475493 podman[101973]: 2025-10-08 09:47:56.225416841 +0000 UTC m=+0.148700371 container init 467044f6c4f5f06d893d69ebbc94f370ec98bcd6232adaedf83e91347152ab11 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_grothendieck, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Oct  8 05:47:56 np0005475493 podman[101973]: 2025-10-08 09:47:56.233373413 +0000 UTC m=+0.156656933 container start 467044f6c4f5f06d893d69ebbc94f370ec98bcd6232adaedf83e91347152ab11 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_grothendieck, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  8 05:47:56 np0005475493 podman[101973]: 2025-10-08 09:47:56.237196931 +0000 UTC m=+0.160480461 container attach 467044f6c4f5f06d893d69ebbc94f370ec98bcd6232adaedf83e91347152ab11 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_grothendieck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  8 05:47:56 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 10.8 scrub starts
Oct  8 05:47:56 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 10.8 scrub ok
Oct  8 05:47:56 np0005475493 objective_grothendieck[101989]: --> passed data devices: 0 physical, 1 LVM
Oct  8 05:47:56 np0005475493 objective_grothendieck[101989]: --> All data devices are unavailable
Oct  8 05:47:56 np0005475493 systemd[1]: libpod-467044f6c4f5f06d893d69ebbc94f370ec98bcd6232adaedf83e91347152ab11.scope: Deactivated successfully.
Oct  8 05:47:56 np0005475493 podman[101973]: 2025-10-08 09:47:56.567941499 +0000 UTC m=+0.491225009 container died 467044f6c4f5f06d893d69ebbc94f370ec98bcd6232adaedf83e91347152ab11 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_grothendieck, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  8 05:47:56 np0005475493 systemd[1]: var-lib-containers-storage-overlay-14cfbb479dbfb60e4f48f5701dc0e799111f4417192c0db41b1ea8fa9a20c01c-merged.mount: Deactivated successfully.
Oct  8 05:47:56 np0005475493 podman[101973]: 2025-10-08 09:47:56.633923386 +0000 UTC m=+0.557206906 container remove 467044f6c4f5f06d893d69ebbc94f370ec98bcd6232adaedf83e91347152ab11 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_grothendieck, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  8 05:47:56 np0005475493 systemd[1]: libpod-conmon-467044f6c4f5f06d893d69ebbc94f370ec98bcd6232adaedf83e91347152ab11.scope: Deactivated successfully.
Oct  8 05:47:56 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:47:56 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6600003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:47:56 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:47:56 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000026s ======
Oct  8 05:47:56 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:47:56.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  8 05:47:57 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:47:57 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:47:57 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:47:57.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:47:57 np0005475493 podman[102107]: 2025-10-08 09:47:57.201773651 +0000 UTC m=+0.076001433 container create 6b6d4c6f5aaf14123ad7f65e7457d45eaa448bf92233a9ee276df85b9d4ef0a3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_sutherland, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct  8 05:47:57 np0005475493 podman[102107]: 2025-10-08 09:47:57.146735022 +0000 UTC m=+0.020962854 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 05:47:57 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:47:57 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6608001bd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:47:57 np0005475493 systemd[1]: Started libpod-conmon-6b6d4c6f5aaf14123ad7f65e7457d45eaa448bf92233a9ee276df85b9d4ef0a3.scope.
Oct  8 05:47:57 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:47:57 np0005475493 podman[102107]: 2025-10-08 09:47:57.304601585 +0000 UTC m=+0.178829387 container init 6b6d4c6f5aaf14123ad7f65e7457d45eaa448bf92233a9ee276df85b9d4ef0a3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_sutherland, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  8 05:47:57 np0005475493 podman[102107]: 2025-10-08 09:47:57.310365721 +0000 UTC m=+0.184593503 container start 6b6d4c6f5aaf14123ad7f65e7457d45eaa448bf92233a9ee276df85b9d4ef0a3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_sutherland, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Oct  8 05:47:57 np0005475493 determined_sutherland[102128]: 167 167
Oct  8 05:47:57 np0005475493 systemd[1]: libpod-6b6d4c6f5aaf14123ad7f65e7457d45eaa448bf92233a9ee276df85b9d4ef0a3.scope: Deactivated successfully.
Oct  8 05:47:57 np0005475493 conmon[102128]: conmon 6b6d4c6f5aaf14123ad7 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6b6d4c6f5aaf14123ad7f65e7457d45eaa448bf92233a9ee276df85b9d4ef0a3.scope/container/memory.events
Oct  8 05:47:57 np0005475493 podman[102107]: 2025-10-08 09:47:57.333191502 +0000 UTC m=+0.207419294 container attach 6b6d4c6f5aaf14123ad7f65e7457d45eaa448bf92233a9ee276df85b9d4ef0a3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_sutherland, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid)
Oct  8 05:47:57 np0005475493 podman[102107]: 2025-10-08 09:47:57.333827678 +0000 UTC m=+0.208055460 container died 6b6d4c6f5aaf14123ad7f65e7457d45eaa448bf92233a9ee276df85b9d4ef0a3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_sutherland, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Oct  8 05:47:57 np0005475493 systemd[1]: var-lib-containers-storage-overlay-34a9f051e7fbadf65077511d5353aa3ea5a412bdf8e0a373c27924e7a37b6e72-merged.mount: Deactivated successfully.
Oct  8 05:47:57 np0005475493 podman[102107]: 2025-10-08 09:47:57.381005857 +0000 UTC m=+0.255233639 container remove 6b6d4c6f5aaf14123ad7f65e7457d45eaa448bf92233a9ee276df85b9d4ef0a3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_sutherland, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Oct  8 05:47:57 np0005475493 systemd[1]: libpod-conmon-6b6d4c6f5aaf14123ad7f65e7457d45eaa448bf92233a9ee276df85b9d4ef0a3.scope: Deactivated successfully.
Oct  8 05:47:57 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 12.8 scrub starts
Oct  8 05:47:57 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 12.8 scrub ok
Oct  8 05:47:57 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:47:57 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f66180038f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:47:57 np0005475493 podman[102155]: 2025-10-08 09:47:57.584811898 +0000 UTC m=+0.053737156 container create ce5bc5a496b7865775708af7465b26e38387674762ca9011df2af2f3cc324d2f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_lamarr, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  8 05:47:57 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v13: 353 pgs: 4 peering, 349 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 0 B/s wr, 9 op/s; 36 B/s, 3 objects/s recovering
Oct  8 05:47:57 np0005475493 systemd[1]: Started libpod-conmon-ce5bc5a496b7865775708af7465b26e38387674762ca9011df2af2f3cc324d2f.scope.
Oct  8 05:47:57 np0005475493 podman[102155]: 2025-10-08 09:47:57.55971678 +0000 UTC m=+0.028642058 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 05:47:57 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:47:57 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58d75dc9dc4dcbc3a1ca8bf441431beee6e4f690793d1f16a71e6a7dbb8590a1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  8 05:47:57 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58d75dc9dc4dcbc3a1ca8bf441431beee6e4f690793d1f16a71e6a7dbb8590a1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 05:47:57 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58d75dc9dc4dcbc3a1ca8bf441431beee6e4f690793d1f16a71e6a7dbb8590a1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 05:47:57 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58d75dc9dc4dcbc3a1ca8bf441431beee6e4f690793d1f16a71e6a7dbb8590a1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  8 05:47:57 np0005475493 podman[102155]: 2025-10-08 09:47:57.684192895 +0000 UTC m=+0.153118183 container init ce5bc5a496b7865775708af7465b26e38387674762ca9011df2af2f3cc324d2f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_lamarr, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  8 05:47:57 np0005475493 podman[102155]: 2025-10-08 09:47:57.690471145 +0000 UTC m=+0.159396393 container start ce5bc5a496b7865775708af7465b26e38387674762ca9011df2af2f3cc324d2f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_lamarr, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  8 05:47:57 np0005475493 podman[102155]: 2025-10-08 09:47:57.696304272 +0000 UTC m=+0.165229620 container attach ce5bc5a496b7865775708af7465b26e38387674762ca9011df2af2f3cc324d2f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_lamarr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  8 05:47:57 np0005475493 blissful_lamarr[102172]: {
Oct  8 05:47:57 np0005475493 blissful_lamarr[102172]:    "1": [
Oct  8 05:47:57 np0005475493 blissful_lamarr[102172]:        {
Oct  8 05:47:57 np0005475493 blissful_lamarr[102172]:            "devices": [
Oct  8 05:47:57 np0005475493 blissful_lamarr[102172]:                "/dev/loop3"
Oct  8 05:47:57 np0005475493 blissful_lamarr[102172]:            ],
Oct  8 05:47:57 np0005475493 blissful_lamarr[102172]:            "lv_name": "ceph_lv0",
Oct  8 05:47:57 np0005475493 blissful_lamarr[102172]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  8 05:47:57 np0005475493 blissful_lamarr[102172]:            "lv_size": "21470642176",
Oct  8 05:47:57 np0005475493 blissful_lamarr[102172]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=787292cc-8154-50c4-9e00-e9be3e817149,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=85fe3e7b-5e0f-4a19-934c-310215b2e933,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct  8 05:47:57 np0005475493 blissful_lamarr[102172]:            "lv_uuid": "znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj",
Oct  8 05:47:57 np0005475493 blissful_lamarr[102172]:            "name": "ceph_lv0",
Oct  8 05:47:57 np0005475493 blissful_lamarr[102172]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  8 05:47:57 np0005475493 blissful_lamarr[102172]:            "tags": {
Oct  8 05:47:57 np0005475493 blissful_lamarr[102172]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  8 05:47:57 np0005475493 blissful_lamarr[102172]:                "ceph.block_uuid": "znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj",
Oct  8 05:47:57 np0005475493 blissful_lamarr[102172]:                "ceph.cephx_lockbox_secret": "",
Oct  8 05:47:57 np0005475493 blissful_lamarr[102172]:                "ceph.cluster_fsid": "787292cc-8154-50c4-9e00-e9be3e817149",
Oct  8 05:47:57 np0005475493 blissful_lamarr[102172]:                "ceph.cluster_name": "ceph",
Oct  8 05:47:57 np0005475493 blissful_lamarr[102172]:                "ceph.crush_device_class": "",
Oct  8 05:47:57 np0005475493 blissful_lamarr[102172]:                "ceph.encrypted": "0",
Oct  8 05:47:57 np0005475493 blissful_lamarr[102172]:                "ceph.osd_fsid": "85fe3e7b-5e0f-4a19-934c-310215b2e933",
Oct  8 05:47:57 np0005475493 blissful_lamarr[102172]:                "ceph.osd_id": "1",
Oct  8 05:47:57 np0005475493 blissful_lamarr[102172]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  8 05:47:57 np0005475493 blissful_lamarr[102172]:                "ceph.type": "block",
Oct  8 05:47:57 np0005475493 blissful_lamarr[102172]:                "ceph.vdo": "0",
Oct  8 05:47:57 np0005475493 blissful_lamarr[102172]:                "ceph.with_tpm": "0"
Oct  8 05:47:57 np0005475493 blissful_lamarr[102172]:            },
Oct  8 05:47:57 np0005475493 blissful_lamarr[102172]:            "type": "block",
Oct  8 05:47:57 np0005475493 blissful_lamarr[102172]:            "vg_name": "ceph_vg0"
Oct  8 05:47:57 np0005475493 blissful_lamarr[102172]:        }
Oct  8 05:47:57 np0005475493 blissful_lamarr[102172]:    ]
Oct  8 05:47:57 np0005475493 blissful_lamarr[102172]: }
Oct  8 05:47:58 np0005475493 systemd[1]: libpod-ce5bc5a496b7865775708af7465b26e38387674762ca9011df2af2f3cc324d2f.scope: Deactivated successfully.
Oct  8 05:47:58 np0005475493 podman[102155]: 2025-10-08 09:47:58.01051791 +0000 UTC m=+0.479443158 container died ce5bc5a496b7865775708af7465b26e38387674762ca9011df2af2f3cc324d2f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_lamarr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct  8 05:47:58 np0005475493 systemd[1]: var-lib-containers-storage-overlay-58d75dc9dc4dcbc3a1ca8bf441431beee6e4f690793d1f16a71e6a7dbb8590a1-merged.mount: Deactivated successfully.
Oct  8 05:47:58 np0005475493 podman[102155]: 2025-10-08 09:47:58.155103776 +0000 UTC m=+0.624029024 container remove ce5bc5a496b7865775708af7465b26e38387674762ca9011df2af2f3cc324d2f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_lamarr, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  8 05:47:58 np0005475493 systemd[1]: libpod-conmon-ce5bc5a496b7865775708af7465b26e38387674762ca9011df2af2f3cc324d2f.scope: Deactivated successfully.
Oct  8 05:47:58 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e75 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  8 05:47:58 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 10.19 scrub starts
Oct  8 05:47:58 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 10.19 scrub ok
Oct  8 05:47:58 np0005475493 systemd[1]: session-38.scope: Deactivated successfully.
Oct  8 05:47:58 np0005475493 systemd[1]: session-38.scope: Consumed 8.080s CPU time.
Oct  8 05:47:58 np0005475493 systemd-logind[798]: Session 38 logged out. Waiting for processes to exit.
Oct  8 05:47:58 np0005475493 systemd-logind[798]: Removed session 38.
Oct  8 05:47:58 np0005475493 podman[102312]: 2025-10-08 09:47:58.787237935 +0000 UTC m=+0.057096933 container create e191ec23407f294c549ecb805a67db34524d8b6e78c8322e80615488bbf62ff0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_noether, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct  8 05:47:58 np0005475493 systemd[1]: Started libpod-conmon-e191ec23407f294c549ecb805a67db34524d8b6e78c8322e80615488bbf62ff0.scope.
Oct  8 05:47:58 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:47:58 np0005475493 podman[102312]: 2025-10-08 09:47:58.767166875 +0000 UTC m=+0.037025913 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 05:47:58 np0005475493 podman[102312]: 2025-10-08 09:47:58.86572226 +0000 UTC m=+0.135581258 container init e191ec23407f294c549ecb805a67db34524d8b6e78c8322e80615488bbf62ff0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_noether, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  8 05:47:58 np0005475493 podman[102312]: 2025-10-08 09:47:58.873858667 +0000 UTC m=+0.143717655 container start e191ec23407f294c549ecb805a67db34524d8b6e78c8322e80615488bbf62ff0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_noether, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  8 05:47:58 np0005475493 podman[102312]: 2025-10-08 09:47:58.877258564 +0000 UTC m=+0.147117552 container attach e191ec23407f294c549ecb805a67db34524d8b6e78c8322e80615488bbf62ff0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_noether, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  8 05:47:58 np0005475493 objective_noether[102330]: 167 167
Oct  8 05:47:58 np0005475493 systemd[1]: libpod-e191ec23407f294c549ecb805a67db34524d8b6e78c8322e80615488bbf62ff0.scope: Deactivated successfully.
Oct  8 05:47:58 np0005475493 podman[102312]: 2025-10-08 09:47:58.879648454 +0000 UTC m=+0.149507442 container died e191ec23407f294c549ecb805a67db34524d8b6e78c8322e80615488bbf62ff0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_noether, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  8 05:47:58 np0005475493 systemd[1]: var-lib-containers-storage-overlay-d74f7219d7bd527352181f4312edc2e4f72c2ee5fc73abbf4a32434fbf1b18a1-merged.mount: Deactivated successfully.
Oct  8 05:47:58 np0005475493 podman[102312]: 2025-10-08 09:47:58.918403109 +0000 UTC m=+0.188262097 container remove e191ec23407f294c549ecb805a67db34524d8b6e78c8322e80615488bbf62ff0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_noether, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  8 05:47:58 np0005475493 systemd[1]: libpod-conmon-e191ec23407f294c549ecb805a67db34524d8b6e78c8322e80615488bbf62ff0.scope: Deactivated successfully.
Oct  8 05:47:58 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:47:58 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f66280025c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:47:58 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:47:58 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:47:58 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:47:58.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:47:59 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:47:59 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:47:59 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:47:59.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:47:59 np0005475493 podman[102354]: 2025-10-08 09:47:59.076065428 +0000 UTC m=+0.043973650 container create c4d531b8593284c444a7ca8da083ac901426478775d6d36c9cc1b4d8bcf140bd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_moser, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  8 05:47:59 np0005475493 systemd[1]: Started libpod-conmon-c4d531b8593284c444a7ca8da083ac901426478775d6d36c9cc1b4d8bcf140bd.scope.
Oct  8 05:47:59 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:47:59 np0005475493 podman[102354]: 2025-10-08 09:47:59.054477218 +0000 UTC m=+0.022385480 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 05:47:59 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a814274e209a7fd6fc0a84b7c30f817e2090b63a3f08b70f4af2ae31fa994e8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  8 05:47:59 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a814274e209a7fd6fc0a84b7c30f817e2090b63a3f08b70f4af2ae31fa994e8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 05:47:59 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a814274e209a7fd6fc0a84b7c30f817e2090b63a3f08b70f4af2ae31fa994e8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 05:47:59 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a814274e209a7fd6fc0a84b7c30f817e2090b63a3f08b70f4af2ae31fa994e8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  8 05:47:59 np0005475493 podman[102354]: 2025-10-08 09:47:59.170833807 +0000 UTC m=+0.138742049 container init c4d531b8593284c444a7ca8da083ac901426478775d6d36c9cc1b4d8bcf140bd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_moser, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  8 05:47:59 np0005475493 podman[102354]: 2025-10-08 09:47:59.179928618 +0000 UTC m=+0.147836840 container start c4d531b8593284c444a7ca8da083ac901426478775d6d36c9cc1b4d8bcf140bd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_moser, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  8 05:47:59 np0005475493 podman[102354]: 2025-10-08 09:47:59.184184646 +0000 UTC m=+0.152092868 container attach c4d531b8593284c444a7ca8da083ac901426478775d6d36c9cc1b4d8bcf140bd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_moser, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Oct  8 05:47:59 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:47:59 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6600003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:47:59 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 10.18 scrub starts
Oct  8 05:47:59 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 10.18 scrub ok
Oct  8 05:47:59 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:47:59 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6608001bd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:47:59 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v14: 353 pgs: 353 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 3 op/s; 28 B/s, 2 objects/s recovering
Oct  8 05:47:59 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"} v 0)
Oct  8 05:47:59 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
Oct  8 05:47:59 np0005475493 lvm[102444]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct  8 05:47:59 np0005475493 lvm[102444]: VG ceph_vg0 finished
Oct  8 05:47:59 np0005475493 sharp_moser[102370]: {}
Oct  8 05:47:59 np0005475493 systemd[1]: libpod-c4d531b8593284c444a7ca8da083ac901426478775d6d36c9cc1b4d8bcf140bd.scope: Deactivated successfully.
Oct  8 05:47:59 np0005475493 systemd[1]: libpod-c4d531b8593284c444a7ca8da083ac901426478775d6d36c9cc1b4d8bcf140bd.scope: Consumed 1.110s CPU time.
Oct  8 05:47:59 np0005475493 podman[102354]: 2025-10-08 09:47:59.869939088 +0000 UTC m=+0.837847330 container died c4d531b8593284c444a7ca8da083ac901426478775d6d36c9cc1b4d8bcf140bd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_moser, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct  8 05:47:59 np0005475493 systemd[1]: var-lib-containers-storage-overlay-6a814274e209a7fd6fc0a84b7c30f817e2090b63a3f08b70f4af2ae31fa994e8-merged.mount: Deactivated successfully.
Oct  8 05:47:59 np0005475493 podman[102354]: 2025-10-08 09:47:59.913419654 +0000 UTC m=+0.881327896 container remove c4d531b8593284c444a7ca8da083ac901426478775d6d36c9cc1b4d8bcf140bd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_moser, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct  8 05:47:59 np0005475493 systemd[1]: libpod-conmon-c4d531b8593284c444a7ca8da083ac901426478775d6d36c9cc1b4d8bcf140bd.scope: Deactivated successfully.
Oct  8 05:47:59 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  8 05:47:59 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:47:59 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  8 05:47:59 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:48:00 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.prometheus}] v 0)
Oct  8 05:48:00 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:48:00 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e75 do_prune osdmap full prune enabled
Oct  8 05:48:00 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Oct  8 05:48:00 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e76 e76: 3 total, 3 up, 3 in
Oct  8 05:48:00 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e76: 3 total, 3 up, 3 in
Oct  8 05:48:00 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
Oct  8 05:48:00 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:48:00 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:48:00 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:48:00 np0005475493 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-0 (monmap changed)...
Oct  8 05:48:00 np0005475493 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-0 (monmap changed)...
Oct  8 05:48:00 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Oct  8 05:48:00 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Oct  8 05:48:00 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Oct  8 05:48:00 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Oct  8 05:48:00 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  8 05:48:00 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  8 05:48:00 np0005475493 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-0 on compute-0
Oct  8 05:48:00 np0005475493 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-0 on compute-0
Oct  8 05:48:00 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 12.1c scrub starts
Oct  8 05:48:00 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 12.1c scrub ok
Oct  8 05:48:00 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 76 pg[9.8( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=6 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=76 pruub=13.778754234s) [2] r=-1 lpr=76 pi=[54,76)/1 crt=45'1018 lcod 0'0 mlcod 0'0 active pruub 222.315643311s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:48:00 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 76 pg[9.8( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=6 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=76 pruub=13.778483391s) [2] r=-1 lpr=76 pi=[54,76)/1 crt=45'1018 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 222.315643311s@ mbc={}] state<Start>: transitioning to Stray
Oct  8 05:48:00 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 76 pg[9.18( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=5 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=76 pruub=13.780656815s) [2] r=-1 lpr=76 pi=[54,76)/1 crt=45'1018 lcod 0'0 mlcod 0'0 active pruub 222.318374634s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:48:00 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 76 pg[9.18( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=5 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=76 pruub=13.780499458s) [2] r=-1 lpr=76 pi=[54,76)/1 crt=45'1018 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 222.318374634s@ mbc={}] state<Start>: transitioning to Stray
Oct  8 05:48:00 np0005475493 podman[102576]: 2025-10-08 09:48:00.729937251 +0000 UTC m=+0.054365394 container create 7555f7dc3c10b3d9aecfa8e871434e441cbcc962863cf0518eaae03a2f33853c (image=quay.io/ceph/ceph:v19, name=xenodochial_lalande, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  8 05:48:00 np0005475493 systemd[1]: Started libpod-conmon-7555f7dc3c10b3d9aecfa8e871434e441cbcc962863cf0518eaae03a2f33853c.scope.
Oct  8 05:48:00 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:48:00 np0005475493 podman[102576]: 2025-10-08 09:48:00.7086864 +0000 UTC m=+0.033114633 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  8 05:48:00 np0005475493 podman[102576]: 2025-10-08 09:48:00.813918665 +0000 UTC m=+0.138346818 container init 7555f7dc3c10b3d9aecfa8e871434e441cbcc962863cf0518eaae03a2f33853c (image=quay.io/ceph/ceph:v19, name=xenodochial_lalande, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  8 05:48:00 np0005475493 podman[102576]: 2025-10-08 09:48:00.825017728 +0000 UTC m=+0.149445871 container start 7555f7dc3c10b3d9aecfa8e871434e441cbcc962863cf0518eaae03a2f33853c (image=quay.io/ceph/ceph:v19, name=xenodochial_lalande, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  8 05:48:00 np0005475493 podman[102576]: 2025-10-08 09:48:00.828751013 +0000 UTC m=+0.153179176 container attach 7555f7dc3c10b3d9aecfa8e871434e441cbcc962863cf0518eaae03a2f33853c (image=quay.io/ceph/ceph:v19, name=xenodochial_lalande, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  8 05:48:00 np0005475493 xenodochial_lalande[102593]: 167 167
Oct  8 05:48:00 np0005475493 systemd[1]: libpod-7555f7dc3c10b3d9aecfa8e871434e441cbcc962863cf0518eaae03a2f33853c.scope: Deactivated successfully.
Oct  8 05:48:00 np0005475493 podman[102576]: 2025-10-08 09:48:00.833717029 +0000 UTC m=+0.158145202 container died 7555f7dc3c10b3d9aecfa8e871434e441cbcc962863cf0518eaae03a2f33853c (image=quay.io/ceph/ceph:v19, name=xenodochial_lalande, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Oct  8 05:48:00 np0005475493 systemd[1]: var-lib-containers-storage-overlay-af9abe252850334ceae13b54661740285ff6e5db379db9340334795f26a8d33d-merged.mount: Deactivated successfully.
Oct  8 05:48:00 np0005475493 podman[102576]: 2025-10-08 09:48:00.87942181 +0000 UTC m=+0.203849953 container remove 7555f7dc3c10b3d9aecfa8e871434e441cbcc962863cf0518eaae03a2f33853c (image=quay.io/ceph/ceph:v19, name=xenodochial_lalande, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True)
Oct  8 05:48:00 np0005475493 systemd[1]: libpod-conmon-7555f7dc3c10b3d9aecfa8e871434e441cbcc962863cf0518eaae03a2f33853c.scope: Deactivated successfully.
Oct  8 05:48:00 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:48:00 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f66180038f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:48:00 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  8 05:48:00 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:48:00 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  8 05:48:00 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:48:00 np0005475493 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Reconfiguring mgr.compute-0.ixicfj (monmap changed)...
Oct  8 05:48:00 np0005475493 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Reconfiguring mgr.compute-0.ixicfj (monmap changed)...
Oct  8 05:48:00 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.ixicfj", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Oct  8 05:48:00 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.ixicfj", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Oct  8 05:48:00 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0)
Oct  8 05:48:00 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "mgr services"}]: dispatch
Oct  8 05:48:00 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  8 05:48:00 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  8 05:48:00 np0005475493 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.compute-0.ixicfj on compute-0
Oct  8 05:48:00 np0005475493 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.compute-0.ixicfj on compute-0
Oct  8 05:48:00 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:48:00 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:48:00 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:48:00.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:48:01 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:48:01 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:48:01 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:48:01.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:48:01 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e76 do_prune osdmap full prune enabled
Oct  8 05:48:01 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Oct  8 05:48:01 np0005475493 ceph-mon[73572]: Reconfiguring mon.compute-0 (monmap changed)...
Oct  8 05:48:01 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Oct  8 05:48:01 np0005475493 ceph-mon[73572]: Reconfiguring daemon mon.compute-0 on compute-0
Oct  8 05:48:01 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:48:01 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:48:01 np0005475493 ceph-mon[73572]: Reconfiguring mgr.compute-0.ixicfj (monmap changed)...
Oct  8 05:48:01 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.ixicfj", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Oct  8 05:48:01 np0005475493 ceph-mon[73572]: Reconfiguring daemon mgr.compute-0.ixicfj on compute-0
Oct  8 05:48:01 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e77 e77: 3 total, 3 up, 3 in
Oct  8 05:48:01 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e77: 3 total, 3 up, 3 in
Oct  8 05:48:01 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 77 pg[9.8( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=6 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=77) [2]/[1] r=0 lpr=77 pi=[54,77)/1 crt=45'1018 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:48:01 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 77 pg[9.8( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=6 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=77) [2]/[1] r=0 lpr=77 pi=[54,77)/1 crt=45'1018 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  8 05:48:01 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 77 pg[9.18( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=5 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=77) [2]/[1] r=0 lpr=77 pi=[54,77)/1 crt=45'1018 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:48:01 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 77 pg[9.18( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=5 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=77) [2]/[1] r=0 lpr=77 pi=[54,77)/1 crt=45'1018 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  8 05:48:01 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:48:01 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f66280025c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:48:01 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 10.1b scrub starts
Oct  8 05:48:01 np0005475493 podman[102680]: 2025-10-08 09:48:01.431996587 +0000 UTC m=+0.041797243 container create afc0290718592cb82dbb196acb4ab264d61c7eaf2c9640f539abb58dea63d6f9 (image=quay.io/ceph/ceph:v19, name=amazing_hodgkin, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  8 05:48:01 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 10.1b scrub ok
Oct  8 05:48:01 np0005475493 systemd[1]: Started libpod-conmon-afc0290718592cb82dbb196acb4ab264d61c7eaf2c9640f539abb58dea63d6f9.scope.
Oct  8 05:48:01 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:48:01 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6600003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:48:01 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:48:01 np0005475493 podman[102680]: 2025-10-08 09:48:01.501807022 +0000 UTC m=+0.111607698 container init afc0290718592cb82dbb196acb4ab264d61c7eaf2c9640f539abb58dea63d6f9 (image=quay.io/ceph/ceph:v19, name=amazing_hodgkin, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Oct  8 05:48:01 np0005475493 podman[102680]: 2025-10-08 09:48:01.413506708 +0000 UTC m=+0.023307394 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  8 05:48:01 np0005475493 podman[102680]: 2025-10-08 09:48:01.509796465 +0000 UTC m=+0.119597131 container start afc0290718592cb82dbb196acb4ab264d61c7eaf2c9640f539abb58dea63d6f9 (image=quay.io/ceph/ceph:v19, name=amazing_hodgkin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  8 05:48:01 np0005475493 amazing_hodgkin[102696]: 167 167
Oct  8 05:48:01 np0005475493 systemd[1]: libpod-afc0290718592cb82dbb196acb4ab264d61c7eaf2c9640f539abb58dea63d6f9.scope: Deactivated successfully.
Oct  8 05:48:01 np0005475493 podman[102680]: 2025-10-08 09:48:01.530901572 +0000 UTC m=+0.140702238 container attach afc0290718592cb82dbb196acb4ab264d61c7eaf2c9640f539abb58dea63d6f9 (image=quay.io/ceph/ceph:v19, name=amazing_hodgkin, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Oct  8 05:48:01 np0005475493 podman[102680]: 2025-10-08 09:48:01.531936118 +0000 UTC m=+0.141736784 container died afc0290718592cb82dbb196acb4ab264d61c7eaf2c9640f539abb58dea63d6f9 (image=quay.io/ceph/ceph:v19, name=amazing_hodgkin, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct  8 05:48:01 np0005475493 systemd[1]: var-lib-containers-storage-overlay-4e2e9c715cc8566725ca229240ac2c29167039bea26234fbb389363c4e4f2435-merged.mount: Deactivated successfully.
Oct  8 05:48:01 np0005475493 podman[102680]: 2025-10-08 09:48:01.570073298 +0000 UTC m=+0.179873954 container remove afc0290718592cb82dbb196acb4ab264d61c7eaf2c9640f539abb58dea63d6f9 (image=quay.io/ceph/ceph:v19, name=amazing_hodgkin, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Oct  8 05:48:01 np0005475493 systemd[1]: libpod-conmon-afc0290718592cb82dbb196acb4ab264d61c7eaf2c9640f539abb58dea63d6f9.scope: Deactivated successfully.
Oct  8 05:48:01 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v17: 353 pgs: 353 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail
Oct  8 05:48:01 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"} v 0)
Oct  8 05:48:01 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Oct  8 05:48:01 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  8 05:48:01 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:48:01 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  8 05:48:01 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:48:01 np0005475493 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Reconfiguring crash.compute-0 (monmap changed)...
Oct  8 05:48:01 np0005475493 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Reconfiguring crash.compute-0 (monmap changed)...
Oct  8 05:48:01 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Oct  8 05:48:01 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Oct  8 05:48:01 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  8 05:48:01 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  8 05:48:01 np0005475493 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.compute-0 on compute-0
Oct  8 05:48:01 np0005475493 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.compute-0 on compute-0
Oct  8 05:48:02 np0005475493 podman[102784]: 2025-10-08 09:48:02.05914684 +0000 UTC m=+0.079101222 container create b5d180217ae66af6f07d5bed36b101c6e56ebe99a6c18511c1726dc27d8c8a18 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_varahamihira, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  8 05:48:02 np0005475493 podman[102784]: 2025-10-08 09:48:01.999644577 +0000 UTC m=+0.019598989 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 05:48:02 np0005475493 systemd[1]: Started libpod-conmon-b5d180217ae66af6f07d5bed36b101c6e56ebe99a6c18511c1726dc27d8c8a18.scope.
Oct  8 05:48:02 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:48:02 np0005475493 podman[102784]: 2025-10-08 09:48:02.13267828 +0000 UTC m=+0.152632662 container init b5d180217ae66af6f07d5bed36b101c6e56ebe99a6c18511c1726dc27d8c8a18 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_varahamihira, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct  8 05:48:02 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Oct  8 05:48:02 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:48:02 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:48:02 np0005475493 ceph-mon[73572]: Reconfiguring crash.compute-0 (monmap changed)...
Oct  8 05:48:02 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Oct  8 05:48:02 np0005475493 ceph-mon[73572]: Reconfiguring daemon crash.compute-0 on compute-0
Oct  8 05:48:02 np0005475493 podman[102784]: 2025-10-08 09:48:02.137786169 +0000 UTC m=+0.157740551 container start b5d180217ae66af6f07d5bed36b101c6e56ebe99a6c18511c1726dc27d8c8a18 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_varahamihira, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  8 05:48:02 np0005475493 serene_varahamihira[102800]: 167 167
Oct  8 05:48:02 np0005475493 systemd[1]: libpod-b5d180217ae66af6f07d5bed36b101c6e56ebe99a6c18511c1726dc27d8c8a18.scope: Deactivated successfully.
Oct  8 05:48:02 np0005475493 podman[102784]: 2025-10-08 09:48:02.150187345 +0000 UTC m=+0.170141727 container attach b5d180217ae66af6f07d5bed36b101c6e56ebe99a6c18511c1726dc27d8c8a18 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_varahamihira, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  8 05:48:02 np0005475493 podman[102784]: 2025-10-08 09:48:02.150473281 +0000 UTC m=+0.170427663 container died b5d180217ae66af6f07d5bed36b101c6e56ebe99a6c18511c1726dc27d8c8a18 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_varahamihira, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  8 05:48:02 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e77 do_prune osdmap full prune enabled
Oct  8 05:48:02 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Oct  8 05:48:02 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e78 e78: 3 total, 3 up, 3 in
Oct  8 05:48:02 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e78: 3 total, 3 up, 3 in
Oct  8 05:48:02 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 78 pg[9.9( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=6 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=78 pruub=12.021979332s) [2] r=-1 lpr=78 pi=[54,78)/1 crt=45'1018 lcod 0'0 mlcod 0'0 active pruub 222.315689087s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:48:02 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 78 pg[9.9( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=6 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=78 pruub=12.021944046s) [2] r=-1 lpr=78 pi=[54,78)/1 crt=45'1018 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 222.315689087s@ mbc={}] state<Start>: transitioning to Stray
Oct  8 05:48:02 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 78 pg[9.19( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=5 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=78 pruub=12.022686005s) [2] r=-1 lpr=78 pi=[54,78)/1 crt=45'1018 lcod 0'0 mlcod 0'0 active pruub 222.318298340s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:48:02 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 78 pg[9.19( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=5 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=78 pruub=12.022484779s) [2] r=-1 lpr=78 pi=[54,78)/1 crt=45'1018 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 222.318298340s@ mbc={}] state<Start>: transitioning to Stray
Oct  8 05:48:02 np0005475493 systemd[1]: var-lib-containers-storage-overlay-10e3c527bb0fe897c9cd523253bdd42d01dc559f3bb5f5914ddc00ef0f60a6e1-merged.mount: Deactivated successfully.
Oct  8 05:48:02 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 78 pg[9.8( v 45'1018 (0'0,45'1018] local-lis/les=77/78 n=6 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=77) [2]/[1] async=[2] r=0 lpr=77 pi=[54,77)/1 crt=45'1018 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:48:02 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 78 pg[9.18( v 45'1018 (0'0,45'1018] local-lis/les=77/78 n=5 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=77) [2]/[1] async=[2] r=0 lpr=77 pi=[54,77)/1 crt=45'1018 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:48:02 np0005475493 podman[102784]: 2025-10-08 09:48:02.372303091 +0000 UTC m=+0.392257473 container remove b5d180217ae66af6f07d5bed36b101c6e56ebe99a6c18511c1726dc27d8c8a18 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_varahamihira, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  8 05:48:02 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 12.19 scrub starts
Oct  8 05:48:02 np0005475493 systemd[1]: libpod-conmon-b5d180217ae66af6f07d5bed36b101c6e56ebe99a6c18511c1726dc27d8c8a18.scope: Deactivated successfully.
Oct  8 05:48:02 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 12.19 scrub ok
Oct  8 05:48:02 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  8 05:48:02 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:48:02 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  8 05:48:02 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:48:02 np0005475493 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Reconfiguring osd.1 (monmap changed)...
Oct  8 05:48:02 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "osd.1"} v 0)
Oct  8 05:48:02 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Oct  8 05:48:02 np0005475493 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Reconfiguring osd.1 (monmap changed)...
Oct  8 05:48:02 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  8 05:48:02 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  8 05:48:02 np0005475493 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.1 on compute-0
Oct  8 05:48:02 np0005475493 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.1 on compute-0
Oct  8 05:48:02 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=infra.usagestats t=2025-10-08T09:48:02.55793221Z level=info msg="Usage stats are ready to report"
Oct  8 05:48:02 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  8 05:48:02 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  8 05:48:02 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:48:02 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f66180038f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:48:02 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:48:02 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:48:02 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:48:02.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:48:03 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:48:03 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000026s ======
Oct  8 05:48:03 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:48:03.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  8 05:48:03 np0005475493 podman[102881]: 2025-10-08 09:48:03.095850765 +0000 UTC m=+0.070650078 container create 0f55946eef5a01dd79d73c52883f5f826e0aa25190bc19c764145cfbb066ada1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_dirac, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Oct  8 05:48:03 np0005475493 systemd[1]: Started libpod-conmon-0f55946eef5a01dd79d73c52883f5f826e0aa25190bc19c764145cfbb066ada1.scope.
Oct  8 05:48:03 np0005475493 podman[102881]: 2025-10-08 09:48:03.06222409 +0000 UTC m=+0.037023383 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 05:48:03 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:48:03 np0005475493 podman[102881]: 2025-10-08 09:48:03.219829075 +0000 UTC m=+0.194628388 container init 0f55946eef5a01dd79d73c52883f5f826e0aa25190bc19c764145cfbb066ada1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_dirac, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  8 05:48:03 np0005475493 podman[102881]: 2025-10-08 09:48:03.229647976 +0000 UTC m=+0.204447239 container start 0f55946eef5a01dd79d73c52883f5f826e0aa25190bc19c764145cfbb066ada1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_dirac, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1)
Oct  8 05:48:03 np0005475493 gracious_dirac[102899]: 167 167
Oct  8 05:48:03 np0005475493 systemd[1]: libpod-0f55946eef5a01dd79d73c52883f5f826e0aa25190bc19c764145cfbb066ada1.scope: Deactivated successfully.
Oct  8 05:48:03 np0005475493 conmon[102899]: conmon 0f55946eef5a01dd79d7 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0f55946eef5a01dd79d73c52883f5f826e0aa25190bc19c764145cfbb066ada1.scope/container/memory.events
Oct  8 05:48:03 np0005475493 podman[102881]: 2025-10-08 09:48:03.247311075 +0000 UTC m=+0.222110388 container attach 0f55946eef5a01dd79d73c52883f5f826e0aa25190bc19c764145cfbb066ada1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_dirac, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  8 05:48:03 np0005475493 podman[102881]: 2025-10-08 09:48:03.248283189 +0000 UTC m=+0.223082492 container died 0f55946eef5a01dd79d73c52883f5f826e0aa25190bc19c764145cfbb066ada1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_dirac, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Oct  8 05:48:03 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:48:03 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f66180038f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:48:03 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e78 do_prune osdmap full prune enabled
Oct  8 05:48:03 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Oct  8 05:48:03 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:48:03 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:48:03 np0005475493 ceph-mon[73572]: Reconfiguring osd.1 (monmap changed)...
Oct  8 05:48:03 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Oct  8 05:48:03 np0005475493 ceph-mon[73572]: Reconfiguring daemon osd.1 on compute-0
Oct  8 05:48:03 np0005475493 systemd[1]: var-lib-containers-storage-overlay-4f2e2401d1093c138be06828d805a722fde45e5cd964d8ae2065523ac9e99033-merged.mount: Deactivated successfully.
Oct  8 05:48:03 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 12.e deep-scrub starts
Oct  8 05:48:03 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 12.e deep-scrub ok
Oct  8 05:48:03 np0005475493 podman[102881]: 2025-10-08 09:48:03.403766552 +0000 UTC m=+0.378565835 container remove 0f55946eef5a01dd79d73c52883f5f826e0aa25190bc19c764145cfbb066ada1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_dirac, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  8 05:48:03 np0005475493 systemd[1]: libpod-conmon-0f55946eef5a01dd79d73c52883f5f826e0aa25190bc19c764145cfbb066ada1.scope: Deactivated successfully.
Oct  8 05:48:03 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e79 e79: 3 total, 3 up, 3 in
Oct  8 05:48:03 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e79: 3 total, 3 up, 3 in
Oct  8 05:48:03 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 79 pg[9.9( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=6 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=79) [2]/[1] r=0 lpr=79 pi=[54,79)/1 crt=45'1018 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:48:03 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 79 pg[9.9( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=6 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=79) [2]/[1] r=0 lpr=79 pi=[54,79)/1 crt=45'1018 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  8 05:48:03 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 79 pg[9.8( v 45'1018 (0'0,45'1018] local-lis/les=77/78 n=6 ec=54/38 lis/c=77/54 les/c/f=78/56/0 sis=79 pruub=14.804843903s) [2] async=[2] r=-1 lpr=79 pi=[54,79)/1 crt=45'1018 lcod 0'0 mlcod 0'0 active pruub 226.336395264s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:48:03 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 79 pg[9.8( v 45'1018 (0'0,45'1018] local-lis/les=77/78 n=6 ec=54/38 lis/c=77/54 les/c/f=78/56/0 sis=79 pruub=14.804767609s) [2] r=-1 lpr=79 pi=[54,79)/1 crt=45'1018 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 226.336395264s@ mbc={}] state<Start>: transitioning to Stray
Oct  8 05:48:03 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 79 pg[9.18( v 45'1018 (0'0,45'1018] local-lis/les=77/78 n=5 ec=54/38 lis/c=77/54 les/c/f=78/56/0 sis=79 pruub=14.803595543s) [2] async=[2] r=-1 lpr=79 pi=[54,79)/1 crt=45'1018 lcod 0'0 mlcod 0'0 active pruub 226.336410522s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:48:03 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 79 pg[9.19( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=5 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=79) [2]/[1] r=0 lpr=79 pi=[54,79)/1 crt=45'1018 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:48:03 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 79 pg[9.18( v 45'1018 (0'0,45'1018] local-lis/les=77/78 n=5 ec=54/38 lis/c=77/54 les/c/f=78/56/0 sis=79 pruub=14.803548813s) [2] r=-1 lpr=79 pi=[54,79)/1 crt=45'1018 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 226.336410522s@ mbc={}] state<Start>: transitioning to Stray
Oct  8 05:48:03 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 79 pg[9.19( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=5 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=79) [2]/[1] r=0 lpr=79 pi=[54,79)/1 crt=45'1018 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  8 05:48:03 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:48:03 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f66280025c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:48:03 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  8 05:48:03 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:48:03 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  8 05:48:03 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v20: 353 pgs: 2 remapped+peering, 351 active+clean; 455 KiB data, 125 MiB used, 60 GiB / 60 GiB avail
Oct  8 05:48:03 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:48:03 np0005475493 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Reconfiguring node-exporter.compute-0 (unknown last config time)...
Oct  8 05:48:03 np0005475493 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Reconfiguring node-exporter.compute-0 (unknown last config time)...
Oct  8 05:48:03 np0005475493 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Reconfiguring daemon node-exporter.compute-0 on compute-0
Oct  8 05:48:03 np0005475493 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Reconfiguring daemon node-exporter.compute-0 on compute-0
Oct  8 05:48:04 np0005475493 systemd[1]: Stopping Ceph node-exporter.compute-0 for 787292cc-8154-50c4-9e00-e9be3e817149...
Oct  8 05:48:04 np0005475493 podman[103024]: 2025-10-08 09:48:04.234994613 +0000 UTC m=+0.048673609 container died 0dbea514cc83cec397480471ade0bebb407738deaf60dfa42fe4b53fba64588f (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  8 05:48:04 np0005475493 systemd[1]: var-lib-containers-storage-overlay-54af9510d66390823c3b362131dbb950b9145f4e5b56d1ab94c9e3f0f29ca9ac-merged.mount: Deactivated successfully.
Oct  8 05:48:04 np0005475493 podman[103024]: 2025-10-08 09:48:04.278264952 +0000 UTC m=+0.091943958 container remove 0dbea514cc83cec397480471ade0bebb407738deaf60dfa42fe4b53fba64588f (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  8 05:48:04 np0005475493 bash[103024]: ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0
Oct  8 05:48:04 np0005475493 systemd[1]: ceph-787292cc-8154-50c4-9e00-e9be3e817149@node-exporter.compute-0.service: Main process exited, code=exited, status=143/n/a
Oct  8 05:48:04 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 10.5 scrub starts
Oct  8 05:48:04 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 10.5 scrub ok
Oct  8 05:48:04 np0005475493 systemd[1]: ceph-787292cc-8154-50c4-9e00-e9be3e817149@node-exporter.compute-0.service: Failed with result 'exit-code'.
Oct  8 05:48:04 np0005475493 systemd[1]: Stopped Ceph node-exporter.compute-0 for 787292cc-8154-50c4-9e00-e9be3e817149.
Oct  8 05:48:04 np0005475493 systemd[1]: ceph-787292cc-8154-50c4-9e00-e9be3e817149@node-exporter.compute-0.service: Consumed 2.090s CPU time.
Oct  8 05:48:04 np0005475493 systemd[1]: Starting Ceph node-exporter.compute-0 for 787292cc-8154-50c4-9e00-e9be3e817149...
Oct  8 05:48:04 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e79 do_prune osdmap full prune enabled
Oct  8 05:48:04 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e80 e80: 3 total, 3 up, 3 in
Oct  8 05:48:04 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e80: 3 total, 3 up, 3 in
Oct  8 05:48:04 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:48:04 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:48:04 np0005475493 ceph-mon[73572]: Reconfiguring node-exporter.compute-0 (unknown last config time)...
Oct  8 05:48:04 np0005475493 ceph-mon[73572]: Reconfiguring daemon node-exporter.compute-0 on compute-0
Oct  8 05:48:04 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 80 pg[9.19( v 45'1018 (0'0,45'1018] local-lis/les=79/80 n=5 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=79) [2]/[1] async=[2] r=0 lpr=79 pi=[54,79)/1 crt=45'1018 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:48:04 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 80 pg[9.9( v 45'1018 (0'0,45'1018] local-lis/les=79/80 n=6 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=79) [2]/[1] async=[2] r=0 lpr=79 pi=[54,79)/1 crt=45'1018 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:48:04 np0005475493 podman[103127]: 2025-10-08 09:48:04.656272761 +0000 UTC m=+0.046693937 container create 4eb6b712d09e2820826e6f961f0aba1bff09f13eee1664dbb03edca7615a708b (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  8 05:48:04 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95dff869684ab02b35419a56871107ff724c8d375d95c3c72431a4297b3a8cef/merged/etc/node-exporter supports timestamps until 2038 (0x7fffffff)
Oct  8 05:48:04 np0005475493 podman[103127]: 2025-10-08 09:48:04.713431105 +0000 UTC m=+0.103852281 container init 4eb6b712d09e2820826e6f961f0aba1bff09f13eee1664dbb03edca7615a708b (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  8 05:48:04 np0005475493 podman[103127]: 2025-10-08 09:48:04.717931859 +0000 UTC m=+0.108353015 container start 4eb6b712d09e2820826e6f961f0aba1bff09f13eee1664dbb03edca7615a708b (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  8 05:48:04 np0005475493 bash[103127]: 4eb6b712d09e2820826e6f961f0aba1bff09f13eee1664dbb03edca7615a708b
Oct  8 05:48:04 np0005475493 podman[103127]: 2025-10-08 09:48:04.633590825 +0000 UTC m=+0.024012021 image pull 72c9c208898624938c9e4183d6686ea4a5fd3f912bc29bc3f00147924c521a3e quay.io/prometheus/node-exporter:v1.7.0
Oct  8 05:48:04 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[103142]: ts=2025-10-08T09:48:04.723Z caller=node_exporter.go:192 level=info msg="Starting node_exporter" version="(version=1.7.0, branch=HEAD, revision=7333465abf9efba81876303bb57e6fadb946041b)"
Oct  8 05:48:04 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[103142]: ts=2025-10-08T09:48:04.723Z caller=node_exporter.go:193 level=info msg="Build context" build_context="(go=go1.21.4, platform=linux/amd64, user=root@35918982f6d8, date=20231112-23:53:35, tags=netgo osusergo static_build)"
Oct  8 05:48:04 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[103142]: ts=2025-10-08T09:48:04.724Z caller=filesystem_common.go:111 level=info collector=filesystem msg="Parsed flag --collector.filesystem.mount-points-exclude" flag=^/(dev|proc|run/credentials/.+|sys|var/lib/docker/.+|var/lib/containers/storage/.+)($|/)
Oct  8 05:48:04 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[103142]: ts=2025-10-08T09:48:04.724Z caller=filesystem_common.go:113 level=info collector=filesystem msg="Parsed flag --collector.filesystem.fs-types-exclude" flag=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$
Oct  8 05:48:04 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[103142]: ts=2025-10-08T09:48:04.724Z caller=diskstats_common.go:111 level=info collector=diskstats msg="Parsed flag --collector.diskstats.device-exclude" flag=^(ram|loop|fd|(h|s|v|xv)d[a-z]|nvme\d+n\d+p)\d+$
Oct  8 05:48:04 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[103142]: ts=2025-10-08T09:48:04.725Z caller=diskstats_linux.go:265 level=error collector=diskstats msg="Failed to open directory, disabling udev device properties" path=/run/udev/data
Oct  8 05:48:04 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[103142]: ts=2025-10-08T09:48:04.725Z caller=node_exporter.go:110 level=info msg="Enabled collectors"
Oct  8 05:48:04 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[103142]: ts=2025-10-08T09:48:04.725Z caller=node_exporter.go:117 level=info collector=arp
Oct  8 05:48:04 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[103142]: ts=2025-10-08T09:48:04.725Z caller=node_exporter.go:117 level=info collector=bcache
Oct  8 05:48:04 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[103142]: ts=2025-10-08T09:48:04.725Z caller=node_exporter.go:117 level=info collector=bonding
Oct  8 05:48:04 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[103142]: ts=2025-10-08T09:48:04.725Z caller=node_exporter.go:117 level=info collector=btrfs
Oct  8 05:48:04 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[103142]: ts=2025-10-08T09:48:04.725Z caller=node_exporter.go:117 level=info collector=conntrack
Oct  8 05:48:04 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[103142]: ts=2025-10-08T09:48:04.725Z caller=node_exporter.go:117 level=info collector=cpu
Oct  8 05:48:04 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[103142]: ts=2025-10-08T09:48:04.725Z caller=node_exporter.go:117 level=info collector=cpufreq
Oct  8 05:48:04 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[103142]: ts=2025-10-08T09:48:04.725Z caller=node_exporter.go:117 level=info collector=diskstats
Oct  8 05:48:04 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[103142]: ts=2025-10-08T09:48:04.725Z caller=node_exporter.go:117 level=info collector=dmi
Oct  8 05:48:04 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[103142]: ts=2025-10-08T09:48:04.725Z caller=node_exporter.go:117 level=info collector=edac
Oct  8 05:48:04 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[103142]: ts=2025-10-08T09:48:04.725Z caller=node_exporter.go:117 level=info collector=entropy
Oct  8 05:48:04 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[103142]: ts=2025-10-08T09:48:04.725Z caller=node_exporter.go:117 level=info collector=fibrechannel
Oct  8 05:48:04 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[103142]: ts=2025-10-08T09:48:04.725Z caller=node_exporter.go:117 level=info collector=filefd
Oct  8 05:48:04 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[103142]: ts=2025-10-08T09:48:04.725Z caller=node_exporter.go:117 level=info collector=filesystem
Oct  8 05:48:04 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[103142]: ts=2025-10-08T09:48:04.725Z caller=node_exporter.go:117 level=info collector=hwmon
Oct  8 05:48:04 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[103142]: ts=2025-10-08T09:48:04.725Z caller=node_exporter.go:117 level=info collector=infiniband
Oct  8 05:48:04 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[103142]: ts=2025-10-08T09:48:04.725Z caller=node_exporter.go:117 level=info collector=ipvs
Oct  8 05:48:04 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[103142]: ts=2025-10-08T09:48:04.725Z caller=node_exporter.go:117 level=info collector=loadavg
Oct  8 05:48:04 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[103142]: ts=2025-10-08T09:48:04.725Z caller=node_exporter.go:117 level=info collector=mdadm
Oct  8 05:48:04 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[103142]: ts=2025-10-08T09:48:04.725Z caller=node_exporter.go:117 level=info collector=meminfo
Oct  8 05:48:04 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[103142]: ts=2025-10-08T09:48:04.725Z caller=node_exporter.go:117 level=info collector=netclass
Oct  8 05:48:04 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[103142]: ts=2025-10-08T09:48:04.725Z caller=node_exporter.go:117 level=info collector=netdev
Oct  8 05:48:04 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[103142]: ts=2025-10-08T09:48:04.725Z caller=node_exporter.go:117 level=info collector=netstat
Oct  8 05:48:04 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[103142]: ts=2025-10-08T09:48:04.725Z caller=node_exporter.go:117 level=info collector=nfs
Oct  8 05:48:04 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[103142]: ts=2025-10-08T09:48:04.725Z caller=node_exporter.go:117 level=info collector=nfsd
Oct  8 05:48:04 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[103142]: ts=2025-10-08T09:48:04.725Z caller=node_exporter.go:117 level=info collector=nvme
Oct  8 05:48:04 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[103142]: ts=2025-10-08T09:48:04.725Z caller=node_exporter.go:117 level=info collector=os
Oct  8 05:48:04 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[103142]: ts=2025-10-08T09:48:04.725Z caller=node_exporter.go:117 level=info collector=powersupplyclass
Oct  8 05:48:04 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[103142]: ts=2025-10-08T09:48:04.725Z caller=node_exporter.go:117 level=info collector=pressure
Oct  8 05:48:04 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[103142]: ts=2025-10-08T09:48:04.725Z caller=node_exporter.go:117 level=info collector=rapl
Oct  8 05:48:04 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[103142]: ts=2025-10-08T09:48:04.725Z caller=node_exporter.go:117 level=info collector=schedstat
Oct  8 05:48:04 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[103142]: ts=2025-10-08T09:48:04.725Z caller=node_exporter.go:117 level=info collector=selinux
Oct  8 05:48:04 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[103142]: ts=2025-10-08T09:48:04.725Z caller=node_exporter.go:117 level=info collector=sockstat
Oct  8 05:48:04 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[103142]: ts=2025-10-08T09:48:04.725Z caller=node_exporter.go:117 level=info collector=softnet
Oct  8 05:48:04 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[103142]: ts=2025-10-08T09:48:04.725Z caller=node_exporter.go:117 level=info collector=stat
Oct  8 05:48:04 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[103142]: ts=2025-10-08T09:48:04.725Z caller=node_exporter.go:117 level=info collector=tapestats
Oct  8 05:48:04 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[103142]: ts=2025-10-08T09:48:04.725Z caller=node_exporter.go:117 level=info collector=textfile
Oct  8 05:48:04 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[103142]: ts=2025-10-08T09:48:04.725Z caller=node_exporter.go:117 level=info collector=thermal_zone
Oct  8 05:48:04 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[103142]: ts=2025-10-08T09:48:04.725Z caller=node_exporter.go:117 level=info collector=time
Oct  8 05:48:04 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[103142]: ts=2025-10-08T09:48:04.725Z caller=node_exporter.go:117 level=info collector=udp_queues
Oct  8 05:48:04 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[103142]: ts=2025-10-08T09:48:04.725Z caller=node_exporter.go:117 level=info collector=uname
Oct  8 05:48:04 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[103142]: ts=2025-10-08T09:48:04.725Z caller=node_exporter.go:117 level=info collector=vmstat
Oct  8 05:48:04 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[103142]: ts=2025-10-08T09:48:04.725Z caller=node_exporter.go:117 level=info collector=xfs
Oct  8 05:48:04 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[103142]: ts=2025-10-08T09:48:04.725Z caller=node_exporter.go:117 level=info collector=zfs
Oct  8 05:48:04 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[103142]: ts=2025-10-08T09:48:04.726Z caller=tls_config.go:274 level=info msg="Listening on" address=[::]:9100
Oct  8 05:48:04 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0[103142]: ts=2025-10-08T09:48:04.726Z caller=tls_config.go:277 level=info msg="TLS is disabled." http2=false address=[::]:9100
Oct  8 05:48:04 np0005475493 systemd[1]: Started Ceph node-exporter.compute-0 for 787292cc-8154-50c4-9e00-e9be3e817149.
Oct  8 05:48:04 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  8 05:48:04 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:48:04 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  8 05:48:04 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:48:04 np0005475493 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Reconfiguring alertmanager.compute-0 (dependencies changed)...
Oct  8 05:48:04 np0005475493 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Reconfiguring alertmanager.compute-0 (dependencies changed)...
Oct  8 05:48:04 np0005475493 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Reconfiguring daemon alertmanager.compute-0 on compute-0
Oct  8 05:48:04 np0005475493 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Reconfiguring daemon alertmanager.compute-0 on compute-0
Oct  8 05:48:04 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:48:04 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f66280025c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:48:04 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:48:04 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:48:04 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:48:04.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:48:05 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:48:05 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:48:05 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:48:05.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:48:05 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:48:05 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f66180038f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:48:05 np0005475493 podman[103219]: 2025-10-08 09:48:05.289621342 +0000 UTC m=+0.043022604 volume create adf8338bf778e2a8bf2a17ac62f888750645e9e71143f0095f0534229c41927b
Oct  8 05:48:05 np0005475493 podman[103219]: 2025-10-08 09:48:05.298769145 +0000 UTC m=+0.052170407 container create 36eaad74f53ab0375a4524501bc18d21df4de75102e75385ed57269f755cc2ce (image=quay.io/prometheus/alertmanager:v0.25.0, name=agitated_taussig, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  8 05:48:05 np0005475493 systemd[1]: Started libpod-conmon-36eaad74f53ab0375a4524501bc18d21df4de75102e75385ed57269f755cc2ce.scope.
Oct  8 05:48:05 np0005475493 podman[103219]: 2025-10-08 09:48:05.269187152 +0000 UTC m=+0.022588444 image pull c8568f914cd25b2062c44e9f79f9c18da6e3b85fe0c47a12a2191c61426c2b19 quay.io/prometheus/alertmanager:v0.25.0
Oct  8 05:48:05 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:48:05 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8e5b704ece5d2d4f4dd02d747db42804a656fea49da8f4afdf3f5483f3d1a0e/merged/alertmanager supports timestamps until 2038 (0x7fffffff)
Oct  8 05:48:05 np0005475493 podman[103219]: 2025-10-08 09:48:05.390515347 +0000 UTC m=+0.143916619 container init 36eaad74f53ab0375a4524501bc18d21df4de75102e75385ed57269f755cc2ce (image=quay.io/prometheus/alertmanager:v0.25.0, name=agitated_taussig, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  8 05:48:05 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 12.10 scrub starts
Oct  8 05:48:05 np0005475493 podman[103219]: 2025-10-08 09:48:05.397523895 +0000 UTC m=+0.150925157 container start 36eaad74f53ab0375a4524501bc18d21df4de75102e75385ed57269f755cc2ce (image=quay.io/prometheus/alertmanager:v0.25.0, name=agitated_taussig, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  8 05:48:05 np0005475493 agitated_taussig[103235]: 65534 65534
Oct  8 05:48:05 np0005475493 systemd[1]: libpod-36eaad74f53ab0375a4524501bc18d21df4de75102e75385ed57269f755cc2ce.scope: Deactivated successfully.
Oct  8 05:48:05 np0005475493 podman[103219]: 2025-10-08 09:48:05.401167307 +0000 UTC m=+0.154568589 container attach 36eaad74f53ab0375a4524501bc18d21df4de75102e75385ed57269f755cc2ce (image=quay.io/prometheus/alertmanager:v0.25.0, name=agitated_taussig, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  8 05:48:05 np0005475493 podman[103219]: 2025-10-08 09:48:05.401564428 +0000 UTC m=+0.154965700 container died 36eaad74f53ab0375a4524501bc18d21df4de75102e75385ed57269f755cc2ce (image=quay.io/prometheus/alertmanager:v0.25.0, name=agitated_taussig, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  8 05:48:05 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 12.10 scrub ok
Oct  8 05:48:05 np0005475493 systemd[1]: var-lib-containers-storage-overlay-f8e5b704ece5d2d4f4dd02d747db42804a656fea49da8f4afdf3f5483f3d1a0e-merged.mount: Deactivated successfully.
Oct  8 05:48:05 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:48:05 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6608001bd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:48:05 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e80 do_prune osdmap full prune enabled
Oct  8 05:48:05 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v22: 353 pgs: 2 peering, 2 remapped+peering, 349 active+clean; 455 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 49 B/s, 2 objects/s recovering
Oct  8 05:48:05 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:48:05 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:48:05 np0005475493 ceph-mon[73572]: Reconfiguring alertmanager.compute-0 (dependencies changed)...
Oct  8 05:48:05 np0005475493 ceph-mon[73572]: Reconfiguring daemon alertmanager.compute-0 on compute-0
Oct  8 05:48:05 np0005475493 podman[103219]: 2025-10-08 09:48:05.634771766 +0000 UTC m=+0.388173028 container remove 36eaad74f53ab0375a4524501bc18d21df4de75102e75385ed57269f755cc2ce (image=quay.io/prometheus/alertmanager:v0.25.0, name=agitated_taussig, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  8 05:48:05 np0005475493 podman[103219]: 2025-10-08 09:48:05.648437394 +0000 UTC m=+0.401838656 volume remove adf8338bf778e2a8bf2a17ac62f888750645e9e71143f0095f0534229c41927b
Oct  8 05:48:05 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e81 e81: 3 total, 3 up, 3 in
Oct  8 05:48:05 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e81: 3 total, 3 up, 3 in
Oct  8 05:48:05 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:09:48:05] "GET /metrics HTTP/1.1" 200 48278 "" "Prometheus/2.51.0"
Oct  8 05:48:05 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:09:48:05] "GET /metrics HTTP/1.1" 200 48278 "" "Prometheus/2.51.0"
Oct  8 05:48:05 np0005475493 podman[103251]: 2025-10-08 09:48:05.755511976 +0000 UTC m=+0.091135898 volume create a2a0a9d8a3f4496418a0bb6851aef58b5aa2d7d827aa03106d2e326dfe9b006d
Oct  8 05:48:05 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 81 pg[9.19( v 45'1018 (0'0,45'1018] local-lis/les=79/80 n=5 ec=54/38 lis/c=79/54 les/c/f=80/56/0 sis=81 pruub=14.740247726s) [2] async=[2] r=-1 lpr=81 pi=[54,81)/1 crt=45'1018 lcod 0'0 mlcod 0'0 active pruub 228.572326660s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:48:05 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 81 pg[9.19( v 45'1018 (0'0,45'1018] local-lis/les=79/80 n=5 ec=54/38 lis/c=79/54 les/c/f=80/56/0 sis=81 pruub=14.739789963s) [2] r=-1 lpr=81 pi=[54,81)/1 crt=45'1018 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 228.572326660s@ mbc={}] state<Start>: transitioning to Stray
Oct  8 05:48:05 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 81 pg[9.9( v 45'1018 (0'0,45'1018] local-lis/les=79/80 n=6 ec=54/38 lis/c=79/54 les/c/f=80/56/0 sis=81 pruub=14.739488602s) [2] async=[2] r=-1 lpr=81 pi=[54,81)/1 crt=45'1018 lcod 0'0 mlcod 0'0 active pruub 228.572357178s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:48:05 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 81 pg[9.9( v 45'1018 (0'0,45'1018] local-lis/les=79/80 n=6 ec=54/38 lis/c=79/54 les/c/f=80/56/0 sis=81 pruub=14.739418030s) [2] r=-1 lpr=81 pi=[54,81)/1 crt=45'1018 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 228.572357178s@ mbc={}] state<Start>: transitioning to Stray
Oct  8 05:48:05 np0005475493 podman[103251]: 2025-10-08 09:48:05.683685909 +0000 UTC m=+0.019309811 image pull c8568f914cd25b2062c44e9f79f9c18da6e3b85fe0c47a12a2191c61426c2b19 quay.io/prometheus/alertmanager:v0.25.0
Oct  8 05:48:05 np0005475493 podman[103251]: 2025-10-08 09:48:05.787268813 +0000 UTC m=+0.122892735 container create 57cd7d8d326b1f9e36dd1f190536bf2c96f23a4f9f3c9c25505de6fd35d06f06 (image=quay.io/prometheus/alertmanager:v0.25.0, name=festive_joliot, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  8 05:48:05 np0005475493 systemd[1]: Started libpod-conmon-57cd7d8d326b1f9e36dd1f190536bf2c96f23a4f9f3c9c25505de6fd35d06f06.scope.
Oct  8 05:48:05 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:48:05 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b644418705b553205884c22cb0a94d35ba92b050be5cd432f98bcc2cd210ee72/merged/alertmanager supports timestamps until 2038 (0x7fffffff)
Oct  8 05:48:05 np0005475493 systemd[1]: libpod-conmon-36eaad74f53ab0375a4524501bc18d21df4de75102e75385ed57269f755cc2ce.scope: Deactivated successfully.
Oct  8 05:48:05 np0005475493 podman[103251]: 2025-10-08 09:48:05.917166345 +0000 UTC m=+0.252790257 container init 57cd7d8d326b1f9e36dd1f190536bf2c96f23a4f9f3c9c25505de6fd35d06f06 (image=quay.io/prometheus/alertmanager:v0.25.0, name=festive_joliot, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  8 05:48:05 np0005475493 podman[103251]: 2025-10-08 09:48:05.923388034 +0000 UTC m=+0.259011916 container start 57cd7d8d326b1f9e36dd1f190536bf2c96f23a4f9f3c9c25505de6fd35d06f06 (image=quay.io/prometheus/alertmanager:v0.25.0, name=festive_joliot, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  8 05:48:05 np0005475493 festive_joliot[103269]: 65534 65534
Oct  8 05:48:05 np0005475493 systemd[1]: libpod-57cd7d8d326b1f9e36dd1f190536bf2c96f23a4f9f3c9c25505de6fd35d06f06.scope: Deactivated successfully.
Oct  8 05:48:05 np0005475493 conmon[103269]: conmon 57cd7d8d326b1f9e36dd <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-57cd7d8d326b1f9e36dd1f190536bf2c96f23a4f9f3c9c25505de6fd35d06f06.scope/container/memory.events
Oct  8 05:48:05 np0005475493 podman[103251]: 2025-10-08 09:48:05.963512923 +0000 UTC m=+0.299136805 container attach 57cd7d8d326b1f9e36dd1f190536bf2c96f23a4f9f3c9c25505de6fd35d06f06 (image=quay.io/prometheus/alertmanager:v0.25.0, name=festive_joliot, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  8 05:48:05 np0005475493 podman[103251]: 2025-10-08 09:48:05.964633852 +0000 UTC m=+0.300257734 container died 57cd7d8d326b1f9e36dd1f190536bf2c96f23a4f9f3c9c25505de6fd35d06f06 (image=quay.io/prometheus/alertmanager:v0.25.0, name=festive_joliot, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  8 05:48:06 np0005475493 systemd[1]: var-lib-containers-storage-overlay-b644418705b553205884c22cb0a94d35ba92b050be5cd432f98bcc2cd210ee72-merged.mount: Deactivated successfully.
Oct  8 05:48:06 np0005475493 podman[103251]: 2025-10-08 09:48:06.176459207 +0000 UTC m=+0.512083089 container remove 57cd7d8d326b1f9e36dd1f190536bf2c96f23a4f9f3c9c25505de6fd35d06f06 (image=quay.io/prometheus/alertmanager:v0.25.0, name=festive_joliot, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  8 05:48:06 np0005475493 podman[103251]: 2025-10-08 09:48:06.188935284 +0000 UTC m=+0.524559166 volume remove a2a0a9d8a3f4496418a0bb6851aef58b5aa2d7d827aa03106d2e326dfe9b006d
Oct  8 05:48:06 np0005475493 systemd[1]: libpod-conmon-57cd7d8d326b1f9e36dd1f190536bf2c96f23a4f9f3c9c25505de6fd35d06f06.scope: Deactivated successfully.
Oct  8 05:48:06 np0005475493 systemd[1]: Stopping Ceph alertmanager.compute-0 for 787292cc-8154-50c4-9e00-e9be3e817149...
Oct  8 05:48:06 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 12.6 scrub starts
Oct  8 05:48:06 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 12.6 scrub ok
Oct  8 05:48:06 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[97382]: ts=2025-10-08T09:48:06.403Z caller=main.go:583 level=info msg="Received SIGTERM, exiting gracefully..."
Oct  8 05:48:06 np0005475493 podman[103319]: 2025-10-08 09:48:06.424270017 +0000 UTC m=+0.061698290 container died 8d342ba820880b4150da0520ca2c2bd15e2dae173214d906ae46d290c18c1a1e (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  8 05:48:06 np0005475493 systemd[1]: var-lib-containers-storage-overlay-56ce96f5b36afca03959d3dd28785acc44bc98ac7848532a544c80c3ee2cbbf3-merged.mount: Deactivated successfully.
Oct  8 05:48:06 np0005475493 podman[103319]: 2025-10-08 09:48:06.551553262 +0000 UTC m=+0.188981535 container remove 8d342ba820880b4150da0520ca2c2bd15e2dae173214d906ae46d290c18c1a1e (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  8 05:48:06 np0005475493 podman[103319]: 2025-10-08 09:48:06.565814934 +0000 UTC m=+0.203243217 volume remove 00310bf376a0b175ca8d85fb11d168f2f95f64f3756abaadb6e57846efdbc0ea
Oct  8 05:48:06 np0005475493 bash[103319]: ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0
Oct  8 05:48:06 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e81 do_prune osdmap full prune enabled
Oct  8 05:48:06 np0005475493 systemd[1]: ceph-787292cc-8154-50c4-9e00-e9be3e817149@alertmanager.compute-0.service: Deactivated successfully.
Oct  8 05:48:06 np0005475493 systemd[1]: Stopped Ceph alertmanager.compute-0 for 787292cc-8154-50c4-9e00-e9be3e817149.
Oct  8 05:48:06 np0005475493 systemd[1]: ceph-787292cc-8154-50c4-9e00-e9be3e817149@alertmanager.compute-0.service: Consumed 1.001s CPU time.
Oct  8 05:48:06 np0005475493 systemd[1]: Starting Ceph alertmanager.compute-0 for 787292cc-8154-50c4-9e00-e9be3e817149...
Oct  8 05:48:06 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e82 e82: 3 total, 3 up, 3 in
Oct  8 05:48:06 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e82: 3 total, 3 up, 3 in
Oct  8 05:48:06 np0005475493 podman[103424]: 2025-10-08 09:48:06.900861002 +0000 UTC m=+0.063785883 volume create 4bbbf489bb89a0d856f47e13e48dca9902149cd60d9cee6aaa7ca7a294835ad4
Oct  8 05:48:06 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:48:06 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6600003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:48:06 np0005475493 podman[103424]: 2025-10-08 09:48:06.943251199 +0000 UTC m=+0.106176090 container create feb968c21a5fda3cd2e7b86379d5576dbe3dadfd4cb62b92318e18ddbaaafb75 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  8 05:48:06 np0005475493 podman[103424]: 2025-10-08 09:48:06.864767685 +0000 UTC m=+0.027692626 image pull c8568f914cd25b2062c44e9f79f9c18da6e3b85fe0c47a12a2191c61426c2b19 quay.io/prometheus/alertmanager:v0.25.0
Oct  8 05:48:06 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:48:06 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:48:06 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:48:06.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:48:07 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5364c9169be9e454626d1a65d154e138f0d7667590bffb08425ce0bdca000223/merged/alertmanager supports timestamps until 2038 (0x7fffffff)
Oct  8 05:48:07 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5364c9169be9e454626d1a65d154e138f0d7667590bffb08425ce0bdca000223/merged/etc/alertmanager supports timestamps until 2038 (0x7fffffff)
Oct  8 05:48:07 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:48:07 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000026s ======
Oct  8 05:48:07 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:48:07.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  8 05:48:07 np0005475493 podman[103424]: 2025-10-08 09:48:07.108301956 +0000 UTC m=+0.271226867 container init feb968c21a5fda3cd2e7b86379d5576dbe3dadfd4cb62b92318e18ddbaaafb75 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  8 05:48:07 np0005475493 podman[103424]: 2025-10-08 09:48:07.113552189 +0000 UTC m=+0.276477080 container start feb968c21a5fda3cd2e7b86379d5576dbe3dadfd4cb62b92318e18ddbaaafb75 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  8 05:48:07 np0005475493 bash[103424]: feb968c21a5fda3cd2e7b86379d5576dbe3dadfd4cb62b92318e18ddbaaafb75
Oct  8 05:48:07 np0005475493 systemd[1]: Started Ceph alertmanager.compute-0 for 787292cc-8154-50c4-9e00-e9be3e817149.
Oct  8 05:48:07 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:48:07.145Z caller=main.go:240 level=info msg="Starting Alertmanager" version="(version=0.25.0, branch=HEAD, revision=258fab7cdd551f2cf251ed0348f0ad7289aee789)"
Oct  8 05:48:07 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:48:07.145Z caller=main.go:241 level=info build_context="(go=go1.19.4, user=root@abe866dd5717, date=20221222-14:51:36)"
Oct  8 05:48:07 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:48:07.152Z caller=cluster.go:185 level=info component=cluster msg="setting advertise address explicitly" addr=192.168.122.100 port=9094
Oct  8 05:48:07 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:48:07.154Z caller=cluster.go:681 level=info component=cluster msg="Waiting for gossip to settle..." interval=2s
Oct  8 05:48:07 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  8 05:48:07 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:48:07.191Z caller=coordinator.go:113 level=info component=configuration msg="Loading configuration file" file=/etc/alertmanager/alertmanager.yml
Oct  8 05:48:07 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:48:07.192Z caller=coordinator.go:126 level=info component=configuration msg="Completed loading of configuration file" file=/etc/alertmanager/alertmanager.yml
Oct  8 05:48:07 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:48:07 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  8 05:48:07 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:48:07.196Z caller=tls_config.go:232 level=info msg="Listening on" address=192.168.122.100:9093
Oct  8 05:48:07 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:48:07.196Z caller=tls_config.go:235 level=info msg="TLS is disabled." http2=false address=192.168.122.100:9093
Oct  8 05:48:07 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:48:07 np0005475493 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Reconfiguring grafana.compute-0 (dependencies changed)...
Oct  8 05:48:07 np0005475493 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Reconfiguring grafana.compute-0 (dependencies changed)...
Oct  8 05:48:07 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:48:07 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f66280025c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:48:07 np0005475493 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Reconfiguring daemon grafana.compute-0 on compute-0
Oct  8 05:48:07 np0005475493 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Reconfiguring daemon grafana.compute-0 on compute-0
Oct  8 05:48:07 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 9.c scrub starts
Oct  8 05:48:07 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 9.c scrub ok
Oct  8 05:48:07 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:48:07 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f66180038f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:48:07 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v25: 353 pgs: 2 peering, 2 remapped+peering, 349 active+clean; 455 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 53 B/s, 2 objects/s recovering
Oct  8 05:48:07 np0005475493 podman[103528]: 2025-10-08 09:48:07.82642008 +0000 UTC m=+0.047050207 container create fdd2bd8aa4df6721cd550619c3b54f13dd5a06e4f84cc02a3ea3adb2ff24a00a (image=quay.io/ceph/grafana:10.4.0, name=youthful_rhodes, maintainer=Grafana Labs <hello@grafana.com>)
Oct  8 05:48:07 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:48:07 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:48:07 np0005475493 podman[103528]: 2025-10-08 09:48:07.801777894 +0000 UTC m=+0.022408051 image pull c8b91775d855b99270fc5d22f3c6737e8cca01ef4c25c8b0362295e0746fa39b quay.io/ceph/grafana:10.4.0
Oct  8 05:48:07 np0005475493 systemd[1]: Started libpod-conmon-fdd2bd8aa4df6721cd550619c3b54f13dd5a06e4f84cc02a3ea3adb2ff24a00a.scope.
Oct  8 05:48:07 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:48:07 np0005475493 podman[103528]: 2025-10-08 09:48:07.987585667 +0000 UTC m=+0.208215794 container init fdd2bd8aa4df6721cd550619c3b54f13dd5a06e4f84cc02a3ea3adb2ff24a00a (image=quay.io/ceph/grafana:10.4.0, name=youthful_rhodes, maintainer=Grafana Labs <hello@grafana.com>)
Oct  8 05:48:07 np0005475493 podman[103528]: 2025-10-08 09:48:07.995191781 +0000 UTC m=+0.215821948 container start fdd2bd8aa4df6721cd550619c3b54f13dd5a06e4f84cc02a3ea3adb2ff24a00a (image=quay.io/ceph/grafana:10.4.0, name=youthful_rhodes, maintainer=Grafana Labs <hello@grafana.com>)
Oct  8 05:48:07 np0005475493 youthful_rhodes[103545]: 472 0
Oct  8 05:48:07 np0005475493 systemd[1]: libpod-fdd2bd8aa4df6721cd550619c3b54f13dd5a06e4f84cc02a3ea3adb2ff24a00a.scope: Deactivated successfully.
Oct  8 05:48:07 np0005475493 conmon[103545]: conmon fdd2bd8aa4df6721cd55 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-fdd2bd8aa4df6721cd550619c3b54f13dd5a06e4f84cc02a3ea3adb2ff24a00a.scope/container/memory.events
Oct  8 05:48:08 np0005475493 podman[103528]: 2025-10-08 09:48:08.020997667 +0000 UTC m=+0.241627794 container attach fdd2bd8aa4df6721cd550619c3b54f13dd5a06e4f84cc02a3ea3adb2ff24a00a (image=quay.io/ceph/grafana:10.4.0, name=youthful_rhodes, maintainer=Grafana Labs <hello@grafana.com>)
Oct  8 05:48:08 np0005475493 podman[103528]: 2025-10-08 09:48:08.021609962 +0000 UTC m=+0.242240079 container died fdd2bd8aa4df6721cd550619c3b54f13dd5a06e4f84cc02a3ea3adb2ff24a00a (image=quay.io/ceph/grafana:10.4.0, name=youthful_rhodes, maintainer=Grafana Labs <hello@grafana.com>)
Oct  8 05:48:08 np0005475493 systemd[1]: var-lib-containers-storage-overlay-78727395e0c92f3747d7112027ec7fcd8c18678d220f2c33ae783deccacfab56-merged.mount: Deactivated successfully.
Oct  8 05:48:08 np0005475493 podman[103528]: 2025-10-08 09:48:08.103547255 +0000 UTC m=+0.324177382 container remove fdd2bd8aa4df6721cd550619c3b54f13dd5a06e4f84cc02a3ea3adb2ff24a00a (image=quay.io/ceph/grafana:10.4.0, name=youthful_rhodes, maintainer=Grafana Labs <hello@grafana.com>)
Oct  8 05:48:08 np0005475493 systemd[1]: libpod-conmon-fdd2bd8aa4df6721cd550619c3b54f13dd5a06e4f84cc02a3ea3adb2ff24a00a.scope: Deactivated successfully.
Oct  8 05:48:08 np0005475493 podman[103563]: 2025-10-08 09:48:08.160647877 +0000 UTC m=+0.039358902 container create d9d7995ff5eb2aac8cf415720b6cbb08640e8a85a7fa26ecd9566d4e7609bb03 (image=quay.io/ceph/grafana:10.4.0, name=gifted_meninsky, maintainer=Grafana Labs <hello@grafana.com>)
Oct  8 05:48:08 np0005475493 systemd[1]: Started libpod-conmon-d9d7995ff5eb2aac8cf415720b6cbb08640e8a85a7fa26ecd9566d4e7609bb03.scope.
Oct  8 05:48:08 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:48:08 np0005475493 podman[103563]: 2025-10-08 09:48:08.144038925 +0000 UTC m=+0.022749980 image pull c8b91775d855b99270fc5d22f3c6737e8cca01ef4c25c8b0362295e0746fa39b quay.io/ceph/grafana:10.4.0
Oct  8 05:48:08 np0005475493 podman[103563]: 2025-10-08 09:48:08.244136959 +0000 UTC m=+0.122848014 container init d9d7995ff5eb2aac8cf415720b6cbb08640e8a85a7fa26ecd9566d4e7609bb03 (image=quay.io/ceph/grafana:10.4.0, name=gifted_meninsky, maintainer=Grafana Labs <hello@grafana.com>)
Oct  8 05:48:08 np0005475493 podman[103563]: 2025-10-08 09:48:08.248776927 +0000 UTC m=+0.127487962 container start d9d7995ff5eb2aac8cf415720b6cbb08640e8a85a7fa26ecd9566d4e7609bb03 (image=quay.io/ceph/grafana:10.4.0, name=gifted_meninsky, maintainer=Grafana Labs <hello@grafana.com>)
Oct  8 05:48:08 np0005475493 gifted_meninsky[103579]: 472 0
Oct  8 05:48:08 np0005475493 systemd[1]: libpod-d9d7995ff5eb2aac8cf415720b6cbb08640e8a85a7fa26ecd9566d4e7609bb03.scope: Deactivated successfully.
Oct  8 05:48:08 np0005475493 podman[103563]: 2025-10-08 09:48:08.25479255 +0000 UTC m=+0.133503585 container attach d9d7995ff5eb2aac8cf415720b6cbb08640e8a85a7fa26ecd9566d4e7609bb03 (image=quay.io/ceph/grafana:10.4.0, name=gifted_meninsky, maintainer=Grafana Labs <hello@grafana.com>)
Oct  8 05:48:08 np0005475493 podman[103563]: 2025-10-08 09:48:08.255073757 +0000 UTC m=+0.133784792 container died d9d7995ff5eb2aac8cf415720b6cbb08640e8a85a7fa26ecd9566d4e7609bb03 (image=quay.io/ceph/grafana:10.4.0, name=gifted_meninsky, maintainer=Grafana Labs <hello@grafana.com>)
Oct  8 05:48:08 np0005475493 systemd[1]: var-lib-containers-storage-overlay-8c00bf8b9679365c5003ecbde959dfe07a26c32c38d769c5aa65b2df66692bb8-merged.mount: Deactivated successfully.
Oct  8 05:48:08 np0005475493 podman[103563]: 2025-10-08 09:48:08.304318629 +0000 UTC m=+0.183029664 container remove d9d7995ff5eb2aac8cf415720b6cbb08640e8a85a7fa26ecd9566d4e7609bb03 (image=quay.io/ceph/grafana:10.4.0, name=gifted_meninsky, maintainer=Grafana Labs <hello@grafana.com>)
Oct  8 05:48:08 np0005475493 systemd[1]: libpod-conmon-d9d7995ff5eb2aac8cf415720b6cbb08640e8a85a7fa26ecd9566d4e7609bb03.scope: Deactivated successfully.
Oct  8 05:48:08 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 9.a scrub starts
Oct  8 05:48:08 np0005475493 systemd[1]: Stopping Ceph grafana.compute-0 for 787292cc-8154-50c4-9e00-e9be3e817149...
Oct  8 05:48:08 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e82 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  8 05:48:08 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 9.a scrub ok
Oct  8 05:48:08 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=server t=2025-10-08T09:48:08.540284677Z level=info msg="Shutdown started" reason="System signal: terminated"
Oct  8 05:48:08 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=ticker t=2025-10-08T09:48:08.54038647Z level=info msg=stopped last_tick=2025-10-08T09:48:00Z
Oct  8 05:48:08 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=grafana-apiserver t=2025-10-08T09:48:08.540631576Z level=info msg="StorageObjectCountTracker pruner is exiting"
Oct  8 05:48:08 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=tracing t=2025-10-08T09:48:08.540701078Z level=info msg="Closing tracing"
Oct  8 05:48:08 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[98036]: logger=sqlstore.transactions t=2025-10-08T09:48:08.55254803Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 code="database is locked"
Oct  8 05:48:08 np0005475493 podman[103625]: 2025-10-08 09:48:08.571015089 +0000 UTC m=+0.080691502 container died 56b2a7b6ecfb4c45a674b20b3f1d60f53e5d4fac0af3b6685d98e6cab4ce59f5 (image=quay.io/ceph/grafana:10.4.0, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Oct  8 05:48:08 np0005475493 systemd[1]: var-lib-containers-storage-overlay-299d1132a49e90b1d598865e6a36f1a7dd2aea77757b20cf4893ea1efcfcb275-merged.mount: Deactivated successfully.
Oct  8 05:48:08 np0005475493 podman[103625]: 2025-10-08 09:48:08.723485405 +0000 UTC m=+0.233161818 container remove 56b2a7b6ecfb4c45a674b20b3f1d60f53e5d4fac0af3b6685d98e6cab4ce59f5 (image=quay.io/ceph/grafana:10.4.0, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Oct  8 05:48:08 np0005475493 bash[103625]: ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0
Oct  8 05:48:08 np0005475493 systemd[1]: ceph-787292cc-8154-50c4-9e00-e9be3e817149@grafana.compute-0.service: Deactivated successfully.
Oct  8 05:48:08 np0005475493 systemd[1]: Stopped Ceph grafana.compute-0 for 787292cc-8154-50c4-9e00-e9be3e817149.
Oct  8 05:48:08 np0005475493 systemd[1]: ceph-787292cc-8154-50c4-9e00-e9be3e817149@grafana.compute-0.service: Consumed 4.168s CPU time.
Oct  8 05:48:08 np0005475493 systemd[1]: Starting Ceph grafana.compute-0 for 787292cc-8154-50c4-9e00-e9be3e817149...
Oct  8 05:48:08 np0005475493 ceph-mon[73572]: Reconfiguring grafana.compute-0 (dependencies changed)...
Oct  8 05:48:08 np0005475493 ceph-mon[73572]: Reconfiguring daemon grafana.compute-0 on compute-0
Oct  8 05:48:08 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:48:08 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6608001bd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:48:08 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:48:08 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:48:08 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:48:08.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:48:09 np0005475493 podman[103722]: 2025-10-08 09:48:09.028329524 +0000 UTC m=+0.041446015 container create 73efcf403aa6fcb4b04e1ef714b7bf4169b576e4a4c1b34cdc0b458f62db65fd (image=quay.io/ceph/grafana:10.4.0, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Oct  8 05:48:09 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/efa13ae157c185f497141d0b5b68c767226f216566a5a7abb57a48ff9ac4fad8/merged/etc/grafana/certs supports timestamps until 2038 (0x7fffffff)
Oct  8 05:48:09 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/efa13ae157c185f497141d0b5b68c767226f216566a5a7abb57a48ff9ac4fad8/merged/etc/grafana/grafana.ini supports timestamps until 2038 (0x7fffffff)
Oct  8 05:48:09 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/efa13ae157c185f497141d0b5b68c767226f216566a5a7abb57a48ff9ac4fad8/merged/etc/grafana/provisioning/datasources supports timestamps until 2038 (0x7fffffff)
Oct  8 05:48:09 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/efa13ae157c185f497141d0b5b68c767226f216566a5a7abb57a48ff9ac4fad8/merged/etc/grafana/provisioning/dashboards supports timestamps until 2038 (0x7fffffff)
Oct  8 05:48:09 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/efa13ae157c185f497141d0b5b68c767226f216566a5a7abb57a48ff9ac4fad8/merged/var/lib/grafana/grafana.db supports timestamps until 2038 (0x7fffffff)
Oct  8 05:48:09 np0005475493 podman[103722]: 2025-10-08 09:48:09.080205933 +0000 UTC m=+0.093322454 container init 73efcf403aa6fcb4b04e1ef714b7bf4169b576e4a4c1b34cdc0b458f62db65fd (image=quay.io/ceph/grafana:10.4.0, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Oct  8 05:48:09 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:48:09 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:48:09 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:48:09.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:48:09 np0005475493 podman[103722]: 2025-10-08 09:48:09.087863877 +0000 UTC m=+0.100980358 container start 73efcf403aa6fcb4b04e1ef714b7bf4169b576e4a4c1b34cdc0b458f62db65fd (image=quay.io/ceph/grafana:10.4.0, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Oct  8 05:48:09 np0005475493 bash[103722]: 73efcf403aa6fcb4b04e1ef714b7bf4169b576e4a4c1b34cdc0b458f62db65fd
Oct  8 05:48:09 np0005475493 podman[103722]: 2025-10-08 09:48:09.008655874 +0000 UTC m=+0.021772385 image pull c8b91775d855b99270fc5d22f3c6737e8cca01ef4c25c8b0362295e0746fa39b quay.io/ceph/grafana:10.4.0
Oct  8 05:48:09 np0005475493 systemd[1]: Started Ceph grafana.compute-0 for 787292cc-8154-50c4-9e00-e9be3e817149.
Oct  8 05:48:09 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  8 05:48:09 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:48:09.154Z caller=cluster.go:706 level=info component=cluster msg="gossip not settled" polls=0 before=0 now=1 elapsed=2.000175636s
Oct  8 05:48:09 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[103738]: logger=settings t=2025-10-08T09:48:09.240225061Z level=info msg="Starting Grafana" version=10.4.0 commit=03f502a94d17f7dc4e6c34acdf8428aedd986e4c branch=HEAD compiled=2025-10-08T09:48:09Z
Oct  8 05:48:09 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[103738]: logger=settings t=2025-10-08T09:48:09.240519408Z level=info msg="Config loaded from" file=/usr/share/grafana/conf/defaults.ini
Oct  8 05:48:09 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[103738]: logger=settings t=2025-10-08T09:48:09.240531859Z level=info msg="Config loaded from" file=/etc/grafana/grafana.ini
Oct  8 05:48:09 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[103738]: logger=settings t=2025-10-08T09:48:09.240536139Z level=info msg="Config overridden from command line" arg="default.paths.data=/var/lib/grafana"
Oct  8 05:48:09 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[103738]: logger=settings t=2025-10-08T09:48:09.240539849Z level=info msg="Config overridden from command line" arg="default.paths.logs=/var/log/grafana"
Oct  8 05:48:09 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[103738]: logger=settings t=2025-10-08T09:48:09.240543649Z level=info msg="Config overridden from command line" arg="default.paths.plugins=/var/lib/grafana/plugins"
Oct  8 05:48:09 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[103738]: logger=settings t=2025-10-08T09:48:09.240548189Z level=info msg="Config overridden from command line" arg="default.paths.provisioning=/etc/grafana/provisioning"
Oct  8 05:48:09 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[103738]: logger=settings t=2025-10-08T09:48:09.240553129Z level=info msg="Config overridden from command line" arg="default.log.mode=console"
Oct  8 05:48:09 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[103738]: logger=settings t=2025-10-08T09:48:09.240559239Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_DATA=/var/lib/grafana"
Oct  8 05:48:09 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[103738]: logger=settings t=2025-10-08T09:48:09.240563559Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_LOGS=/var/log/grafana"
Oct  8 05:48:09 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[103738]: logger=settings t=2025-10-08T09:48:09.24059578Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PLUGINS=/var/lib/grafana/plugins"
Oct  8 05:48:09 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[103738]: logger=settings t=2025-10-08T09:48:09.2406031Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PROVISIONING=/etc/grafana/provisioning"
Oct  8 05:48:09 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[103738]: logger=settings t=2025-10-08T09:48:09.2406071Z level=info msg=Target target=[all]
Oct  8 05:48:09 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[103738]: logger=settings t=2025-10-08T09:48:09.240616051Z level=info msg="Path Home" path=/usr/share/grafana
Oct  8 05:48:09 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[103738]: logger=settings t=2025-10-08T09:48:09.240620751Z level=info msg="Path Data" path=/var/lib/grafana
Oct  8 05:48:09 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[103738]: logger=settings t=2025-10-08T09:48:09.240625011Z level=info msg="Path Logs" path=/var/log/grafana
Oct  8 05:48:09 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[103738]: logger=settings t=2025-10-08T09:48:09.240630091Z level=info msg="Path Plugins" path=/var/lib/grafana/plugins
Oct  8 05:48:09 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[103738]: logger=settings t=2025-10-08T09:48:09.240633951Z level=info msg="Path Provisioning" path=/etc/grafana/provisioning
Oct  8 05:48:09 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[103738]: logger=settings t=2025-10-08T09:48:09.240637651Z level=info msg="App mode production"
Oct  8 05:48:09 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[103738]: logger=sqlstore t=2025-10-08T09:48:09.240951689Z level=info msg="Connecting to DB" dbtype=sqlite3
Oct  8 05:48:09 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[103738]: logger=sqlstore t=2025-10-08T09:48:09.24096921Z level=warn msg="SQLite database file has broader permissions than it should" path=/var/lib/grafana/grafana.db mode=-rw-r--r-- expected=-rw-r-----
Oct  8 05:48:09 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[103738]: logger=migrator t=2025-10-08T09:48:09.241666367Z level=info msg="Starting DB migrations"
Oct  8 05:48:09 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[103738]: logger=migrator t=2025-10-08T09:48:09.262017984Z level=info msg="migrations completed" performed=0 skipped=547 duration=585.895µs
Oct  8 05:48:09 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[103738]: logger=sqlstore t=2025-10-08T09:48:09.26300259Z level=info msg="Created default organization"
Oct  8 05:48:09 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:48:09 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6600003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:48:09 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[103738]: logger=secrets t=2025-10-08T09:48:09.263556724Z level=info msg="Envelope encryption state" enabled=true currentprovider=secretKey.v1
Oct  8 05:48:09 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[103738]: logger=plugin.store t=2025-10-08T09:48:09.283725356Z level=info msg="Loading plugins..."
Oct  8 05:48:09 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[103738]: logger=local.finder t=2025-10-08T09:48:09.375415598Z level=warn msg="Skipping finding plugins as directory does not exist" path=/usr/share/grafana/plugins-bundled
Oct  8 05:48:09 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[103738]: logger=plugin.store t=2025-10-08T09:48:09.375447288Z level=info msg="Plugins loaded" count=55 duration=91.723102ms
Oct  8 05:48:09 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[103738]: logger=query_data t=2025-10-08T09:48:09.378442605Z level=info msg="Query Service initialization"
Oct  8 05:48:09 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[103738]: logger=live.push_http t=2025-10-08T09:48:09.381347488Z level=info msg="Live Push Gateway initialization"
Oct  8 05:48:09 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 9.0 deep-scrub starts
Oct  8 05:48:09 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[103738]: logger=ngalert.migration t=2025-10-08T09:48:09.38414702Z level=info msg=Starting
Oct  8 05:48:09 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:48:09 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  8 05:48:09 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[103738]: logger=ngalert.state.manager t=2025-10-08T09:48:09.401784697Z level=info msg="Running in alternative execution of Error/NoData mode"
Oct  8 05:48:09 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[103738]: logger=infra.usagestats.collector t=2025-10-08T09:48:09.403881371Z level=info msg="registering usage stat providers" usageStatsProvidersLen=2
Oct  8 05:48:09 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[103738]: logger=provisioning.datasources t=2025-10-08T09:48:09.406455357Z level=info msg="inserting datasource from configuration" name=Dashboard1 uid=P43CA22E17D0F9596
Oct  8 05:48:09 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 9.0 deep-scrub ok
Oct  8 05:48:09 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[103738]: logger=provisioning.alerting t=2025-10-08T09:48:09.434540301Z level=info msg="starting to provision alerting"
Oct  8 05:48:09 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[103738]: logger=provisioning.alerting t=2025-10-08T09:48:09.434575012Z level=info msg="finished to provision alerting"
Oct  8 05:48:09 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[103738]: logger=ngalert.state.manager t=2025-10-08T09:48:09.434842157Z level=info msg="Warming state cache for startup"
Oct  8 05:48:09 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[103738]: logger=ngalert.multiorg.alertmanager t=2025-10-08T09:48:09.435222737Z level=info msg="Starting MultiOrg Alertmanager"
Oct  8 05:48:09 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[103738]: logger=grafanaStorageLogger t=2025-10-08T09:48:09.435647308Z level=info msg="Storage starting"
Oct  8 05:48:09 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[103738]: logger=http.server t=2025-10-08T09:48:09.438285355Z level=info msg="HTTP Server TLS settings" MinTLSVersion=TLS1.2 configuredciphers=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA
Oct  8 05:48:09 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[103738]: logger=http.server t=2025-10-08T09:48:09.438632344Z level=info msg="HTTP Server Listen" address=192.168.122.100:3000 protocol=https subUrl= socket=
Oct  8 05:48:09 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:48:09 np0005475493 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Reconfiguring crash.compute-1 (monmap changed)...
Oct  8 05:48:09 np0005475493 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Reconfiguring crash.compute-1 (monmap changed)...
Oct  8 05:48:09 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Oct  8 05:48:09 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Oct  8 05:48:09 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  8 05:48:09 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  8 05:48:09 np0005475493 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.compute-1 on compute-1
Oct  8 05:48:09 np0005475493 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.compute-1 on compute-1
Oct  8 05:48:09 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[103738]: logger=provisioning.dashboard t=2025-10-08T09:48:09.47465635Z level=info msg="starting to provision dashboards"
Oct  8 05:48:09 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[103738]: logger=ngalert.state.manager t=2025-10-08T09:48:09.489635411Z level=info msg="State cache has been initialized" states=0 duration=54.784004ms
Oct  8 05:48:09 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[103738]: logger=ngalert.scheduler t=2025-10-08T09:48:09.489694782Z level=info msg="Starting scheduler" tickInterval=10s maxAttempts=1
Oct  8 05:48:09 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[103738]: logger=ticker t=2025-10-08T09:48:09.489782125Z level=info msg=starting first_tick=2025-10-08T09:48:10Z
Oct  8 05:48:09 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:48:09 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f66180038f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:48:09 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[103738]: logger=provisioning.dashboard t=2025-10-08T09:48:09.497518561Z level=info msg="finished to provision dashboards"
Oct  8 05:48:09 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[103738]: logger=plugins.update.checker t=2025-10-08T09:48:09.51164422Z level=info msg="Update check succeeded" duration=76.076824ms
Oct  8 05:48:09 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[103738]: logger=grafana.update.checker t=2025-10-08T09:48:09.511882036Z level=info msg="Update check succeeded" duration=75.661673ms
Oct  8 05:48:09 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v26: 353 pgs: 353 active+clean; 455 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 28 B/s, 0 objects/s recovering
Oct  8 05:48:09 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"} v 0)
Oct  8 05:48:09 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]: dispatch
Oct  8 05:48:09 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[103738]: logger=grafana-apiserver t=2025-10-08T09:48:09.670318004Z level=info msg="Adding GroupVersion playlist.grafana.app v0alpha1 to ResourceManager"
Oct  8 05:48:09 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[103738]: logger=grafana-apiserver t=2025-10-08T09:48:09.670717074Z level=info msg="Adding GroupVersion featuretoggle.grafana.app v0alpha1 to ResourceManager"
Oct  8 05:48:10 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:48:10 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:48:10 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Oct  8 05:48:10 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]: dispatch
Oct  8 05:48:10 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 9.1 scrub starts
Oct  8 05:48:10 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e82 do_prune osdmap full prune enabled
Oct  8 05:48:10 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 9.1 scrub ok
Oct  8 05:48:10 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Oct  8 05:48:10 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e83 e83: 3 total, 3 up, 3 in
Oct  8 05:48:10 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e83: 3 total, 3 up, 3 in
Oct  8 05:48:10 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 83 pg[9.a( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=9 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=83 pruub=11.736685753s) [0] r=-1 lpr=83 pi=[54,83)/1 crt=45'1018 lcod 0'0 mlcod 0'0 active pruub 230.315979004s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:48:10 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 83 pg[9.a( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=9 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=83 pruub=11.736638069s) [0] r=-1 lpr=83 pi=[54,83)/1 crt=45'1018 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 230.315979004s@ mbc={}] state<Start>: transitioning to Stray
Oct  8 05:48:10 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 83 pg[9.1a( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=5 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=83 pruub=11.738619804s) [0] r=-1 lpr=83 pi=[54,83)/1 crt=45'1018 lcod 0'0 mlcod 0'0 active pruub 230.318161011s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:48:10 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 83 pg[9.1a( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=5 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=83 pruub=11.738594055s) [0] r=-1 lpr=83 pi=[54,83)/1 crt=45'1018 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 230.318161011s@ mbc={}] state<Start>: transitioning to Stray
Oct  8 05:48:10 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct  8 05:48:10 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:48:10 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct  8 05:48:10 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:48:10 np0005475493 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Reconfiguring osd.0 (monmap changed)...
Oct  8 05:48:10 np0005475493 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Reconfiguring osd.0 (monmap changed)...
Oct  8 05:48:10 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "osd.0"} v 0)
Oct  8 05:48:10 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Oct  8 05:48:10 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  8 05:48:10 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  8 05:48:10 np0005475493 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.0 on compute-1
Oct  8 05:48:10 np0005475493 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.0 on compute-1
Oct  8 05:48:10 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:48:10 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f66280025c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:48:10 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:48:10 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:48:10 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:48:10.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:48:11 np0005475493 ceph-mon[73572]: Reconfiguring crash.compute-1 (monmap changed)...
Oct  8 05:48:11 np0005475493 ceph-mon[73572]: Reconfiguring daemon crash.compute-1 on compute-1
Oct  8 05:48:11 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Oct  8 05:48:11 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:48:11 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:48:11 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Oct  8 05:48:11 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:48:11 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000026s ======
Oct  8 05:48:11 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:48:11.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  8 05:48:11 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:48:11 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6608003ad0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:48:11 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 9.4 scrub starts
Oct  8 05:48:11 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 9.4 scrub ok
Oct  8 05:48:11 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:48:11 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f80012a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:48:11 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e83 do_prune osdmap full prune enabled
Oct  8 05:48:11 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e84 e84: 3 total, 3 up, 3 in
Oct  8 05:48:11 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e84: 3 total, 3 up, 3 in
Oct  8 05:48:11 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 84 pg[9.a( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=9 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=84) [0]/[1] r=0 lpr=84 pi=[54,84)/1 crt=45'1018 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:48:11 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 84 pg[9.a( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=9 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=84) [0]/[1] r=0 lpr=84 pi=[54,84)/1 crt=45'1018 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  8 05:48:11 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 84 pg[9.1a( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=5 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=84) [0]/[1] r=0 lpr=84 pi=[54,84)/1 crt=45'1018 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:48:11 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 84 pg[9.1a( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=5 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=84) [0]/[1] r=0 lpr=84 pi=[54,84)/1 crt=45'1018 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  8 05:48:11 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct  8 05:48:11 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v29: 353 pgs: 353 active+clean; 455 KiB data, 125 MiB used, 60 GiB / 60 GiB avail
Oct  8 05:48:11 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"} v 0)
Oct  8 05:48:11 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]: dispatch
Oct  8 05:48:11 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:48:11 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct  8 05:48:11 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:48:11 np0005475493 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-1 (monmap changed)...
Oct  8 05:48:11 np0005475493 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-1 (monmap changed)...
Oct  8 05:48:11 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Oct  8 05:48:11 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Oct  8 05:48:11 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Oct  8 05:48:11 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Oct  8 05:48:11 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  8 05:48:11 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  8 05:48:11 np0005475493 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-1 on compute-1
Oct  8 05:48:11 np0005475493 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-1 on compute-1
Oct  8 05:48:12 np0005475493 ceph-mon[73572]: Reconfiguring osd.0 (monmap changed)...
Oct  8 05:48:12 np0005475493 ceph-mon[73572]: Reconfiguring daemon osd.0 on compute-1
Oct  8 05:48:12 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]: dispatch
Oct  8 05:48:12 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:48:12 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:48:12 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Oct  8 05:48:12 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct  8 05:48:12 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:48:12 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct  8 05:48:12 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:48:12 np0005475493 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Reconfiguring node-exporter.compute-1 (unknown last config time)...
Oct  8 05:48:12 np0005475493 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Reconfiguring node-exporter.compute-1 (unknown last config time)...
Oct  8 05:48:12 np0005475493 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Reconfiguring daemon node-exporter.compute-1 on compute-1
Oct  8 05:48:12 np0005475493 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Reconfiguring daemon node-exporter.compute-1 on compute-1
Oct  8 05:48:12 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 9.1c scrub starts
Oct  8 05:48:12 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 9.1c scrub ok
Oct  8 05:48:12 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e84 do_prune osdmap full prune enabled
Oct  8 05:48:12 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Oct  8 05:48:12 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e85 e85: 3 total, 3 up, 3 in
Oct  8 05:48:12 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e85: 3 total, 3 up, 3 in
Oct  8 05:48:12 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 85 pg[9.1a( v 45'1018 (0'0,45'1018] local-lis/les=84/85 n=5 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=84) [0]/[1] async=[0] r=0 lpr=84 pi=[54,84)/1 crt=45'1018 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:48:12 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 85 pg[9.a( v 45'1018 (0'0,45'1018] local-lis/les=84/85 n=9 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=84) [0]/[1] async=[0] r=0 lpr=84 pi=[54,84)/1 crt=45'1018 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:48:12 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:48:12 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f66180038f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:48:12 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:48:12 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:48:12 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:48:12.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:48:13 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:48:13 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:48:13 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:48:13.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:48:13 np0005475493 ceph-mon[73572]: Reconfiguring mon.compute-1 (monmap changed)...
Oct  8 05:48:13 np0005475493 ceph-mon[73572]: Reconfiguring daemon mon.compute-1 on compute-1
Oct  8 05:48:13 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:48:13 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:48:13 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Oct  8 05:48:13 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:48:13 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f66280025c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:48:13 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e85 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  8 05:48:13 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e85 do_prune osdmap full prune enabled
Oct  8 05:48:13 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e86 e86: 3 total, 3 up, 3 in
Oct  8 05:48:13 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e86: 3 total, 3 up, 3 in
Oct  8 05:48:13 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct  8 05:48:13 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 86 pg[9.1a( v 45'1018 (0'0,45'1018] local-lis/les=84/85 n=5 ec=54/38 lis/c=84/54 les/c/f=85/56/0 sis=86 pruub=15.144413948s) [0] async=[0] r=-1 lpr=86 pi=[54,86)/1 crt=45'1018 lcod 0'0 mlcod 0'0 active pruub 236.623565674s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:48:13 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 86 pg[9.1a( v 45'1018 (0'0,45'1018] local-lis/les=84/85 n=5 ec=54/38 lis/c=84/54 les/c/f=85/56/0 sis=86 pruub=15.144309998s) [0] r=-1 lpr=86 pi=[54,86)/1 crt=45'1018 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 236.623565674s@ mbc={}] state<Start>: transitioning to Stray
Oct  8 05:48:13 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 86 pg[9.a( v 45'1018 (0'0,45'1018] local-lis/les=84/85 n=9 ec=54/38 lis/c=84/54 les/c/f=85/56/0 sis=86 pruub=15.144071579s) [0] async=[0] r=-1 lpr=86 pi=[54,86)/1 crt=45'1018 lcod 0'0 mlcod 0'0 active pruub 236.623596191s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:48:13 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 86 pg[9.a( v 45'1018 (0'0,45'1018] local-lis/les=84/85 n=9 ec=54/38 lis/c=84/54 les/c/f=85/56/0 sis=86 pruub=15.143853188s) [0] r=-1 lpr=86 pi=[54,86)/1 crt=45'1018 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 236.623596191s@ mbc={}] state<Start>: transitioning to Stray
Oct  8 05:48:13 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:48:13 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct  8 05:48:13 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:48:13 np0005475493 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-2 (monmap changed)...
Oct  8 05:48:13 np0005475493 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-2 (monmap changed)...
Oct  8 05:48:13 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Oct  8 05:48:13 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Oct  8 05:48:13 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Oct  8 05:48:13 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Oct  8 05:48:13 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 9.12 scrub starts
Oct  8 05:48:13 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  8 05:48:13 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  8 05:48:13 np0005475493 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-2 on compute-2
Oct  8 05:48:13 np0005475493 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-2 on compute-2
Oct  8 05:48:13 np0005475493 ceph-osd[81751]: log_channel(cluster) log [DBG] : 9.12 scrub ok
Oct  8 05:48:13 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:48:13 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6608003ad0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:48:13 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v32: 353 pgs: 2 peering, 351 active+clean; 455 KiB data, 126 MiB used, 60 GiB / 60 GiB avail; 27 B/s, 0 objects/s recovering
Oct  8 05:48:14 np0005475493 systemd-logind[798]: New session 40 of user zuul.
Oct  8 05:48:14 np0005475493 systemd[1]: Started Session 40 of User zuul.
Oct  8 05:48:14 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct  8 05:48:14 np0005475493 ceph-mon[73572]: Reconfiguring node-exporter.compute-1 (unknown last config time)...
Oct  8 05:48:14 np0005475493 ceph-mon[73572]: Reconfiguring daemon node-exporter.compute-1 on compute-1
Oct  8 05:48:14 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:48:14 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:48:14 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Oct  8 05:48:14 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:48:14 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct  8 05:48:14 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:48:14 np0005475493 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Reconfiguring mgr.compute-2.mtagwx (monmap changed)...
Oct  8 05:48:14 np0005475493 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Reconfiguring mgr.compute-2.mtagwx (monmap changed)...
Oct  8 05:48:14 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-2.mtagwx", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Oct  8 05:48:14 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.mtagwx", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Oct  8 05:48:14 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0)
Oct  8 05:48:14 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "mgr services"}]: dispatch
Oct  8 05:48:14 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  8 05:48:14 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  8 05:48:14 np0005475493 ceph-mgr[73869]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.compute-2.mtagwx on compute-2
Oct  8 05:48:14 np0005475493 ceph-mgr[73869]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.compute-2.mtagwx on compute-2
Oct  8 05:48:14 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e86 do_prune osdmap full prune enabled
Oct  8 05:48:14 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e87 e87: 3 total, 3 up, 3 in
Oct  8 05:48:14 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e87: 3 total, 3 up, 3 in
Oct  8 05:48:14 np0005475493 python3.9[103925]: ansible-ansible.legacy.ping Invoked with data=pong
Oct  8 05:48:14 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:48:14 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f8002180 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:48:14 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:48:14 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:48:14 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:48:14.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:48:15 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:48:15 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:48:15 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:48:15.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:48:15 np0005475493 ceph-mon[73572]: Reconfiguring mon.compute-2 (monmap changed)...
Oct  8 05:48:15 np0005475493 ceph-mon[73572]: Reconfiguring daemon mon.compute-2 on compute-2
Oct  8 05:48:15 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:48:15 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:48:15 np0005475493 ceph-mon[73572]: Reconfiguring mgr.compute-2.mtagwx (monmap changed)...
Oct  8 05:48:15 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.mtagwx", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Oct  8 05:48:15 np0005475493 ceph-mon[73572]: Reconfiguring daemon mgr.compute-2.mtagwx on compute-2
Oct  8 05:48:15 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:48:15 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f66180038f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:48:15 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:48:15 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f66280025c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:48:15 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct  8 05:48:15 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v34: 353 pgs: 2 peering, 351 active+clean; 455 KiB data, 126 MiB used, 60 GiB / 60 GiB avail; 26 B/s, 0 objects/s recovering
Oct  8 05:48:15 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:48:15 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct  8 05:48:15 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:09:48:15] "GET /metrics HTTP/1.1" 200 48278 "" "Prometheus/2.51.0"
Oct  8 05:48:15 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:09:48:15] "GET /metrics HTTP/1.1" 200 48278 "" "Prometheus/2.51.0"
Oct  8 05:48:15 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:48:15 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "dashboard get-alertmanager-api-host"} v 0)
Oct  8 05:48:15 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch
Oct  8 05:48:15 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch
Oct  8 05:48:15 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "dashboard get-grafana-api-url"} v 0)
Oct  8 05:48:15 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch
Oct  8 05:48:15 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch
Oct  8 05:48:15 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "dashboard set-grafana-api-url", "value": "https://192.168.122.100:3000"} v 0)
Oct  8 05:48:15 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://192.168.122.100:3000"}]: dispatch
Oct  8 05:48:15 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://192.168.122.100:3000"}]: dispatch
Oct  8 05:48:15 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/GRAFANA_API_URL}] v 0)
Oct  8 05:48:15 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:48:15 np0005475493 ceph-mgr[73869]: [prometheus INFO root] Restarting engine...
Oct  8 05:48:15 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.error] [08/Oct/2025:09:48:15] ENGINE Bus STOPPING
Oct  8 05:48:15 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: [08/Oct/2025:09:48:15] ENGINE Bus STOPPING
Oct  8 05:48:16 np0005475493 python3.9[104100]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  8 05:48:16 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: [08/Oct/2025:09:48:16] ENGINE HTTP Server cherrypy._cpwsgi_server.CPWSGIServer(('::', 9283)) shut down
Oct  8 05:48:16 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: [08/Oct/2025:09:48:16] ENGINE Bus STOPPED
Oct  8 05:48:16 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.error] [08/Oct/2025:09:48:16] ENGINE HTTP Server cherrypy._cpwsgi_server.CPWSGIServer(('::', 9283)) shut down
Oct  8 05:48:16 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.error] [08/Oct/2025:09:48:16] ENGINE Bus STOPPED
Oct  8 05:48:16 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: [08/Oct/2025:09:48:16] ENGINE Bus STARTING
Oct  8 05:48:16 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.error] [08/Oct/2025:09:48:16] ENGINE Bus STARTING
Oct  8 05:48:16 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:48:16 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:48:16 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://192.168.122.100:3000"}]: dispatch
Oct  8 05:48:16 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:48:16 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: [08/Oct/2025:09:48:16] ENGINE Serving on http://:::9283
Oct  8 05:48:16 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.error] [08/Oct/2025:09:48:16] ENGINE Serving on http://:::9283
Oct  8 05:48:16 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: [08/Oct/2025:09:48:16] ENGINE Bus STARTED
Oct  8 05:48:16 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.error] [08/Oct/2025:09:48:16] ENGINE Bus STARTED
Oct  8 05:48:16 np0005475493 ceph-mgr[73869]: [prometheus INFO root] Engine started.
Oct  8 05:48:16 np0005475493 podman[104264]: 2025-10-08 09:48:16.636782009 +0000 UTC m=+0.109298220 container exec 01c666addd8584f02eb7d29040d97e45d8cb7f834fd134029c1f7cca550aeb9d (image=quay.io/ceph/ceph:v19, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-mon-compute-0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1)
Oct  8 05:48:16 np0005475493 podman[104264]: 2025-10-08 09:48:16.73048324 +0000 UTC m=+0.202999431 container exec_died 01c666addd8584f02eb7d29040d97e45d8cb7f834fd134029c1f7cca550aeb9d (image=quay.io/ceph/ceph:v19, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-mon-compute-0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2)
Oct  8 05:48:16 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:48:16 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6608003ad0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:48:16 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:48:16 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:48:16 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:48:16.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:48:17 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:48:17 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:48:17 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:48:17.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:48:17 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:48:17.156Z caller=cluster.go:698 level=info component=cluster msg="gossip settled; proceeding" elapsed=10.001769584s
Oct  8 05:48:17 np0005475493 python3.9[104486]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  8 05:48:17 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:48:17 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f8002180 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:48:17 np0005475493 podman[104531]: 2025-10-08 09:48:17.336770893 +0000 UTC m=+0.101264675 container exec 4eb6b712d09e2820826e6f961f0aba1bff09f13eee1664dbb03edca7615a708b (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  8 05:48:17 np0005475493 podman[104580]: 2025-10-08 09:48:17.401285653 +0000 UTC m=+0.049878999 container exec_died 4eb6b712d09e2820826e6f961f0aba1bff09f13eee1664dbb03edca7615a708b (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  8 05:48:17 np0005475493 podman[104531]: 2025-10-08 09:48:17.426883704 +0000 UTC m=+0.191377506 container exec_died 4eb6b712d09e2820826e6f961f0aba1bff09f13eee1664dbb03edca7615a708b (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  8 05:48:17 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:48:17 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6618003a90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:48:17 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v35: 353 pgs: 2 peering, 351 active+clean; 455 KiB data, 126 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 0 objects/s recovering
Oct  8 05:48:17 np0005475493 podman[104629]: 2025-10-08 09:48:17.69423122 +0000 UTC m=+0.075095990 container exec c5f79ead609d5155b918e60a1470581982785e4aac0e1c185a19093b2bdf84dc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  8 05:48:17 np0005475493 podman[104674]: 2025-10-08 09:48:17.763176573 +0000 UTC m=+0.052777784 container exec_died c5f79ead609d5155b918e60a1470581982785e4aac0e1c185a19093b2bdf84dc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct  8 05:48:17 np0005475493 podman[104629]: 2025-10-08 09:48:17.803698763 +0000 UTC m=+0.184563543 container exec_died c5f79ead609d5155b918e60a1470581982785e4aac0e1c185a19093b2bdf84dc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  8 05:48:17 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  8 05:48:17 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  8 05:48:17 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 05:48:17 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 05:48:17 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 05:48:17 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 05:48:17 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 05:48:17 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 05:48:18 np0005475493 podman[104748]: 2025-10-08 09:48:18.188574227 +0000 UTC m=+0.122138756 container exec 1c113ffcb0d41c6337c256a4892f13e82bdea6ca8062b10324516aa631301ea5 (image=quay.io/ceph/haproxy:2.3, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-haproxy-nfs-cephfs-compute-0-cwhopp)
Oct  8 05:48:18 np0005475493 podman[104780]: 2025-10-08 09:48:18.255917039 +0000 UTC m=+0.050407332 container exec_died 1c113ffcb0d41c6337c256a4892f13e82bdea6ca8062b10324516aa631301ea5 (image=quay.io/ceph/haproxy:2.3, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-haproxy-nfs-cephfs-compute-0-cwhopp)
Oct  8 05:48:18 np0005475493 podman[104748]: 2025-10-08 09:48:18.30240977 +0000 UTC m=+0.235974269 container exec_died 1c113ffcb0d41c6337c256a4892f13e82bdea6ca8062b10324516aa631301ea5 (image=quay.io/ceph/haproxy:2.3, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-haproxy-nfs-cephfs-compute-0-cwhopp)
Oct  8 05:48:18 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct  8 05:48:18 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e87 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  8 05:48:18 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:48:18 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct  8 05:48:18 np0005475493 python3.9[104863]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  8 05:48:18 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:48:18 np0005475493 podman[104890]: 2025-10-08 09:48:18.673655907 +0000 UTC m=+0.181436593 container exec 5814788fb4b63196d097e0c60082efdca22825a3ba337ddec5c99170a1bfd89d (image=quay.io/ceph/keepalived:2.2.4, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-keepalived-nfs-cephfs-compute-0-ekerbw, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, com.redhat.component=keepalived-container, vendor=Red Hat, Inc., summary=Provides keepalived on RHEL 9 for Ceph., build-date=2023-02-22T09:23:20, distribution-scope=public, io.openshift.tags=Ceph keepalived, architecture=x86_64, release=1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Keepalived on RHEL 9, description=keepalived for Ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, version=2.2.4, name=keepalived, io.openshift.expose-services=, io.buildah.version=1.28.2)
Oct  8 05:48:18 np0005475493 podman[104934]: 2025-10-08 09:48:18.786186038 +0000 UTC m=+0.084986461 container exec_died 5814788fb4b63196d097e0c60082efdca22825a3ba337ddec5c99170a1bfd89d (image=quay.io/ceph/keepalived:2.2.4, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-keepalived-nfs-cephfs-compute-0-ekerbw, io.k8s.display-name=Keepalived on RHEL 9, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, distribution-scope=public, architecture=x86_64, io.buildah.version=1.28.2, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.component=keepalived-container, vcs-type=git, vendor=Red Hat, Inc., version=2.2.4, io.openshift.expose-services=, io.openshift.tags=Ceph keepalived, description=keepalived for Ceph, release=1793, build-date=2023-02-22T09:23:20, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=keepalived, summary=Provides keepalived on RHEL 9 for Ceph.)
Oct  8 05:48:18 np0005475493 podman[104890]: 2025-10-08 09:48:18.795699851 +0000 UTC m=+0.303480537 container exec_died 5814788fb4b63196d097e0c60082efdca22825a3ba337ddec5c99170a1bfd89d (image=quay.io/ceph/keepalived:2.2.4, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-keepalived-nfs-cephfs-compute-0-ekerbw, architecture=x86_64, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, build-date=2023-02-22T09:23:20, distribution-scope=public, version=2.2.4, io.openshift.expose-services=, io.buildah.version=1.28.2, summary=Provides keepalived on RHEL 9 for Ceph., vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.k8s.display-name=Keepalived on RHEL 9, name=keepalived, vendor=Red Hat, Inc., description=keepalived for Ceph, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.component=keepalived-container, io.openshift.tags=Ceph keepalived, release=1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Oct  8 05:48:18 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:48:18 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f66280025c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:48:18 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:48:18 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000026s ======
Oct  8 05:48:18 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:48:18.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  8 05:48:19 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:48:19 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000026s ======
Oct  8 05:48:19 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:48:19.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  8 05:48:19 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:48:19 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6608003ad0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:48:19 np0005475493 podman[105032]: 2025-10-08 09:48:19.296412409 +0000 UTC m=+0.176297332 container exec feb968c21a5fda3cd2e7b86379d5576dbe3dadfd4cb62b92318e18ddbaaafb75 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  8 05:48:19 np0005475493 podman[105135]: 2025-10-08 09:48:19.445367876 +0000 UTC m=+0.104311403 container exec_died feb968c21a5fda3cd2e7b86379d5576dbe3dadfd4cb62b92318e18ddbaaafb75 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  8 05:48:19 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:48:19 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f8002b50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:48:19 np0005475493 podman[105032]: 2025-10-08 09:48:19.530023957 +0000 UTC m=+0.409908850 container exec_died feb968c21a5fda3cd2e7b86379d5576dbe3dadfd4cb62b92318e18ddbaaafb75 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  8 05:48:19 np0005475493 python3.9[105147]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  8 05:48:19 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v36: 353 pgs: 353 active+clean; 457 KiB data, 126 MiB used, 60 GiB / 60 GiB avail
Oct  8 05:48:19 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"} v 0)
Oct  8 05:48:19 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]: dispatch
Oct  8 05:48:19 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:48:19 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:48:19 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e87 do_prune osdmap full prune enabled
Oct  8 05:48:19 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Oct  8 05:48:19 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e88 e88: 3 total, 3 up, 3 in
Oct  8 05:48:19 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e88: 3 total, 3 up, 3 in
Oct  8 05:48:20 np0005475493 podman[105257]: 2025-10-08 09:48:20.103545408 +0000 UTC m=+0.190115384 container exec 73efcf403aa6fcb4b04e1ef714b7bf4169b576e4a4c1b34cdc0b458f62db65fd (image=quay.io/ceph/grafana:10.4.0, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Oct  8 05:48:20 np0005475493 podman[105257]: 2025-10-08 09:48:20.270351808 +0000 UTC m=+0.356921774 container exec_died 73efcf403aa6fcb4b04e1ef714b7bf4169b576e4a4c1b34cdc0b458f62db65fd (image=quay.io/ceph/grafana:10.4.0, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Oct  8 05:48:20 np0005475493 python3.9[105386]: ansible-ansible.builtin.service_facts Invoked
Oct  8 05:48:20 np0005475493 network[105427]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct  8 05:48:20 np0005475493 network[105428]: 'network-scripts' will be removed from distribution in near future.
Oct  8 05:48:20 np0005475493 network[105429]: It is advised to switch to 'NetworkManager' instead for network management.
Oct  8 05:48:20 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-haproxy-nfs-cephfs-compute-0-cwhopp[96588]: [WARNING] 280/094820 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct  8 05:48:20 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]: dispatch
Oct  8 05:48:20 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Oct  8 05:48:20 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:48:20 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6618003ab0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:48:21 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:48:21 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:48:21 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:48:20.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:48:21 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:48:21 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:48:21 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:48:21.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:48:21 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:48:21 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6618003ab0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:48:21 np0005475493 podman[105504]: 2025-10-08 09:48:21.461809415 +0000 UTC m=+0.089898745 container exec 50d7285a7766fc915688c3cc737e186f9ad2230d7b0b7e759880375ad996bb6c (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  8 05:48:21 np0005475493 podman[105504]: 2025-10-08 09:48:21.501450613 +0000 UTC m=+0.129539943 container exec_died 50d7285a7766fc915688c3cc737e186f9ad2230d7b0b7e759880375ad996bb6c (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  8 05:48:21 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:48:21 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6608003ad0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:48:21 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v38: 353 pgs: 353 active+clean; 457 KiB data, 126 MiB used, 60 GiB / 60 GiB avail
Oct  8 05:48:21 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"} v 0)
Oct  8 05:48:21 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]: dispatch
Oct  8 05:48:21 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  8 05:48:21 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e88 do_prune osdmap full prune enabled
Oct  8 05:48:22 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:48:22 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  8 05:48:22 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Oct  8 05:48:22 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e89 e89: 3 total, 3 up, 3 in
Oct  8 05:48:22 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]: dispatch
Oct  8 05:48:22 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e89: 3 total, 3 up, 3 in
Oct  8 05:48:22 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:48:22 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  8 05:48:22 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  8 05:48:22 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct  8 05:48:22 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  8 05:48:22 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct  8 05:48:22 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:48:22 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct  8 05:48:22 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:48:22 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct  8 05:48:22 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  8 05:48:22 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct  8 05:48:22 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  8 05:48:22 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  8 05:48:22 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  8 05:48:22 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:48:22 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6608003ad0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:48:23 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:48:23 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:48:23 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:48:23.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:48:23 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:48:23 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:48:23 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:48:23.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:48:23 np0005475493 podman[105732]: 2025-10-08 09:48:23.126438522 +0000 UTC m=+0.059135444 container create 2e66e2cea46da113d40ab68e481d9c37169d918f5eef581b7327777cd2a53929 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_wright, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1)
Oct  8 05:48:23 np0005475493 systemd[92032]: Starting Mark boot as successful...
Oct  8 05:48:23 np0005475493 systemd[92032]: Finished Mark boot as successful.
Oct  8 05:48:23 np0005475493 podman[105732]: 2025-10-08 09:48:23.089908524 +0000 UTC m=+0.022605446 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 05:48:23 np0005475493 systemd[1]: Started libpod-conmon-2e66e2cea46da113d40ab68e481d9c37169d918f5eef581b7327777cd2a53929.scope.
Oct  8 05:48:23 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:48:23 np0005475493 podman[105732]: 2025-10-08 09:48:23.267294033 +0000 UTC m=+0.199990955 container init 2e66e2cea46da113d40ab68e481d9c37169d918f5eef581b7327777cd2a53929 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_wright, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325)
Oct  8 05:48:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:48:23 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f66280025c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:48:23 np0005475493 podman[105732]: 2025-10-08 09:48:23.27388798 +0000 UTC m=+0.206584872 container start 2e66e2cea46da113d40ab68e481d9c37169d918f5eef581b7327777cd2a53929 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_wright, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Oct  8 05:48:23 np0005475493 vibrant_wright[105774]: 167 167
Oct  8 05:48:23 np0005475493 systemd[1]: libpod-2e66e2cea46da113d40ab68e481d9c37169d918f5eef581b7327777cd2a53929.scope: Deactivated successfully.
Oct  8 05:48:23 np0005475493 podman[105732]: 2025-10-08 09:48:23.323347028 +0000 UTC m=+0.256043950 container attach 2e66e2cea46da113d40ab68e481d9c37169d918f5eef581b7327777cd2a53929 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_wright, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS)
Oct  8 05:48:23 np0005475493 podman[105732]: 2025-10-08 09:48:23.324135248 +0000 UTC m=+0.256832150 container died 2e66e2cea46da113d40ab68e481d9c37169d918f5eef581b7327777cd2a53929 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_wright, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct  8 05:48:23 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e89 do_prune osdmap full prune enabled
Oct  8 05:48:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:48:23 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f66280025c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:48:23 np0005475493 systemd[1]: var-lib-containers-storage-overlay-1490fd9de60f279ae6053b646ec82cc486bc94dc5bb0005afe0e4c3a80771161-merged.mount: Deactivated successfully.
Oct  8 05:48:23 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:48:23 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Oct  8 05:48:23 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:48:23 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  8 05:48:23 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:48:23 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:48:23 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  8 05:48:23 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v40: 353 pgs: 2 unknown, 351 active+clean; 457 KiB data, 126 MiB used, 60 GiB / 60 GiB avail
Oct  8 05:48:23 np0005475493 podman[105732]: 2025-10-08 09:48:23.806347946 +0000 UTC m=+0.739044848 container remove 2e66e2cea46da113d40ab68e481d9c37169d918f5eef581b7327777cd2a53929 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_wright, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  8 05:48:23 np0005475493 systemd[1]: libpod-conmon-2e66e2cea46da113d40ab68e481d9c37169d918f5eef581b7327777cd2a53929.scope: Deactivated successfully.
Oct  8 05:48:23 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e90 e90: 3 total, 3 up, 3 in
Oct  8 05:48:24 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e90: 3 total, 3 up, 3 in
Oct  8 05:48:24 np0005475493 podman[105858]: 2025-10-08 09:48:24.065413982 +0000 UTC m=+0.114042760 container create 1332b4f349ce4693cf98f6c6209b90219e8cc4fe193f061bed8459cd5638621e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_galileo, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True)
Oct  8 05:48:24 np0005475493 podman[105858]: 2025-10-08 09:48:23.972822579 +0000 UTC m=+0.021451407 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 05:48:24 np0005475493 systemd[1]: Started libpod-conmon-1332b4f349ce4693cf98f6c6209b90219e8cc4fe193f061bed8459cd5638621e.scope.
Oct  8 05:48:24 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:48:24 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/341c9f2ddfa65e4d1445758cb22ab47fad14978d3252daa2a056952eab874831/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  8 05:48:24 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/341c9f2ddfa65e4d1445758cb22ab47fad14978d3252daa2a056952eab874831/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 05:48:24 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/341c9f2ddfa65e4d1445758cb22ab47fad14978d3252daa2a056952eab874831/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 05:48:24 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/341c9f2ddfa65e4d1445758cb22ab47fad14978d3252daa2a056952eab874831/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  8 05:48:24 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/341c9f2ddfa65e4d1445758cb22ab47fad14978d3252daa2a056952eab874831/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  8 05:48:24 np0005475493 podman[105858]: 2025-10-08 09:48:24.259434264 +0000 UTC m=+0.308063062 container init 1332b4f349ce4693cf98f6c6209b90219e8cc4fe193f061bed8459cd5638621e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_galileo, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  8 05:48:24 np0005475493 podman[105858]: 2025-10-08 09:48:24.268912765 +0000 UTC m=+0.317541563 container start 1332b4f349ce4693cf98f6c6209b90219e8cc4fe193f061bed8459cd5638621e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_galileo, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  8 05:48:24 np0005475493 podman[105858]: 2025-10-08 09:48:24.302853228 +0000 UTC m=+0.351482006 container attach 1332b4f349ce4693cf98f6c6209b90219e8cc4fe193f061bed8459cd5638621e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_galileo, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  8 05:48:24 np0005475493 python3.9[105938]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:48:24 np0005475493 sharp_galileo[105941]: --> passed data devices: 0 physical, 1 LVM
Oct  8 05:48:24 np0005475493 sharp_galileo[105941]: --> All data devices are unavailable
Oct  8 05:48:24 np0005475493 systemd[1]: libpod-1332b4f349ce4693cf98f6c6209b90219e8cc4fe193f061bed8459cd5638621e.scope: Deactivated successfully.
Oct  8 05:48:24 np0005475493 podman[105858]: 2025-10-08 09:48:24.582963609 +0000 UTC m=+0.631592427 container died 1332b4f349ce4693cf98f6c6209b90219e8cc4fe193f061bed8459cd5638621e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_galileo, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  8 05:48:24 np0005475493 systemd[1]: var-lib-containers-storage-overlay-341c9f2ddfa65e4d1445758cb22ab47fad14978d3252daa2a056952eab874831-merged.mount: Deactivated successfully.
Oct  8 05:48:24 np0005475493 podman[105858]: 2025-10-08 09:48:24.897545415 +0000 UTC m=+0.946174193 container remove 1332b4f349ce4693cf98f6c6209b90219e8cc4fe193f061bed8459cd5638621e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_galileo, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid)
Oct  8 05:48:24 np0005475493 systemd[1]: libpod-conmon-1332b4f349ce4693cf98f6c6209b90219e8cc4fe193f061bed8459cd5638621e.scope: Deactivated successfully.
Oct  8 05:48:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:48:24 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f8002b50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:48:25 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e90 do_prune osdmap full prune enabled
Oct  8 05:48:25 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:48:25 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:48:25 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:48:25.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:48:25 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:48:25 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:48:25 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:48:25.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:48:25 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e91 e91: 3 total, 3 up, 3 in
Oct  8 05:48:25 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e91: 3 total, 3 up, 3 in
Oct  8 05:48:25 np0005475493 python3.9[106117]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  8 05:48:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:48:25 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6608003ad0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:48:25 np0005475493 podman[106216]: 2025-10-08 09:48:25.486107938 +0000 UTC m=+0.099224124 container create 61562118fe62206aa1d04e1ca3513a635be741f48453b430ade0e8cd64f0e71a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_tesla, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Oct  8 05:48:25 np0005475493 podman[106216]: 2025-10-08 09:48:25.412361062 +0000 UTC m=+0.025477278 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 05:48:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:48:25 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f66140014d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:48:25 np0005475493 systemd[1]: Started libpod-conmon-61562118fe62206aa1d04e1ca3513a635be741f48453b430ade0e8cd64f0e71a.scope.
Oct  8 05:48:25 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:48:25 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v43: 353 pgs: 2 unknown, 351 active+clean; 457 KiB data, 126 MiB used, 60 GiB / 60 GiB avail
Oct  8 05:48:25 np0005475493 podman[106216]: 2025-10-08 09:48:25.642128894 +0000 UTC m=+0.255245100 container init 61562118fe62206aa1d04e1ca3513a635be741f48453b430ade0e8cd64f0e71a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_tesla, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  8 05:48:25 np0005475493 podman[106216]: 2025-10-08 09:48:25.649257785 +0000 UTC m=+0.262374011 container start 61562118fe62206aa1d04e1ca3513a635be741f48453b430ade0e8cd64f0e71a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_tesla, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  8 05:48:25 np0005475493 inspiring_tesla[106257]: 167 167
Oct  8 05:48:25 np0005475493 systemd[1]: libpod-61562118fe62206aa1d04e1ca3513a635be741f48453b430ade0e8cd64f0e71a.scope: Deactivated successfully.
Oct  8 05:48:25 np0005475493 podman[106216]: 2025-10-08 09:48:25.695958283 +0000 UTC m=+0.309074469 container attach 61562118fe62206aa1d04e1ca3513a635be741f48453b430ade0e8cd64f0e71a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_tesla, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  8 05:48:25 np0005475493 podman[106216]: 2025-10-08 09:48:25.696879616 +0000 UTC m=+0.309995792 container died 61562118fe62206aa1d04e1ca3513a635be741f48453b430ade0e8cd64f0e71a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_tesla, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  8 05:48:25 np0005475493 systemd[1]: var-lib-containers-storage-overlay-4c0434133753b410a5505efd5e966f544aaa12c6418008eb2c977ac04e8947f7-merged.mount: Deactivated successfully.
Oct  8 05:48:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:09:48:25] "GET /metrics HTTP/1.1" 200 48278 "" "Prometheus/2.51.0"
Oct  8 05:48:25 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:09:48:25] "GET /metrics HTTP/1.1" 200 48278 "" "Prometheus/2.51.0"
Oct  8 05:48:25 np0005475493 podman[106216]: 2025-10-08 09:48:25.899657011 +0000 UTC m=+0.512773197 container remove 61562118fe62206aa1d04e1ca3513a635be741f48453b430ade0e8cd64f0e71a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_tesla, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Oct  8 05:48:25 np0005475493 systemd[1]: libpod-conmon-61562118fe62206aa1d04e1ca3513a635be741f48453b430ade0e8cd64f0e71a.scope: Deactivated successfully.
Oct  8 05:48:26 np0005475493 podman[106296]: 2025-10-08 09:48:26.094739051 +0000 UTC m=+0.058826907 container create 4e74146b2b57a603c2010d087fd0fbded5a5e716ccf6d1d2cacd1530de4140f4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_albattani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Oct  8 05:48:26 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e91 do_prune osdmap full prune enabled
Oct  8 05:48:26 np0005475493 systemd[1]: Started libpod-conmon-4e74146b2b57a603c2010d087fd0fbded5a5e716ccf6d1d2cacd1530de4140f4.scope.
Oct  8 05:48:26 np0005475493 podman[106296]: 2025-10-08 09:48:26.056692473 +0000 UTC m=+0.020780349 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 05:48:26 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e92 e92: 3 total, 3 up, 3 in
Oct  8 05:48:26 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:48:26 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad1da561748051cc71169d7967a8f4d0e5de2411280357829b78c160bd071a70/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  8 05:48:26 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad1da561748051cc71169d7967a8f4d0e5de2411280357829b78c160bd071a70/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 05:48:26 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad1da561748051cc71169d7967a8f4d0e5de2411280357829b78c160bd071a70/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 05:48:26 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad1da561748051cc71169d7967a8f4d0e5de2411280357829b78c160bd071a70/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  8 05:48:26 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e92: 3 total, 3 up, 3 in
Oct  8 05:48:26 np0005475493 podman[106296]: 2025-10-08 09:48:26.250052169 +0000 UTC m=+0.214140025 container init 4e74146b2b57a603c2010d087fd0fbded5a5e716ccf6d1d2cacd1530de4140f4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_albattani, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Oct  8 05:48:26 np0005475493 podman[106296]: 2025-10-08 09:48:26.262253639 +0000 UTC m=+0.226341485 container start 4e74146b2b57a603c2010d087fd0fbded5a5e716ccf6d1d2cacd1530de4140f4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_albattani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Oct  8 05:48:26 np0005475493 podman[106296]: 2025-10-08 09:48:26.316165449 +0000 UTC m=+0.280253385 container attach 4e74146b2b57a603c2010d087fd0fbded5a5e716ccf6d1d2cacd1530de4140f4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_albattani, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  8 05:48:26 np0005475493 determined_albattani[106323]: {
Oct  8 05:48:26 np0005475493 determined_albattani[106323]:    "1": [
Oct  8 05:48:26 np0005475493 determined_albattani[106323]:        {
Oct  8 05:48:26 np0005475493 determined_albattani[106323]:            "devices": [
Oct  8 05:48:26 np0005475493 determined_albattani[106323]:                "/dev/loop3"
Oct  8 05:48:26 np0005475493 determined_albattani[106323]:            ],
Oct  8 05:48:26 np0005475493 determined_albattani[106323]:            "lv_name": "ceph_lv0",
Oct  8 05:48:26 np0005475493 determined_albattani[106323]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  8 05:48:26 np0005475493 determined_albattani[106323]:            "lv_size": "21470642176",
Oct  8 05:48:26 np0005475493 determined_albattani[106323]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=787292cc-8154-50c4-9e00-e9be3e817149,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=85fe3e7b-5e0f-4a19-934c-310215b2e933,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct  8 05:48:26 np0005475493 determined_albattani[106323]:            "lv_uuid": "znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj",
Oct  8 05:48:26 np0005475493 determined_albattani[106323]:            "name": "ceph_lv0",
Oct  8 05:48:26 np0005475493 determined_albattani[106323]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  8 05:48:26 np0005475493 determined_albattani[106323]:            "tags": {
Oct  8 05:48:26 np0005475493 determined_albattani[106323]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  8 05:48:26 np0005475493 determined_albattani[106323]:                "ceph.block_uuid": "znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj",
Oct  8 05:48:26 np0005475493 determined_albattani[106323]:                "ceph.cephx_lockbox_secret": "",
Oct  8 05:48:26 np0005475493 determined_albattani[106323]:                "ceph.cluster_fsid": "787292cc-8154-50c4-9e00-e9be3e817149",
Oct  8 05:48:26 np0005475493 determined_albattani[106323]:                "ceph.cluster_name": "ceph",
Oct  8 05:48:26 np0005475493 determined_albattani[106323]:                "ceph.crush_device_class": "",
Oct  8 05:48:26 np0005475493 determined_albattani[106323]:                "ceph.encrypted": "0",
Oct  8 05:48:26 np0005475493 determined_albattani[106323]:                "ceph.osd_fsid": "85fe3e7b-5e0f-4a19-934c-310215b2e933",
Oct  8 05:48:26 np0005475493 determined_albattani[106323]:                "ceph.osd_id": "1",
Oct  8 05:48:26 np0005475493 determined_albattani[106323]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  8 05:48:26 np0005475493 determined_albattani[106323]:                "ceph.type": "block",
Oct  8 05:48:26 np0005475493 determined_albattani[106323]:                "ceph.vdo": "0",
Oct  8 05:48:26 np0005475493 determined_albattani[106323]:                "ceph.with_tpm": "0"
Oct  8 05:48:26 np0005475493 determined_albattani[106323]:            },
Oct  8 05:48:26 np0005475493 determined_albattani[106323]:            "type": "block",
Oct  8 05:48:26 np0005475493 determined_albattani[106323]:            "vg_name": "ceph_vg0"
Oct  8 05:48:26 np0005475493 determined_albattani[106323]:        }
Oct  8 05:48:26 np0005475493 determined_albattani[106323]:    ]
Oct  8 05:48:26 np0005475493 determined_albattani[106323]: }
Oct  8 05:48:26 np0005475493 systemd[1]: libpod-4e74146b2b57a603c2010d087fd0fbded5a5e716ccf6d1d2cacd1530de4140f4.scope: Deactivated successfully.
Oct  8 05:48:26 np0005475493 podman[106296]: 2025-10-08 09:48:26.583352971 +0000 UTC m=+0.547440847 container died 4e74146b2b57a603c2010d087fd0fbded5a5e716ccf6d1d2cacd1530de4140f4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_albattani, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  8 05:48:26 np0005475493 systemd[1]: var-lib-containers-storage-overlay-ad1da561748051cc71169d7967a8f4d0e5de2411280357829b78c160bd071a70-merged.mount: Deactivated successfully.
Oct  8 05:48:26 np0005475493 python3.9[106434]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  8 05:48:26 np0005475493 podman[106296]: 2025-10-08 09:48:26.886384995 +0000 UTC m=+0.850472841 container remove 4e74146b2b57a603c2010d087fd0fbded5a5e716ccf6d1d2cacd1530de4140f4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_albattani, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Oct  8 05:48:26 np0005475493 systemd[1]: libpod-conmon-4e74146b2b57a603c2010d087fd0fbded5a5e716ccf6d1d2cacd1530de4140f4.scope: Deactivated successfully.
Oct  8 05:48:26 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:48:26 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f66280025c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:48:27 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:48:27 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:48:27 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:48:27.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:48:27 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:48:27 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:48:27 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:48:27.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:48:27 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e92 do_prune osdmap full prune enabled
Oct  8 05:48:27 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:48:27 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f8003860 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:48:27 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e93 e93: 3 total, 3 up, 3 in
Oct  8 05:48:27 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e93: 3 total, 3 up, 3 in
Oct  8 05:48:27 np0005475493 podman[106602]: 2025-10-08 09:48:27.495091148 +0000 UTC m=+0.091646520 container create f9c8b3fc4a64f6269514c95d0d99444f53a6e9d678377a9b12587d171f3e80f1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_bhabha, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1)
Oct  8 05:48:27 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:48:27 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6608003ad0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:48:27 np0005475493 podman[106602]: 2025-10-08 09:48:27.424487634 +0000 UTC m=+0.021043036 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 05:48:27 np0005475493 systemd[1]: Started libpod-conmon-f9c8b3fc4a64f6269514c95d0d99444f53a6e9d678377a9b12587d171f3e80f1.scope.
Oct  8 05:48:27 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:48:27 np0005475493 podman[106602]: 2025-10-08 09:48:27.629732921 +0000 UTC m=+0.226288313 container init f9c8b3fc4a64f6269514c95d0d99444f53a6e9d678377a9b12587d171f3e80f1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_bhabha, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct  8 05:48:27 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v46: 353 pgs: 2 unknown, 351 active+clean; 457 KiB data, 126 MiB used, 60 GiB / 60 GiB avail
Oct  8 05:48:27 np0005475493 podman[106602]: 2025-10-08 09:48:27.636272648 +0000 UTC m=+0.232828020 container start f9c8b3fc4a64f6269514c95d0d99444f53a6e9d678377a9b12587d171f3e80f1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_bhabha, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  8 05:48:27 np0005475493 vibrant_bhabha[106687]: 167 167
Oct  8 05:48:27 np0005475493 systemd[1]: libpod-f9c8b3fc4a64f6269514c95d0d99444f53a6e9d678377a9b12587d171f3e80f1.scope: Deactivated successfully.
Oct  8 05:48:27 np0005475493 podman[106602]: 2025-10-08 09:48:27.684015861 +0000 UTC m=+0.280571273 container attach f9c8b3fc4a64f6269514c95d0d99444f53a6e9d678377a9b12587d171f3e80f1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_bhabha, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  8 05:48:27 np0005475493 podman[106602]: 2025-10-08 09:48:27.684821022 +0000 UTC m=+0.281376404 container died f9c8b3fc4a64f6269514c95d0d99444f53a6e9d678377a9b12587d171f3e80f1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_bhabha, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  8 05:48:27 np0005475493 systemd[1]: var-lib-containers-storage-overlay-06dd95b1d7eaf74a835513406bd038f36123f2eb5446fad567019a555a2b109a-merged.mount: Deactivated successfully.
Oct  8 05:48:27 np0005475493 podman[106602]: 2025-10-08 09:48:27.964708407 +0000 UTC m=+0.561263819 container remove f9c8b3fc4a64f6269514c95d0d99444f53a6e9d678377a9b12587d171f3e80f1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_bhabha, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  8 05:48:28 np0005475493 python3.9[106731]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  8 05:48:28 np0005475493 systemd[1]: libpod-conmon-f9c8b3fc4a64f6269514c95d0d99444f53a6e9d678377a9b12587d171f3e80f1.scope: Deactivated successfully.
Oct  8 05:48:28 np0005475493 podman[106746]: 2025-10-08 09:48:28.133068076 +0000 UTC m=+0.072341420 container create 3f3931a0e12108feef7cc63358beb7411d0a1182e10cf1ceee0edc89e5448ba2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_jepsen, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  8 05:48:28 np0005475493 podman[106746]: 2025-10-08 09:48:28.080615523 +0000 UTC m=+0.019888887 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 05:48:28 np0005475493 systemd[1]: Started libpod-conmon-3f3931a0e12108feef7cc63358beb7411d0a1182e10cf1ceee0edc89e5448ba2.scope.
Oct  8 05:48:28 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:48:28 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b288599fd60511384094b6d6443b7e523b9fb741e100a0c9d9e9d105a62b871e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  8 05:48:28 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b288599fd60511384094b6d6443b7e523b9fb741e100a0c9d9e9d105a62b871e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 05:48:28 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b288599fd60511384094b6d6443b7e523b9fb741e100a0c9d9e9d105a62b871e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 05:48:28 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b288599fd60511384094b6d6443b7e523b9fb741e100a0c9d9e9d105a62b871e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  8 05:48:28 np0005475493 podman[106746]: 2025-10-08 09:48:28.337630636 +0000 UTC m=+0.276904050 container init 3f3931a0e12108feef7cc63358beb7411d0a1182e10cf1ceee0edc89e5448ba2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_jepsen, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  8 05:48:28 np0005475493 podman[106746]: 2025-10-08 09:48:28.34446513 +0000 UTC m=+0.283738514 container start 3f3931a0e12108feef7cc63358beb7411d0a1182e10cf1ceee0edc89e5448ba2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_jepsen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  8 05:48:28 np0005475493 podman[106746]: 2025-10-08 09:48:28.367336012 +0000 UTC m=+0.306609376 container attach 3f3931a0e12108feef7cc63358beb7411d0a1182e10cf1ceee0edc89e5448ba2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_jepsen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  8 05:48:28 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  8 05:48:28 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:48:28 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f66140014d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:48:28 np0005475493 python3.9[106889]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  8 05:48:28 np0005475493 lvm[106914]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct  8 05:48:28 np0005475493 lvm[106914]: VG ceph_vg0 finished
Oct  8 05:48:29 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:48:29 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:48:29 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:48:29.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:48:29 np0005475493 flamboyant_jepsen[106763]: {}
Oct  8 05:48:29 np0005475493 systemd[1]: libpod-3f3931a0e12108feef7cc63358beb7411d0a1182e10cf1ceee0edc89e5448ba2.scope: Deactivated successfully.
Oct  8 05:48:29 np0005475493 systemd[1]: libpod-3f3931a0e12108feef7cc63358beb7411d0a1182e10cf1ceee0edc89e5448ba2.scope: Consumed 1.017s CPU time.
Oct  8 05:48:29 np0005475493 podman[106746]: 2025-10-08 09:48:29.078790438 +0000 UTC m=+1.018063802 container died 3f3931a0e12108feef7cc63358beb7411d0a1182e10cf1ceee0edc89e5448ba2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_jepsen, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  8 05:48:29 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:48:29 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000026s ======
Oct  8 05:48:29 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:48:29.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  8 05:48:29 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:48:29 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6628004290 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:48:29 np0005475493 systemd[1]: var-lib-containers-storage-overlay-b288599fd60511384094b6d6443b7e523b9fb741e100a0c9d9e9d105a62b871e-merged.mount: Deactivated successfully.
Oct  8 05:48:29 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:48:29 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f8003860 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:48:29 np0005475493 podman[106746]: 2025-10-08 09:48:29.56118332 +0000 UTC m=+1.500456664 container remove 3f3931a0e12108feef7cc63358beb7411d0a1182e10cf1ceee0edc89e5448ba2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_jepsen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  8 05:48:29 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  8 05:48:29 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v47: 353 pgs: 353 active+clean; 457 KiB data, 126 MiB used, 60 GiB / 60 GiB avail
Oct  8 05:48:29 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"} v 0)
Oct  8 05:48:29 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Oct  8 05:48:29 np0005475493 systemd[1]: libpod-conmon-3f3931a0e12108feef7cc63358beb7411d0a1182e10cf1ceee0edc89e5448ba2.scope: Deactivated successfully.
Oct  8 05:48:29 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:48:29 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  8 05:48:29 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:48:29 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:48:29 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  8 05:48:30 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e93 do_prune osdmap full prune enabled
Oct  8 05:48:30 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Oct  8 05:48:30 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e94 e94: 3 total, 3 up, 3 in
Oct  8 05:48:30 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e94: 3 total, 3 up, 3 in
Oct  8 05:48:30 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Oct  8 05:48:30 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:48:30 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:48:30 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:48:30 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6608003ad0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:48:31 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:48:31 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:48:31 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:48:31.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:48:31 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:48:31 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:48:31 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:48:31.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:48:31 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:48:31 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f66140014d0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:48:31 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:48:31 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6628004290 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:48:31 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v49: 353 pgs: 353 active+clean; 457 KiB data, 126 MiB used, 60 GiB / 60 GiB avail
Oct  8 05:48:31 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"} v 0)
Oct  8 05:48:31 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Oct  8 05:48:31 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e94 do_prune osdmap full prune enabled
Oct  8 05:48:31 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Oct  8 05:48:31 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e95 e95: 3 total, 3 up, 3 in
Oct  8 05:48:31 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Oct  8 05:48:31 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Oct  8 05:48:31 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e95: 3 total, 3 up, 3 in
Oct  8 05:48:32 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e95 do_prune osdmap full prune enabled
Oct  8 05:48:32 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e96 e96: 3 total, 3 up, 3 in
Oct  8 05:48:32 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e96: 3 total, 3 up, 3 in
Oct  8 05:48:32 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Oct  8 05:48:32 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  8 05:48:32 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  8 05:48:32 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:48:32 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  8 05:48:32 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:48:32 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 05:48:32 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:48:32 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f8003860 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:48:33 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:48:33 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:48:33 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:48:33.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:48:33 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:48:33 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:48:33 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:48:33.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:48:33 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:48:33 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6608003ad0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:48:33 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e96 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  8 05:48:33 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:48:33 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6614001670 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:48:33 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v52: 353 pgs: 2 remapped+peering, 351 active+clean; 457 KiB data, 126 MiB used, 60 GiB / 60 GiB avail
Oct  8 05:48:33 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e96 do_prune osdmap full prune enabled
Oct  8 05:48:33 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e97 e97: 3 total, 3 up, 3 in
Oct  8 05:48:33 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e97: 3 total, 3 up, 3 in
Oct  8 05:48:34 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e97 do_prune osdmap full prune enabled
Oct  8 05:48:34 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e98 e98: 3 total, 3 up, 3 in
Oct  8 05:48:34 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e98: 3 total, 3 up, 3 in
Oct  8 05:48:34 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:48:34 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6628004290 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:48:35 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:48:35 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:48:35 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:48:35.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:48:35 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:48:35 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:48:35 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:48:35.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:48:35 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:48:35 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6628004290 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:48:35 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:48:35 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6608003ad0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:48:35 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v55: 353 pgs: 2 remapped+peering, 351 active+clean; 457 KiB data, 126 MiB used, 60 GiB / 60 GiB avail
Oct  8 05:48:35 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:09:48:35] "GET /metrics HTTP/1.1" 200 48268 "" "Prometheus/2.51.0"
Oct  8 05:48:35 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:09:48:35] "GET /metrics HTTP/1.1" 200 48268 "" "Prometheus/2.51.0"
Oct  8 05:48:35 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:48:35 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct  8 05:48:35 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e98 do_prune osdmap full prune enabled
Oct  8 05:48:35 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e99 e99: 3 total, 3 up, 3 in
Oct  8 05:48:35 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e99: 3 total, 3 up, 3 in
Oct  8 05:48:36 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e99 do_prune osdmap full prune enabled
Oct  8 05:48:36 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:48:36 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f66140036a0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:48:37 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e100 e100: 3 total, 3 up, 3 in
Oct  8 05:48:37 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e100: 3 total, 3 up, 3 in
Oct  8 05:48:37 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:48:37 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:48:37 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:48:37.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:48:37 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:48:37 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:48:37 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:48:37.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:48:37 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:48:37 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f66140036a0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:48:37 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:48:37 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f8003860 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:48:37 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v58: 353 pgs: 2 remapped+peering, 351 active+clean; 457 KiB data, 126 MiB used, 60 GiB / 60 GiB avail
Oct  8 05:48:38 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e100 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  8 05:48:38 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:48:38 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6608003ad0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:48:39 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:48:39 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:48:39 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:48:39.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:48:39 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:48:39 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:48:39 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:48:39.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:48:39 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:48:39 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f66140036a0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:48:39 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:48:39 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f66140036a0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:48:39 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v59: 353 pgs: 353 active+clean; 457 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 348 B/s rd, 174 B/s wr, 0 op/s; 37 B/s, 2 objects/s recovering
Oct  8 05:48:39 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"} v 0)
Oct  8 05:48:39 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Oct  8 05:48:40 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e100 do_prune osdmap full prune enabled
Oct  8 05:48:40 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Oct  8 05:48:40 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Oct  8 05:48:40 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e101 e101: 3 total, 3 up, 3 in
Oct  8 05:48:40 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e101: 3 total, 3 up, 3 in
Oct  8 05:48:40 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 101 pg[9.10( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=2 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=101 pruub=13.476019859s) [0] r=-1 lpr=101 pi=[54,101)/1 crt=45'1018 lcod 0'0 mlcod 0'0 active pruub 262.316497803s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:48:40 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 101 pg[9.10( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=2 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=101 pruub=13.475879669s) [0] r=-1 lpr=101 pi=[54,101)/1 crt=45'1018 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 262.316497803s@ mbc={}] state<Start>: transitioning to Stray
Oct  8 05:48:40 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:48:40 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f8003860 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:48:41 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:48:41 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:48:41 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:48:41.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:48:41 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Oct  8 05:48:41 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:48:41 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:48:41 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:48:41.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:48:41 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e101 do_prune osdmap full prune enabled
Oct  8 05:48:41 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e102 e102: 3 total, 3 up, 3 in
Oct  8 05:48:41 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e102: 3 total, 3 up, 3 in
Oct  8 05:48:41 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 102 pg[9.10( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=2 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=102) [0]/[1] r=0 lpr=102 pi=[54,102)/1 crt=45'1018 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:48:41 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 102 pg[9.10( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=2 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=102) [0]/[1] r=0 lpr=102 pi=[54,102)/1 crt=45'1018 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  8 05:48:41 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:48:41 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6608003ad0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:48:41 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:48:41 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f66140036a0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:48:41 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v62: 353 pgs: 353 active+clean; 457 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 358 B/s rd, 179 B/s wr, 0 op/s; 38 B/s, 2 objects/s recovering
Oct  8 05:48:41 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"} v 0)
Oct  8 05:48:41 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]: dispatch
Oct  8 05:48:42 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e102 do_prune osdmap full prune enabled
Oct  8 05:48:42 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]: dispatch
Oct  8 05:48:42 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Oct  8 05:48:42 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e103 e103: 3 total, 3 up, 3 in
Oct  8 05:48:42 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e103: 3 total, 3 up, 3 in
Oct  8 05:48:42 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 103 pg[9.11( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=5 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=103 pruub=12.052393913s) [0] r=-1 lpr=103 pi=[54,103)/1 crt=45'1018 lcod 0'0 mlcod 0'0 active pruub 262.316528320s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:48:42 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 103 pg[9.11( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=5 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=103 pruub=12.052357674s) [0] r=-1 lpr=103 pi=[54,103)/1 crt=45'1018 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 262.316528320s@ mbc={}] state<Start>: transitioning to Stray
Oct  8 05:48:42 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 103 pg[9.10( v 45'1018 (0'0,45'1018] local-lis/les=102/103 n=2 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=102) [0]/[1] async=[0] r=0 lpr=102 pi=[54,102)/1 crt=45'1018 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:48:42 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-haproxy-nfs-cephfs-compute-0-cwhopp[96588]: [WARNING] 280/094842 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct  8 05:48:42 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:48:42 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f66140036a0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:48:43 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:48:43 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:48:43 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:48:43.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:48:43 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:48:43 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:48:43 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:48:43.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:48:43 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e103 do_prune osdmap full prune enabled
Oct  8 05:48:43 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Oct  8 05:48:43 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e104 e104: 3 total, 3 up, 3 in
Oct  8 05:48:43 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e104: 3 total, 3 up, 3 in
Oct  8 05:48:43 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 104 pg[9.10( v 45'1018 (0'0,45'1018] local-lis/les=102/103 n=2 ec=54/38 lis/c=102/54 les/c/f=103/56/0 sis=104 pruub=14.987093925s) [0] async=[0] r=-1 lpr=104 pi=[54,104)/1 crt=45'1018 lcod 0'0 mlcod 0'0 active pruub 266.272033691s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:48:43 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 104 pg[9.10( v 45'1018 (0'0,45'1018] local-lis/les=102/103 n=2 ec=54/38 lis/c=102/54 les/c/f=103/56/0 sis=104 pruub=14.987010956s) [0] r=-1 lpr=104 pi=[54,104)/1 crt=45'1018 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 266.272033691s@ mbc={}] state<Start>: transitioning to Stray
Oct  8 05:48:43 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 104 pg[9.11( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=5 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=104) [0]/[1] r=0 lpr=104 pi=[54,104)/1 crt=45'1018 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:48:43 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 104 pg[9.11( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=5 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=104) [0]/[1] r=0 lpr=104 pi=[54,104)/1 crt=45'1018 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  8 05:48:43 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:48:43 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f8003860 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:48:43 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e104 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  8 05:48:43 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:48:43 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6600002550 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:48:43 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v65: 353 pgs: 1 remapped+peering, 1 peering, 351 active+clean; 457 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s; 0 B/s, 0 objects/s recovering
Oct  8 05:48:44 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e104 do_prune osdmap full prune enabled
Oct  8 05:48:44 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e105 e105: 3 total, 3 up, 3 in
Oct  8 05:48:44 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e105: 3 total, 3 up, 3 in
Oct  8 05:48:44 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 105 pg[9.11( v 45'1018 (0'0,45'1018] local-lis/les=104/105 n=5 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=104) [0]/[1] async=[0] r=0 lpr=104 pi=[54,104)/1 crt=45'1018 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:48:44 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:48:44 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c002690 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:48:45 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:48:45 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:48:45 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:48:45.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:48:45 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:48:45 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:48:45 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:48:45.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:48:45 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e105 do_prune osdmap full prune enabled
Oct  8 05:48:45 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e106 e106: 3 total, 3 up, 3 in
Oct  8 05:48:45 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e106: 3 total, 3 up, 3 in
Oct  8 05:48:45 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 106 pg[9.11( v 45'1018 (0'0,45'1018] local-lis/les=104/105 n=5 ec=54/38 lis/c=104/54 les/c/f=105/56/0 sis=106 pruub=14.974084854s) [0] async=[0] r=-1 lpr=106 pi=[54,106)/1 crt=45'1018 lcod 0'0 mlcod 0'0 active pruub 268.305603027s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:48:45 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 106 pg[9.11( v 45'1018 (0'0,45'1018] local-lis/les=104/105 n=5 ec=54/38 lis/c=104/54 les/c/f=105/56/0 sis=106 pruub=14.974020958s) [0] r=-1 lpr=106 pi=[54,106)/1 crt=45'1018 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 268.305603027s@ mbc={}] state<Start>: transitioning to Stray
Oct  8 05:48:45 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:48:45 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6628004290 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:48:45 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:48:45 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f8003860 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:48:45 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v68: 353 pgs: 1 remapped+peering, 1 peering, 351 active+clean; 457 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s; 0 B/s, 0 objects/s recovering
Oct  8 05:48:45 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:09:48:45] "GET /metrics HTTP/1.1" 200 48268 "" "Prometheus/2.51.0"
Oct  8 05:48:45 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:09:48:45] "GET /metrics HTTP/1.1" 200 48268 "" "Prometheus/2.51.0"
Oct  8 05:48:46 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e106 do_prune osdmap full prune enabled
Oct  8 05:48:46 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e107 e107: 3 total, 3 up, 3 in
Oct  8 05:48:46 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e107: 3 total, 3 up, 3 in
Oct  8 05:48:46 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:48:46 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6600002550 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:48:47 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:48:47 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:48:47 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:48:47.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:48:47 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:48:47 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:48:47 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:48:47.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:48:47 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:48:47 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c002690 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:48:47 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:48:47 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6628004290 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:48:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] Optimize plan auto_2025-10-08_09:48:47
Oct  8 05:48:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  8 05:48:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] Some PGs (0.005666) are inactive; try again later
Oct  8 05:48:47 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v70: 353 pgs: 1 remapped+peering, 1 peering, 351 active+clean; 457 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 230 B/s rd, 0 B/s wr, 0 op/s; 0 B/s, 0 objects/s recovering
Oct  8 05:48:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] _maybe_adjust
Oct  8 05:48:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 05:48:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  8 05:48:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 05:48:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 05:48:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 05:48:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 05:48:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 05:48:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 05:48:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 05:48:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 05:48:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 05:48:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  8 05:48:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 05:48:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 05:48:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 05:48:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct  8 05:48:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 05:48:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  8 05:48:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 05:48:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 05:48:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 05:48:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  8 05:48:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 05:48:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  8 05:48:47 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  8 05:48:47 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  8 05:48:47 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 05:48:47 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 05:48:47 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  8 05:48:47 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  8 05:48:47 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 05:48:47 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 05:48:47 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  8 05:48:47 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  8 05:48:47 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  8 05:48:47 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 05:48:47 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 05:48:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  8 05:48:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  8 05:48:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  8 05:48:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  8 05:48:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  8 05:48:48 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e107 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  8 05:48:48 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:48:48 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f8003860 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:48:49 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:48:49 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:48:49 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:48:49.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:48:49 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:48:49 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:48:49 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:48:49.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:48:49 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:48:49 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6600002550 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:48:49 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:48:49 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c002690 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:48:49 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v71: 353 pgs: 353 active+clean; 457 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s; 18 B/s, 0 objects/s recovering
Oct  8 05:48:49 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"} v 0)
Oct  8 05:48:49 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Oct  8 05:48:49 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e107 do_prune osdmap full prune enabled
Oct  8 05:48:49 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Oct  8 05:48:49 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e108 e108: 3 total, 3 up, 3 in
Oct  8 05:48:49 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Oct  8 05:48:49 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e108: 3 total, 3 up, 3 in
Oct  8 05:48:50 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 108 pg[9.12( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=4 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=108 pruub=11.404694557s) [0] r=-1 lpr=108 pi=[54,108)/1 crt=45'1018 lcod 0'0 mlcod 0'0 active pruub 270.319274902s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:48:50 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 108 pg[9.12( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=4 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=108 pruub=11.404635429s) [0] r=-1 lpr=108 pi=[54,108)/1 crt=45'1018 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 270.319274902s@ mbc={}] state<Start>: transitioning to Stray
Oct  8 05:48:50 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e108 do_prune osdmap full prune enabled
Oct  8 05:48:50 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:48:50 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6628004290 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:48:51 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Oct  8 05:48:51 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:48:51 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:48:51 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:48:51.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:48:51 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e109 e109: 3 total, 3 up, 3 in
Oct  8 05:48:51 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e109: 3 total, 3 up, 3 in
Oct  8 05:48:51 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 109 pg[9.12( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=4 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=109) [0]/[1] r=0 lpr=109 pi=[54,109)/1 crt=45'1018 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:48:51 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 109 pg[9.12( v 45'1018 (0'0,45'1018] local-lis/les=54/56 n=4 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=109) [0]/[1] r=0 lpr=109 pi=[54,109)/1 crt=45'1018 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  8 05:48:51 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:48:51 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:48:51 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:48:51.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:48:51 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:48:51 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f8003860 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:48:51 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:48:51 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6600002550 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:48:51 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v74: 353 pgs: 353 active+clean; 457 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s; 18 B/s, 0 objects/s recovering
Oct  8 05:48:51 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"} v 0)
Oct  8 05:48:51 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Oct  8 05:48:52 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e109 do_prune osdmap full prune enabled
Oct  8 05:48:52 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Oct  8 05:48:52 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e110 e110: 3 total, 3 up, 3 in
Oct  8 05:48:52 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e110: 3 total, 3 up, 3 in
Oct  8 05:48:52 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Oct  8 05:48:52 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 110 pg[9.12( v 45'1018 (0'0,45'1018] local-lis/les=109/110 n=4 ec=54/38 lis/c=54/54 les/c/f=56/56/0 sis=109) [0]/[1] async=[0] r=0 lpr=109 pi=[54,109)/1 crt=45'1018 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:48:52 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:48:52 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c002690 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:48:53 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:48:53 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:48:53 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:48:53.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:48:53 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e110 do_prune osdmap full prune enabled
Oct  8 05:48:53 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Oct  8 05:48:53 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:48:53 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:48:53 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:48:53.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:48:53 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e111 e111: 3 total, 3 up, 3 in
Oct  8 05:48:53 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e111: 3 total, 3 up, 3 in
Oct  8 05:48:53 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 111 pg[9.12( v 45'1018 (0'0,45'1018] local-lis/les=109/110 n=4 ec=54/38 lis/c=109/54 les/c/f=110/56/0 sis=111 pruub=15.415892601s) [0] async=[0] r=-1 lpr=111 pi=[54,111)/1 crt=45'1018 lcod 0'0 mlcod 0'0 active pruub 276.629608154s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:48:53 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 111 pg[9.12( v 45'1018 (0'0,45'1018] local-lis/les=109/110 n=4 ec=54/38 lis/c=109/54 les/c/f=110/56/0 sis=111 pruub=15.415235519s) [0] r=-1 lpr=111 pi=[54,111)/1 crt=45'1018 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 276.629608154s@ mbc={}] state<Start>: transitioning to Stray
Oct  8 05:48:53 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:48:53 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6628004290 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:48:53 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e111 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  8 05:48:53 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:48:53 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f8003860 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:48:53 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v77: 353 pgs: 1 peering, 352 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s; 27 B/s, 0 objects/s recovering
Oct  8 05:48:54 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e111 do_prune osdmap full prune enabled
Oct  8 05:48:54 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e112 e112: 3 total, 3 up, 3 in
Oct  8 05:48:54 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e112: 3 total, 3 up, 3 in
Oct  8 05:48:54 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:48:54 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6600002550 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:48:55 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:48:55 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:48:55 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:48:55.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:48:55 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:48:55 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:48:55 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:48:55.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:48:55 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:48:55 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c002690 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:48:55 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:48:55 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6628004290 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:48:55 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v79: 353 pgs: 1 peering, 352 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 224 B/s rd, 0 op/s; 24 B/s, 0 objects/s recovering
Oct  8 05:48:55 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:09:48:55] "GET /metrics HTTP/1.1" 200 48254 "" "Prometheus/2.51.0"
Oct  8 05:48:55 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:09:48:55] "GET /metrics HTTP/1.1" 200 48254 "" "Prometheus/2.51.0"
Oct  8 05:48:56 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:48:56 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f8003860 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:48:57 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:48:57 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:48:57 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:48:57.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:48:57 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:48:57 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:48:57 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:48:57.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:48:57 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:48:57 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f66000032e0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:48:57 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:48:57 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c003630 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:48:57 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v80: 353 pgs: 1 peering, 352 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s; 18 B/s, 0 objects/s recovering
Oct  8 05:48:58 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e112 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  8 05:48:58 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:48:58 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6628004290 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:48:59 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:48:59 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 05:48:59 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:48:59.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 05:48:59 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:48:59 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:48:59 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:48:59.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:48:59 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:48:59 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f8003860 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:48:59 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:48:59 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f66000032e0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:48:59 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v81: 353 pgs: 353 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 406 B/s rd, 0 op/s; 14 B/s, 0 objects/s recovering
Oct  8 05:48:59 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"} v 0)
Oct  8 05:48:59 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]: dispatch
Oct  8 05:48:59 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e112 do_prune osdmap full prune enabled
Oct  8 05:48:59 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]: dispatch
Oct  8 05:48:59 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Oct  8 05:48:59 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e113 e113: 3 total, 3 up, 3 in
Oct  8 05:48:59 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e113: 3 total, 3 up, 3 in
Oct  8 05:49:00 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Oct  8 05:49:00 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:49:00 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c003630 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:49:01 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:49:01 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:49:01 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:49:01.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:49:01 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:49:01 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:49:01 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:49:01.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:49:01 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:49:01 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6628004290 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:49:01 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:49:01 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f8003860 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:49:01 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v83: 353 pgs: 353 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct  8 05:49:01 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"} v 0)
Oct  8 05:49:01 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Oct  8 05:49:01 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e113 do_prune osdmap full prune enabled
Oct  8 05:49:01 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Oct  8 05:49:01 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Oct  8 05:49:01 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e114 e114: 3 total, 3 up, 3 in
Oct  8 05:49:01 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e114: 3 total, 3 up, 3 in
Oct  8 05:49:02 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e114 do_prune osdmap full prune enabled
Oct  8 05:49:02 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Oct  8 05:49:02 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  8 05:49:02 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  8 05:49:02 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e115 e115: 3 total, 3 up, 3 in
Oct  8 05:49:02 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e115: 3 total, 3 up, 3 in
Oct  8 05:49:02 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:49:02 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f66000032e0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:49:03 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:49:03 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:49:03 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:49:03.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:49:03 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:49:03 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000034s ======
Oct  8 05:49:03 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:49:03.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct  8 05:49:03 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:49:03 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c003630 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:49:03 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  8 05:49:03 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:49:03 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6628004290 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:49:03 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v86: 353 pgs: 1 remapped+peering, 352 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Oct  8 05:49:03 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e115 do_prune osdmap full prune enabled
Oct  8 05:49:03 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e116 e116: 3 total, 3 up, 3 in
Oct  8 05:49:03 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e116: 3 total, 3 up, 3 in
Oct  8 05:49:04 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e116 do_prune osdmap full prune enabled
Oct  8 05:49:04 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e117 e117: 3 total, 3 up, 3 in
Oct  8 05:49:04 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e117: 3 total, 3 up, 3 in
Oct  8 05:49:04 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:49:04 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f8003860 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:49:05 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:49:05 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:49:05 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:49:05.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:49:05 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:49:05 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:49:05 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:49:05.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:49:05 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:49:05 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6600004380 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:49:05 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:49:05 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c003630 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:49:05 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v89: 353 pgs: 1 remapped+peering, 352 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct  8 05:49:05 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:09:49:05] "GET /metrics HTTP/1.1" 200 48248 "" "Prometheus/2.51.0"
Oct  8 05:49:05 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:09:49:05] "GET /metrics HTTP/1.1" 200 48248 "" "Prometheus/2.51.0"
Oct  8 05:49:05 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e117 do_prune osdmap full prune enabled
Oct  8 05:49:06 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e118 e118: 3 total, 3 up, 3 in
Oct  8 05:49:06 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e118: 3 total, 3 up, 3 in
Oct  8 05:49:07 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:49:06 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6628004290 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:49:07 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:49:07 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:49:07 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:49:07.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:49:07 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:49:07 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:49:07 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:49:07.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:49:07 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:49:07 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f8003860 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:49:07 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:49:07 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6600004380 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:49:07 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v91: 353 pgs: 1 remapped+peering, 352 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 212 B/s rd, 0 op/s
Oct  8 05:49:08 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  8 05:49:09 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:49:09 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c003630 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:49:09 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:49:09 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 05:49:09 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:49:09.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 05:49:09 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:49:09 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:49:09 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:49:09.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:49:09 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:49:09 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6628004290 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:49:09 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:49:09 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f8003860 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:49:09 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v92: 353 pgs: 353 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s; 18 B/s, 0 objects/s recovering
Oct  8 05:49:09 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"} v 0)
Oct  8 05:49:09 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]: dispatch
Oct  8 05:49:10 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e118 do_prune osdmap full prune enabled
Oct  8 05:49:10 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Oct  8 05:49:10 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e119 e119: 3 total, 3 up, 3 in
Oct  8 05:49:10 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e119: 3 total, 3 up, 3 in
Oct  8 05:49:10 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]: dispatch
Oct  8 05:49:11 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:49:11 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6600004380 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:49:11 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:49:11 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:49:11 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:49:11.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:49:11 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e119 do_prune osdmap full prune enabled
Oct  8 05:49:11 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Oct  8 05:49:11 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e120 e120: 3 total, 3 up, 3 in
Oct  8 05:49:11 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:49:11 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:49:11 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:49:11.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:49:11 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e120: 3 total, 3 up, 3 in
Oct  8 05:49:11 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:49:11 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c003630 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:49:11 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:49:11 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6628004290 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:49:11 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v95: 353 pgs: 353 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s; 18 B/s, 0 objects/s recovering
Oct  8 05:49:11 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"} v 0)
Oct  8 05:49:11 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Oct  8 05:49:12 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e120 do_prune osdmap full prune enabled
Oct  8 05:49:12 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Oct  8 05:49:12 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e121 e121: 3 total, 3 up, 3 in
Oct  8 05:49:12 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Oct  8 05:49:12 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e121: 3 total, 3 up, 3 in
Oct  8 05:49:13 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:49:13 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f8003860 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:49:13 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:49:13 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 05:49:13 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:49:13.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 05:49:13 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:49:13 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000034s ======
Oct  8 05:49:13 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:49:13.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct  8 05:49:13 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e121 do_prune osdmap full prune enabled
Oct  8 05:49:13 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Oct  8 05:49:13 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e122 e122: 3 total, 3 up, 3 in
Oct  8 05:49:13 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e122: 3 total, 3 up, 3 in
Oct  8 05:49:13 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:49:13 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6600004380 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:49:13 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 05:49:13 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:49:13 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c003630 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:49:13 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v98: 353 pgs: 1 peering, 352 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s; 27 B/s, 0 objects/s recovering
Oct  8 05:49:14 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e122 do_prune osdmap full prune enabled
Oct  8 05:49:14 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e123 e123: 3 total, 3 up, 3 in
Oct  8 05:49:14 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e123: 3 total, 3 up, 3 in
Oct  8 05:49:15 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:49:15 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6628004290 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:49:15 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:49:15 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 05:49:15 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:49:15.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 05:49:15 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:49:15 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:49:15 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:49:15.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:49:15 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:49:15 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f8003860 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:49:15 np0005475493 python3.9[107358]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  8 05:49:15 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:49:15 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6614001ef0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:49:15 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v100: 353 pgs: 1 peering, 352 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 227 B/s rd, 0 op/s; 24 B/s, 0 objects/s recovering
Oct  8 05:49:15 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:09:49:15] "GET /metrics HTTP/1.1" 200 48248 "" "Prometheus/2.51.0"
Oct  8 05:49:15 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:09:49:15] "GET /metrics HTTP/1.1" 200 48248 "" "Prometheus/2.51.0"
Oct  8 05:49:17 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:49:17 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6608000f30 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:49:17 np0005475493 ceph-mgr[73869]: [dashboard INFO request] [192.168.122.100:52740] [POST] [200] [0.118s] [4.0B] [8d746302-ff19-4c72-b43b-3193d3c1e5e8] /api/prometheus_receiver
Oct  8 05:49:17 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:49:17 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000034s ======
Oct  8 05:49:17 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:49:17.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct  8 05:49:17 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:49:17 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000034s ======
Oct  8 05:49:17 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:49:17.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct  8 05:49:17 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:49:17 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f8003860 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:49:17 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:49:17 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6628004290 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:49:17 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v101: 353 pgs: 1 peering, 352 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s; 18 B/s, 0 objects/s recovering
Oct  8 05:49:17 np0005475493 python3.9[107649]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Oct  8 05:49:17 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  8 05:49:17 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  8 05:49:17 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 05:49:17 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 05:49:17 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 05:49:17 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', <mgr_util.CephfsConnectionPool.Connection object at 0x7fa0e4fbbd00>)]
Oct  8 05:49:17 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Oct  8 05:49:17 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 05:49:17 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', <mgr_util.CephfsConnectionPool.Connection object at 0x7fa0e4fbb8e0>)]
Oct  8 05:49:17 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Oct  8 05:49:18 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 05:49:18 np0005475493 python3.9[107802]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Oct  8 05:49:19 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:49:19 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6628004290 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:49:19 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:49:19 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:49:19 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:49:19.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:49:19 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:49:19 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:49:19 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:49:19.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:49:19 np0005475493 python3.9[107955]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:49:19 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:49:19 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6608000f30 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:49:19 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:49:19 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f8003860 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:49:19 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v102: 353 pgs: 353 active+clean; 458 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 410 B/s rd, 410 B/s wr, 0 op/s; 14 B/s, 0 objects/s recovering
Oct  8 05:49:19 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"} v 0)
Oct  8 05:49:19 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]: dispatch
Oct  8 05:49:19 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e123 do_prune osdmap full prune enabled
Oct  8 05:49:19 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]: dispatch
Oct  8 05:49:19 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Oct  8 05:49:19 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e124 e124: 3 total, 3 up, 3 in
Oct  8 05:49:20 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e124: 3 total, 3 up, 3 in
Oct  8 05:49:20 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : mgrmap e32: compute-0.ixicfj(active, since 92s), standbys: compute-2.mtagwx, compute-1.swlvov
Oct  8 05:49:20 np0005475493 python3.9[108108]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Oct  8 05:49:21 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Oct  8 05:49:21 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:49:21 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6628004290 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:49:21 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:49:21 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:49:21 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:49:21.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:49:21 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:49:21 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:49:21 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:49:21.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:49:21 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:49:21 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6628004290 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:49:21 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:49:21 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6608000f30 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:49:21 np0005475493 python3.9[108286]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  8 05:49:21 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v104: 353 pgs: 353 active+clean; 458 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 383 B/s wr, 0 op/s
Oct  8 05:49:21 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"} v 0)
Oct  8 05:49:21 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]: dispatch
Oct  8 05:49:22 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e124 do_prune osdmap full prune enabled
Oct  8 05:49:22 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]: dispatch
Oct  8 05:49:22 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Oct  8 05:49:22 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e125 e125: 3 total, 3 up, 3 in
Oct  8 05:49:22 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e125: 3 total, 3 up, 3 in
Oct  8 05:49:22 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 125 pg[9.19( empty local-lis/les=0/0 n=0 ec=54/38 lis/c=81/81 les/c/f=82/82/0 sis=125) [1] r=0 lpr=125 pi=[81,125)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:49:22 np0005475493 python3.9[108439]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  8 05:49:22 np0005475493 python3.9[108517]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem _original_basename=tls-ca-bundle.pem recurse=False state=file path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:49:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:49:23 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6628004290 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:49:23 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e125 do_prune osdmap full prune enabled
Oct  8 05:49:23 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e126 e126: 3 total, 3 up, 3 in
Oct  8 05:49:23 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e126: 3 total, 3 up, 3 in
Oct  8 05:49:23 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 126 pg[9.19( empty local-lis/les=0/0 n=0 ec=54/38 lis/c=81/81 les/c/f=82/82/0 sis=126) [1]/[2] r=-1 lpr=126 pi=[81,126)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:49:23 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 126 pg[9.19( empty local-lis/les=0/0 n=0 ec=54/38 lis/c=81/81 les/c/f=82/82/0 sis=126) [1]/[2] r=-1 lpr=126 pi=[81,126)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  8 05:49:23 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:49:23 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000034s ======
Oct  8 05:49:23 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:49:23.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct  8 05:49:23 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Oct  8 05:49:23 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:49:23 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:49:23 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:49:23.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:49:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:49:23 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6628004290 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:49:23 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 05:49:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:49:23 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6628004290 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:49:23 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v107: 353 pgs: 1 remapped+peering, 352 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 511 B/s wr, 0 op/s
Oct  8 05:49:24 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e126 do_prune osdmap full prune enabled
Oct  8 05:49:24 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e127 e127: 3 total, 3 up, 3 in
Oct  8 05:49:24 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e127: 3 total, 3 up, 3 in
Oct  8 05:49:24 np0005475493 python3.9[108671]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Oct  8 05:49:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:49:25 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f66080032f0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:49:25 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:49:25 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:49:25 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:49:25.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:49:25 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e127 do_prune osdmap full prune enabled
Oct  8 05:49:25 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e128 e128: 3 total, 3 up, 3 in
Oct  8 05:49:25 np0005475493 python3.9[108824]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Oct  8 05:49:25 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e128: 3 total, 3 up, 3 in
Oct  8 05:49:25 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:49:25 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:49:25 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:49:25.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:49:25 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 128 pg[9.19( v 45'1018 (0'0,45'1018] local-lis/les=0/0 n=7 ec=54/38 lis/c=126/81 les/c/f=127/82/0 sis=128) [1] r=0 lpr=128 pi=[81,128)/1 luod=0'0 crt=45'1018 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:49:25 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 128 pg[9.19( v 45'1018 (0'0,45'1018] local-lis/les=0/0 n=7 ec=54/38 lis/c=126/81 les/c/f=127/82/0 sis=128) [1] r=0 lpr=128 pi=[81,128)/1 crt=45'1018 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:49:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:49:25 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f8003860 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:49:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:49:25 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f8003860 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:49:25 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v110: 353 pgs: 1 remapped+peering, 352 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct  8 05:49:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:09:49:25] "GET /metrics HTTP/1.1" 200 48252 "" "Prometheus/2.51.0"
Oct  8 05:49:25 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:09:49:25] "GET /metrics HTTP/1.1" 200 48252 "" "Prometheus/2.51.0"
Oct  8 05:49:26 np0005475493 python3.9[108978]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Oct  8 05:49:26 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e128 do_prune osdmap full prune enabled
Oct  8 05:49:26 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e129 e129: 3 total, 3 up, 3 in
Oct  8 05:49:26 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e129: 3 total, 3 up, 3 in
Oct  8 05:49:26 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 129 pg[9.19( v 45'1018 (0'0,45'1018] local-lis/les=128/129 n=7 ec=54/38 lis/c=126/81 les/c/f=127/82/0 sis=128) [1] r=0 lpr=128 pi=[81,128)/1 crt=45'1018 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:49:26 np0005475493 python3.9[109131]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Oct  8 05:49:26 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:49:26.953Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct  8 05:49:26 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:49:26.953Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 05:49:27 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:49:27 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f8003860 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:49:27 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:49:27 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000034s ======
Oct  8 05:49:27 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:49:27.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct  8 05:49:27 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:49:27 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:49:27 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:49:27.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:49:27 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:49:27 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f8003860 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:49:27 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:49:27 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6628004290 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:49:27 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v112: 353 pgs: 1 remapped+peering, 352 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 224 B/s rd, 0 op/s
Oct  8 05:49:27 np0005475493 python3.9[109284]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  8 05:49:28 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 05:49:29 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:49:29 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f66080032f0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:49:29 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:49:29 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:49:29 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:49:29.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:49:29 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:49:29 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:49:29 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:49:29.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:49:29 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:49:29 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f8003860 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:49:29 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:49:29 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f8003860 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:49:29 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v113: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s; 18 B/s, 1 objects/s recovering
Oct  8 05:49:29 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"} v 0)
Oct  8 05:49:29 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Oct  8 05:49:29 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e129 do_prune osdmap full prune enabled
Oct  8 05:49:29 np0005475493 python3.9[109439]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  8 05:49:29 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Oct  8 05:49:29 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e130 e130: 3 total, 3 up, 3 in
Oct  8 05:49:29 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Oct  8 05:49:29 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e130: 3 total, 3 up, 3 in
Oct  8 05:49:29 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 130 pg[9.1a( empty local-lis/les=0/0 n=0 ec=54/38 lis/c=86/86 les/c/f=87/87/0 sis=130) [1] r=0 lpr=130 pi=[86,130)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:49:30 np0005475493 python3.9[109642]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  8 05:49:30 np0005475493 podman[109739]: 2025-10-08 09:49:30.630838013 +0000 UTC m=+0.054208407 container exec 01c666addd8584f02eb7d29040d97e45d8cb7f834fd134029c1f7cca550aeb9d (image=quay.io/ceph/ceph:v19, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-mon-compute-0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Oct  8 05:49:30 np0005475493 podman[109739]: 2025-10-08 09:49:30.727475165 +0000 UTC m=+0.150845529 container exec_died 01c666addd8584f02eb7d29040d97e45d8cb7f834fd134029c1f7cca550aeb9d (image=quay.io/ceph/ceph:v19, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-mon-compute-0, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  8 05:49:30 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e130 do_prune osdmap full prune enabled
Oct  8 05:49:30 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Oct  8 05:49:30 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e131 e131: 3 total, 3 up, 3 in
Oct  8 05:49:30 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e131: 3 total, 3 up, 3 in
Oct  8 05:49:30 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 131 pg[9.1a( empty local-lis/les=0/0 n=0 ec=54/38 lis/c=86/86 les/c/f=87/87/0 sis=131) [1]/[0] r=-1 lpr=131 pi=[86,131)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:49:30 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 131 pg[9.1a( empty local-lis/les=0/0 n=0 ec=54/38 lis/c=86/86 les/c/f=87/87/0 sis=131) [1]/[0] r=-1 lpr=131 pi=[86,131)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  8 05:49:30 np0005475493 python3.9[109812]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/modules-load.d/99-edpm.conf _original_basename=edpm-modprobe.conf.j2 recurse=False state=file path=/etc/modules-load.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  8 05:49:31 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:49:31 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6628004290 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:49:31 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:49:31 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000034s ======
Oct  8 05:49:31 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:49:31.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct  8 05:49:31 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:49:31 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:49:31 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:49:31.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:49:31 np0005475493 podman[109997]: 2025-10-08 09:49:31.222917738 +0000 UTC m=+0.050502294 container exec 4eb6b712d09e2820826e6f961f0aba1bff09f13eee1664dbb03edca7615a708b (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  8 05:49:31 np0005475493 podman[109997]: 2025-10-08 09:49:31.234518305 +0000 UTC m=+0.062102861 container exec_died 4eb6b712d09e2820826e6f961f0aba1bff09f13eee1664dbb03edca7615a708b (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  8 05:49:31 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:49:31 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f66080032f0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:49:31 np0005475493 podman[110157]: 2025-10-08 09:49:31.477841345 +0000 UTC m=+0.052381697 container exec c5f79ead609d5155b918e60a1470581982785e4aac0e1c185a19093b2bdf84dc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  8 05:49:31 np0005475493 podman[110157]: 2025-10-08 09:49:31.491394666 +0000 UTC m=+0.065934988 container exec_died c5f79ead609d5155b918e60a1470581982785e4aac0e1c185a19093b2bdf84dc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  8 05:49:31 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:49:31 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f8003860 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:49:31 np0005475493 python3.9[110156]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  8 05:49:31 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v116: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s; 18 B/s, 1 objects/s recovering
Oct  8 05:49:31 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} v 0)
Oct  8 05:49:31 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Oct  8 05:49:31 np0005475493 podman[110224]: 2025-10-08 09:49:31.684507853 +0000 UTC m=+0.048741276 container exec 1c113ffcb0d41c6337c256a4892f13e82bdea6ca8062b10324516aa631301ea5 (image=quay.io/ceph/haproxy:2.3, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-haproxy-nfs-cephfs-compute-0-cwhopp)
Oct  8 05:49:31 np0005475493 podman[110224]: 2025-10-08 09:49:31.741757132 +0000 UTC m=+0.105990535 container exec_died 1c113ffcb0d41c6337c256a4892f13e82bdea6ca8062b10324516aa631301ea5 (image=quay.io/ceph/haproxy:2.3, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-haproxy-nfs-cephfs-compute-0-cwhopp)
Oct  8 05:49:31 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e131 do_prune osdmap full prune enabled
Oct  8 05:49:31 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Oct  8 05:49:31 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Oct  8 05:49:31 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e132 e132: 3 total, 3 up, 3 in
Oct  8 05:49:31 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e132: 3 total, 3 up, 3 in
Oct  8 05:49:31 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 132 pg[9.1b( empty local-lis/les=0/0 n=0 ec=54/38 lis/c=65/65 les/c/f=66/66/0 sis=132) [1] r=0 lpr=132 pi=[65,132)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:49:31 np0005475493 podman[110367]: 2025-10-08 09:49:31.934871768 +0000 UTC m=+0.051715015 container exec 5814788fb4b63196d097e0c60082efdca22825a3ba337ddec5c99170a1bfd89d (image=quay.io/ceph/keepalived:2.2.4, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-keepalived-nfs-cephfs-compute-0-ekerbw, io.buildah.version=1.28.2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, description=keepalived for Ceph, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.openshift.tags=Ceph keepalived, summary=Provides keepalived on RHEL 9 for Ceph., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2023-02-22T09:23:20, vendor=Red Hat, Inc., version=2.2.4, com.redhat.component=keepalived-container, name=keepalived, release=1793, architecture=x86_64, io.k8s.display-name=Keepalived on RHEL 9, distribution-scope=public)
Oct  8 05:49:31 np0005475493 podman[110367]: 2025-10-08 09:49:31.970959931 +0000 UTC m=+0.087803158 container exec_died 5814788fb4b63196d097e0c60082efdca22825a3ba337ddec5c99170a1bfd89d (image=quay.io/ceph/keepalived:2.2.4, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-keepalived-nfs-cephfs-compute-0-ekerbw, build-date=2023-02-22T09:23:20, io.buildah.version=1.28.2, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.openshift.expose-services=, release=1793, vendor=Red Hat, Inc., com.redhat.component=keepalived-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=Ceph keepalived, summary=Provides keepalived on RHEL 9 for Ceph., version=2.2.4, name=keepalived, architecture=x86_64, vcs-type=git, distribution-scope=public, io.k8s.display-name=Keepalived on RHEL 9, description=keepalived for Ceph, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9)
Oct  8 05:49:32 np0005475493 python3.9[110353]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/sysctl.d/99-edpm.conf _original_basename=edpm-sysctl.conf.j2 recurse=False state=file path=/etc/sysctl.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  8 05:49:32 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-haproxy-nfs-cephfs-compute-0-cwhopp[96588]: [WARNING] 280/094932 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct  8 05:49:32 np0005475493 podman[110456]: 2025-10-08 09:49:32.18727457 +0000 UTC m=+0.052024165 container exec feb968c21a5fda3cd2e7b86379d5576dbe3dadfd4cb62b92318e18ddbaaafb75 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  8 05:49:32 np0005475493 podman[110456]: 2025-10-08 09:49:32.211378754 +0000 UTC m=+0.076128239 container exec_died feb968c21a5fda3cd2e7b86379d5576dbe3dadfd4cb62b92318e18ddbaaafb75 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  8 05:49:32 np0005475493 podman[110528]: 2025-10-08 09:49:32.438943359 +0000 UTC m=+0.055804172 container exec 73efcf403aa6fcb4b04e1ef714b7bf4169b576e4a4c1b34cdc0b458f62db65fd (image=quay.io/ceph/grafana:10.4.0, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Oct  8 05:49:32 np0005475493 podman[110528]: 2025-10-08 09:49:32.606502034 +0000 UTC m=+0.223362837 container exec_died 73efcf403aa6fcb4b04e1ef714b7bf4169b576e4a4c1b34cdc0b458f62db65fd (image=quay.io/ceph/grafana:10.4.0, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Oct  8 05:49:32 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  8 05:49:32 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  8 05:49:32 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e132 do_prune osdmap full prune enabled
Oct  8 05:49:32 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Oct  8 05:49:32 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e133 e133: 3 total, 3 up, 3 in
Oct  8 05:49:32 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e133: 3 total, 3 up, 3 in
Oct  8 05:49:32 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 133 pg[9.1a( v 45'1018 (0'0,45'1018] local-lis/les=0/0 n=4 ec=54/38 lis/c=131/86 les/c/f=132/87/0 sis=133) [1] r=0 lpr=133 pi=[86,133)/1 luod=0'0 crt=45'1018 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:49:32 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 133 pg[9.1b( empty local-lis/les=0/0 n=0 ec=54/38 lis/c=65/65 les/c/f=66/66/0 sis=133) [1]/[2] r=-1 lpr=133 pi=[65,133)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:49:32 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 133 pg[9.1b( empty local-lis/les=0/0 n=0 ec=54/38 lis/c=65/65 les/c/f=66/66/0 sis=133) [1]/[2] r=-1 lpr=133 pi=[65,133)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  8 05:49:32 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 133 pg[9.1a( v 45'1018 (0'0,45'1018] local-lis/les=0/0 n=4 ec=54/38 lis/c=131/86 les/c/f=132/87/0 sis=133) [1] r=0 lpr=133 pi=[86,133)/1 crt=45'1018 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:49:32 np0005475493 podman[110767]: 2025-10-08 09:49:32.949816496 +0000 UTC m=+0.065073090 container exec 50d7285a7766fc915688c3cc737e186f9ad2230d7b0b7e759880375ad996bb6c (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  8 05:49:32 np0005475493 podman[110767]: 2025-10-08 09:49:32.99764658 +0000 UTC m=+0.112903174 container exec_died 50d7285a7766fc915688c3cc737e186f9ad2230d7b0b7e759880375ad996bb6c (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  8 05:49:33 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:49:33 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c003630 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:49:33 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  8 05:49:33 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:49:33 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  8 05:49:33 np0005475493 python3.9[110759]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  8 05:49:33 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:49:33 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:49:33 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000034s ======
Oct  8 05:49:33 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:49:33.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct  8 05:49:33 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:49:33 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:49:33 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:49:33.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:49:33 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:49:33 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6628004290 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:49:33 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 05:49:33 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:49:33 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f66080032f0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:49:33 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v119: 353 pgs: 1 remapped+peering, 1 peering, 351 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s; 27 B/s, 0 objects/s recovering
Oct  8 05:49:33 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  8 05:49:33 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  8 05:49:33 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct  8 05:49:33 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  8 05:49:33 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct  8 05:49:33 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:49:33 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct  8 05:49:33 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:49:33 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct  8 05:49:33 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  8 05:49:33 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct  8 05:49:33 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  8 05:49:33 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  8 05:49:33 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  8 05:49:33 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e133 do_prune osdmap full prune enabled
Oct  8 05:49:33 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:49:33 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:49:33 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  8 05:49:33 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:49:33 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:49:33 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  8 05:49:33 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e134 e134: 3 total, 3 up, 3 in
Oct  8 05:49:33 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e134: 3 total, 3 up, 3 in
Oct  8 05:49:33 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 134 pg[9.1a( v 45'1018 (0'0,45'1018] local-lis/les=133/134 n=4 ec=54/38 lis/c=131/86 les/c/f=132/87/0 sis=133) [1] r=0 lpr=133 pi=[86,133)/1 crt=45'1018 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:49:34 np0005475493 ceph-mon[73572]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #24. Immutable memtables: 0.
Oct  8 05:49:34 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-09:49:34.109571) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  8 05:49:34 np0005475493 ceph-mon[73572]: rocksdb: [db/flush_job.cc:856] [default] [JOB 7] Flushing memtable with next log file: 24
Oct  8 05:49:34 np0005475493 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759916974109671, "job": 7, "event": "flush_started", "num_memtables": 1, "num_entries": 2762, "num_deletes": 251, "total_data_size": 6580274, "memory_usage": 6685736, "flush_reason": "Manual Compaction"}
Oct  8 05:49:34 np0005475493 ceph-mon[73572]: rocksdb: [db/flush_job.cc:885] [default] [JOB 7] Level-0 flush table #25: started
Oct  8 05:49:34 np0005475493 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759916974150444, "cf_name": "default", "job": 7, "event": "table_file_creation", "file_number": 25, "file_size": 6136445, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8018, "largest_seqno": 10779, "table_properties": {"data_size": 6123232, "index_size": 8555, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3589, "raw_key_size": 31061, "raw_average_key_size": 21, "raw_value_size": 6095197, "raw_average_value_size": 4304, "num_data_blocks": 374, "num_entries": 1416, "num_filter_entries": 1416, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759916868, "oldest_key_time": 1759916868, "file_creation_time": 1759916974, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fe81d9b-468a-4413-adf1-4e4bd83383d4", "db_session_id": "KN4HYS7VUCE6V85JIQOU", "orig_file_number": 25, "seqno_to_time_mapping": "N/A"}}
Oct  8 05:49:34 np0005475493 ceph-mon[73572]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 7] Flush lasted 40884 microseconds, and 10022 cpu microseconds.
Oct  8 05:49:34 np0005475493 ceph-mon[73572]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  8 05:49:34 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-09:49:34.150494) [db/flush_job.cc:967] [default] [JOB 7] Level-0 flush table #25: 6136445 bytes OK
Oct  8 05:49:34 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-09:49:34.150514) [db/memtable_list.cc:519] [default] Level-0 commit table #25 started
Oct  8 05:49:34 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-09:49:34.152124) [db/memtable_list.cc:722] [default] Level-0 commit table #25: memtable #1 done
Oct  8 05:49:34 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-09:49:34.152136) EVENT_LOG_v1 {"time_micros": 1759916974152133, "job": 7, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  8 05:49:34 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-09:49:34.152152) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  8 05:49:34 np0005475493 ceph-mon[73572]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 7] Try to delete WAL files size 6567733, prev total WAL file size 6567733, number of live WAL files 2.
Oct  8 05:49:34 np0005475493 ceph-mon[73572]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000021.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  8 05:49:34 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-09:49:34.153487) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300323531' seq:72057594037927935, type:22 .. '7061786F7300353033' seq:0, type:0; will stop at (end)
Oct  8 05:49:34 np0005475493 ceph-mon[73572]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 8] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  8 05:49:34 np0005475493 ceph-mon[73572]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 7 Base level 0, inputs: [25(5992KB)], [23(11MB)]
Oct  8 05:49:34 np0005475493 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759916974153534, "job": 8, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [25], "files_L6": [23], "score": -1, "input_data_size": 18138861, "oldest_snapshot_seqno": -1}
Oct  8 05:49:34 np0005475493 ceph-mon[73572]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 8] Generated table #26: 4032 keys, 14239944 bytes, temperature: kUnknown
Oct  8 05:49:34 np0005475493 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759916974261555, "cf_name": "default", "job": 8, "event": "table_file_creation", "file_number": 26, "file_size": 14239944, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14207795, "index_size": 20967, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10117, "raw_key_size": 103008, "raw_average_key_size": 25, "raw_value_size": 14128784, "raw_average_value_size": 3504, "num_data_blocks": 900, "num_entries": 4032, "num_filter_entries": 4032, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759916581, "oldest_key_time": 0, "file_creation_time": 1759916974, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fe81d9b-468a-4413-adf1-4e4bd83383d4", "db_session_id": "KN4HYS7VUCE6V85JIQOU", "orig_file_number": 26, "seqno_to_time_mapping": "N/A"}}
Oct  8 05:49:34 np0005475493 ceph-mon[73572]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  8 05:49:34 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-09:49:34.261771) [db/compaction/compaction_job.cc:1663] [default] [JOB 8] Compacted 1@0 + 1@6 files to L6 => 14239944 bytes
Oct  8 05:49:34 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-09:49:34.271691) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 167.8 rd, 131.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(5.9, 11.4 +0.0 blob) out(13.6 +0.0 blob), read-write-amplify(5.3) write-amplify(2.3) OK, records in: 4564, records dropped: 532 output_compression: NoCompression
Oct  8 05:49:34 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-09:49:34.271737) EVENT_LOG_v1 {"time_micros": 1759916974271720, "job": 8, "event": "compaction_finished", "compaction_time_micros": 108078, "compaction_time_cpu_micros": 28401, "output_level": 6, "num_output_files": 1, "total_output_size": 14239944, "num_input_records": 4564, "num_output_records": 4032, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  8 05:49:34 np0005475493 ceph-mon[73572]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000025.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  8 05:49:34 np0005475493 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759916974272898, "job": 8, "event": "table_file_deletion", "file_number": 25}
Oct  8 05:49:34 np0005475493 ceph-mon[73572]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000023.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  8 05:49:34 np0005475493 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759916974275131, "job": 8, "event": "table_file_deletion", "file_number": 23}
Oct  8 05:49:34 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-09:49:34.153392) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  8 05:49:34 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-09:49:34.275166) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  8 05:49:34 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-09:49:34.275170) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  8 05:49:34 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-09:49:34.275172) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  8 05:49:34 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-09:49:34.275173) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  8 05:49:34 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-09:49:34.275174) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  8 05:49:34 np0005475493 podman[110994]: 2025-10-08 09:49:34.421846609 +0000 UTC m=+0.040460589 container create 89f09d6f8780091f2af2f0dc4820d443aef08b43dfa41a46b761aacfa41b92f8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_chatterjee, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  8 05:49:34 np0005475493 systemd[1]: Started libpod-conmon-89f09d6f8780091f2af2f0dc4820d443aef08b43dfa41a46b761aacfa41b92f8.scope.
Oct  8 05:49:34 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:49:34 np0005475493 podman[110994]: 2025-10-08 09:49:34.487727445 +0000 UTC m=+0.106341435 container init 89f09d6f8780091f2af2f0dc4820d443aef08b43dfa41a46b761aacfa41b92f8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_chatterjee, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Oct  8 05:49:34 np0005475493 podman[110994]: 2025-10-08 09:49:34.494039335 +0000 UTC m=+0.112653315 container start 89f09d6f8780091f2af2f0dc4820d443aef08b43dfa41a46b761aacfa41b92f8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_chatterjee, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Oct  8 05:49:34 np0005475493 podman[110994]: 2025-10-08 09:49:34.497966746 +0000 UTC m=+0.116580776 container attach 89f09d6f8780091f2af2f0dc4820d443aef08b43dfa41a46b761aacfa41b92f8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_chatterjee, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Oct  8 05:49:34 np0005475493 funny_chatterjee[111026]: 167 167
Oct  8 05:49:34 np0005475493 systemd[1]: libpod-89f09d6f8780091f2af2f0dc4820d443aef08b43dfa41a46b761aacfa41b92f8.scope: Deactivated successfully.
Oct  8 05:49:34 np0005475493 podman[110994]: 2025-10-08 09:49:34.499314782 +0000 UTC m=+0.117928762 container died 89f09d6f8780091f2af2f0dc4820d443aef08b43dfa41a46b761aacfa41b92f8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_chatterjee, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default)
Oct  8 05:49:34 np0005475493 podman[110994]: 2025-10-08 09:49:34.407287004 +0000 UTC m=+0.025901014 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 05:49:34 np0005475493 systemd[1]: var-lib-containers-storage-overlay-fc1b60c698e92c72ad9c206b5601ff42a9ac402d82892ed03471eb83e6d2b0cd-merged.mount: Deactivated successfully.
Oct  8 05:49:34 np0005475493 podman[110994]: 2025-10-08 09:49:34.557054566 +0000 UTC m=+0.175668546 container remove 89f09d6f8780091f2af2f0dc4820d443aef08b43dfa41a46b761aacfa41b92f8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_chatterjee, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Oct  8 05:49:34 np0005475493 systemd[1]: libpod-conmon-89f09d6f8780091f2af2f0dc4820d443aef08b43dfa41a46b761aacfa41b92f8.scope: Deactivated successfully.
Oct  8 05:49:34 np0005475493 podman[111052]: 2025-10-08 09:49:34.722758289 +0000 UTC m=+0.043245823 container create 49d2dbc852b0006646ebad04ea83d02d10db610dd795e040e3cde60964bd81ab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_elion, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  8 05:49:34 np0005475493 systemd[1]: Started libpod-conmon-49d2dbc852b0006646ebad04ea83d02d10db610dd795e040e3cde60964bd81ab.scope.
Oct  8 05:49:34 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:49:34 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63f0e0fe779506391a5bf4882eeb228fa413ae6eb8d0177ac901fa9b827aad74/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  8 05:49:34 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63f0e0fe779506391a5bf4882eeb228fa413ae6eb8d0177ac901fa9b827aad74/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 05:49:34 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63f0e0fe779506391a5bf4882eeb228fa413ae6eb8d0177ac901fa9b827aad74/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 05:49:34 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63f0e0fe779506391a5bf4882eeb228fa413ae6eb8d0177ac901fa9b827aad74/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  8 05:49:34 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63f0e0fe779506391a5bf4882eeb228fa413ae6eb8d0177ac901fa9b827aad74/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  8 05:49:34 np0005475493 podman[111052]: 2025-10-08 09:49:34.789900967 +0000 UTC m=+0.110388601 container init 49d2dbc852b0006646ebad04ea83d02d10db610dd795e040e3cde60964bd81ab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_elion, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  8 05:49:34 np0005475493 podman[111052]: 2025-10-08 09:49:34.701578413 +0000 UTC m=+0.022065977 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 05:49:34 np0005475493 podman[111052]: 2025-10-08 09:49:34.798063699 +0000 UTC m=+0.118551263 container start 49d2dbc852b0006646ebad04ea83d02d10db610dd795e040e3cde60964bd81ab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_elion, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  8 05:49:34 np0005475493 podman[111052]: 2025-10-08 09:49:34.801845435 +0000 UTC m=+0.122332989 container attach 49d2dbc852b0006646ebad04ea83d02d10db610dd795e040e3cde60964bd81ab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_elion, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct  8 05:49:34 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e134 do_prune osdmap full prune enabled
Oct  8 05:49:34 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e135 e135: 3 total, 3 up, 3 in
Oct  8 05:49:34 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e135: 3 total, 3 up, 3 in
Oct  8 05:49:34 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 135 pg[9.1b( v 45'1018 (0'0,45'1018] local-lis/les=0/0 n=2 ec=54/38 lis/c=133/65 les/c/f=134/66/0 sis=135) [1] r=0 lpr=135 pi=[65,135)/1 luod=0'0 crt=45'1018 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:49:34 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 135 pg[9.1b( v 45'1018 (0'0,45'1018] local-lis/les=0/0 n=2 ec=54/38 lis/c=133/65 les/c/f=134/66/0 sis=135) [1] r=0 lpr=135 pi=[65,135)/1 crt=45'1018 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:49:35 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:49:35 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f8003860 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:49:35 np0005475493 beautiful_elion[111068]: --> passed data devices: 0 physical, 1 LVM
Oct  8 05:49:35 np0005475493 beautiful_elion[111068]: --> All data devices are unavailable
Oct  8 05:49:35 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:49:35 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:49:35 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:49:35.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:49:35 np0005475493 systemd[1]: libpod-49d2dbc852b0006646ebad04ea83d02d10db610dd795e040e3cde60964bd81ab.scope: Deactivated successfully.
Oct  8 05:49:35 np0005475493 podman[111052]: 2025-10-08 09:49:35.143391549 +0000 UTC m=+0.463879143 container died 49d2dbc852b0006646ebad04ea83d02d10db610dd795e040e3cde60964bd81ab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_elion, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Oct  8 05:49:35 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:49:35 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:49:35 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:49:35.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:49:35 np0005475493 systemd[1]: var-lib-containers-storage-overlay-63f0e0fe779506391a5bf4882eeb228fa413ae6eb8d0177ac901fa9b827aad74-merged.mount: Deactivated successfully.
Oct  8 05:49:35 np0005475493 python3.9[111205]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  8 05:49:35 np0005475493 podman[111052]: 2025-10-08 09:49:35.230616756 +0000 UTC m=+0.551104300 container remove 49d2dbc852b0006646ebad04ea83d02d10db610dd795e040e3cde60964bd81ab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_elion, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True)
Oct  8 05:49:35 np0005475493 systemd[1]: libpod-conmon-49d2dbc852b0006646ebad04ea83d02d10db610dd795e040e3cde60964bd81ab.scope: Deactivated successfully.
Oct  8 05:49:35 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:49:35 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c003630 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:49:35 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:49:35 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6628004290 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:49:35 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v122: 353 pgs: 1 remapped+peering, 1 peering, 351 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s; 27 B/s, 0 objects/s recovering
Oct  8 05:49:35 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:09:49:35] "GET /metrics HTTP/1.1" 200 48250 "" "Prometheus/2.51.0"
Oct  8 05:49:35 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:09:49:35] "GET /metrics HTTP/1.1" 200 48250 "" "Prometheus/2.51.0"
Oct  8 05:49:35 np0005475493 podman[111430]: 2025-10-08 09:49:35.845170389 +0000 UTC m=+0.041408551 container create 41c46bae905d0794000789a2ff32bed9a191da350d6214fa805686e94b2803bc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_beaver, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  8 05:49:35 np0005475493 systemd[1]: Started libpod-conmon-41c46bae905d0794000789a2ff32bed9a191da350d6214fa805686e94b2803bc.scope.
Oct  8 05:49:35 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:49:35 np0005475493 podman[111430]: 2025-10-08 09:49:35.82869776 +0000 UTC m=+0.024935942 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 05:49:35 np0005475493 podman[111430]: 2025-10-08 09:49:35.930893096 +0000 UTC m=+0.127131318 container init 41c46bae905d0794000789a2ff32bed9a191da350d6214fa805686e94b2803bc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_beaver, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  8 05:49:35 np0005475493 podman[111430]: 2025-10-08 09:49:35.939201703 +0000 UTC m=+0.135439875 container start 41c46bae905d0794000789a2ff32bed9a191da350d6214fa805686e94b2803bc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_beaver, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  8 05:49:35 np0005475493 jolly_beaver[111483]: 167 167
Oct  8 05:49:35 np0005475493 systemd[1]: libpod-41c46bae905d0794000789a2ff32bed9a191da350d6214fa805686e94b2803bc.scope: Deactivated successfully.
Oct  8 05:49:35 np0005475493 podman[111430]: 2025-10-08 09:49:35.944551591 +0000 UTC m=+0.140789833 container attach 41c46bae905d0794000789a2ff32bed9a191da350d6214fa805686e94b2803bc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_beaver, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  8 05:49:35 np0005475493 podman[111430]: 2025-10-08 09:49:35.946165766 +0000 UTC m=+0.142403968 container died 41c46bae905d0794000789a2ff32bed9a191da350d6214fa805686e94b2803bc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_beaver, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  8 05:49:35 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e135 do_prune osdmap full prune enabled
Oct  8 05:49:35 np0005475493 systemd[1]: var-lib-containers-storage-overlay-aa69376e6181cd047839dade1d1e883cf1b9b3645683a2f959c140e6514ff7b8-merged.mount: Deactivated successfully.
Oct  8 05:49:35 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e136 e136: 3 total, 3 up, 3 in
Oct  8 05:49:35 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e136: 3 total, 3 up, 3 in
Oct  8 05:49:35 np0005475493 podman[111430]: 2025-10-08 09:49:35.989877562 +0000 UTC m=+0.186115724 container remove 41c46bae905d0794000789a2ff32bed9a191da350d6214fa805686e94b2803bc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_beaver, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Oct  8 05:49:35 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 136 pg[9.1b( v 45'1018 (0'0,45'1018] local-lis/les=135/136 n=2 ec=54/38 lis/c=133/65 les/c/f=134/66/0 sis=135) [1] r=0 lpr=135 pi=[65,135)/1 crt=45'1018 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:49:36 np0005475493 systemd[1]: libpod-conmon-41c46bae905d0794000789a2ff32bed9a191da350d6214fa805686e94b2803bc.scope: Deactivated successfully.
Oct  8 05:49:36 np0005475493 python3.9[111482]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Oct  8 05:49:36 np0005475493 podman[111506]: 2025-10-08 09:49:36.140302676 +0000 UTC m=+0.038892668 container create 0d304d740120f68b084273dd917ac36c3a54b575d9b8f56b73226d7a31483ecd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_wing, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS)
Oct  8 05:49:36 np0005475493 systemd[1]: Started libpod-conmon-0d304d740120f68b084273dd917ac36c3a54b575d9b8f56b73226d7a31483ecd.scope.
Oct  8 05:49:36 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:49:36 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7cc2dbda6f0ea58cac8d99a3ceb253a319d41270f3976bf327c1792cac528a64/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  8 05:49:36 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7cc2dbda6f0ea58cac8d99a3ceb253a319d41270f3976bf327c1792cac528a64/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 05:49:36 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7cc2dbda6f0ea58cac8d99a3ceb253a319d41270f3976bf327c1792cac528a64/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 05:49:36 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7cc2dbda6f0ea58cac8d99a3ceb253a319d41270f3976bf327c1792cac528a64/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  8 05:49:36 np0005475493 podman[111506]: 2025-10-08 09:49:36.21393929 +0000 UTC m=+0.112529312 container init 0d304d740120f68b084273dd917ac36c3a54b575d9b8f56b73226d7a31483ecd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_wing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  8 05:49:36 np0005475493 podman[111506]: 2025-10-08 09:49:36.12512543 +0000 UTC m=+0.023715452 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 05:49:36 np0005475493 podman[111506]: 2025-10-08 09:49:36.224845094 +0000 UTC m=+0.123435096 container start 0d304d740120f68b084273dd917ac36c3a54b575d9b8f56b73226d7a31483ecd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_wing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  8 05:49:36 np0005475493 podman[111506]: 2025-10-08 09:49:36.22832953 +0000 UTC m=+0.126919542 container attach 0d304d740120f68b084273dd917ac36c3a54b575d9b8f56b73226d7a31483ecd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_wing, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  8 05:49:36 np0005475493 confident_wing[111547]: {
Oct  8 05:49:36 np0005475493 confident_wing[111547]:    "1": [
Oct  8 05:49:36 np0005475493 confident_wing[111547]:        {
Oct  8 05:49:36 np0005475493 confident_wing[111547]:            "devices": [
Oct  8 05:49:36 np0005475493 confident_wing[111547]:                "/dev/loop3"
Oct  8 05:49:36 np0005475493 confident_wing[111547]:            ],
Oct  8 05:49:36 np0005475493 confident_wing[111547]:            "lv_name": "ceph_lv0",
Oct  8 05:49:36 np0005475493 confident_wing[111547]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  8 05:49:36 np0005475493 confident_wing[111547]:            "lv_size": "21470642176",
Oct  8 05:49:36 np0005475493 confident_wing[111547]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=787292cc-8154-50c4-9e00-e9be3e817149,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=85fe3e7b-5e0f-4a19-934c-310215b2e933,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct  8 05:49:36 np0005475493 confident_wing[111547]:            "lv_uuid": "znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj",
Oct  8 05:49:36 np0005475493 confident_wing[111547]:            "name": "ceph_lv0",
Oct  8 05:49:36 np0005475493 confident_wing[111547]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  8 05:49:36 np0005475493 confident_wing[111547]:            "tags": {
Oct  8 05:49:36 np0005475493 confident_wing[111547]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  8 05:49:36 np0005475493 confident_wing[111547]:                "ceph.block_uuid": "znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj",
Oct  8 05:49:36 np0005475493 confident_wing[111547]:                "ceph.cephx_lockbox_secret": "",
Oct  8 05:49:36 np0005475493 confident_wing[111547]:                "ceph.cluster_fsid": "787292cc-8154-50c4-9e00-e9be3e817149",
Oct  8 05:49:36 np0005475493 confident_wing[111547]:                "ceph.cluster_name": "ceph",
Oct  8 05:49:36 np0005475493 confident_wing[111547]:                "ceph.crush_device_class": "",
Oct  8 05:49:36 np0005475493 confident_wing[111547]:                "ceph.encrypted": "0",
Oct  8 05:49:36 np0005475493 confident_wing[111547]:                "ceph.osd_fsid": "85fe3e7b-5e0f-4a19-934c-310215b2e933",
Oct  8 05:49:36 np0005475493 confident_wing[111547]:                "ceph.osd_id": "1",
Oct  8 05:49:36 np0005475493 confident_wing[111547]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  8 05:49:36 np0005475493 confident_wing[111547]:                "ceph.type": "block",
Oct  8 05:49:36 np0005475493 confident_wing[111547]:                "ceph.vdo": "0",
Oct  8 05:49:36 np0005475493 confident_wing[111547]:                "ceph.with_tpm": "0"
Oct  8 05:49:36 np0005475493 confident_wing[111547]:            },
Oct  8 05:49:36 np0005475493 confident_wing[111547]:            "type": "block",
Oct  8 05:49:36 np0005475493 confident_wing[111547]:            "vg_name": "ceph_vg0"
Oct  8 05:49:36 np0005475493 confident_wing[111547]:        }
Oct  8 05:49:36 np0005475493 confident_wing[111547]:    ]
Oct  8 05:49:36 np0005475493 confident_wing[111547]: }
Oct  8 05:49:36 np0005475493 systemd[1]: libpod-0d304d740120f68b084273dd917ac36c3a54b575d9b8f56b73226d7a31483ecd.scope: Deactivated successfully.
Oct  8 05:49:36 np0005475493 podman[111506]: 2025-10-08 09:49:36.527523452 +0000 UTC m=+0.426113454 container died 0d304d740120f68b084273dd917ac36c3a54b575d9b8f56b73226d7a31483ecd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_wing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  8 05:49:36 np0005475493 systemd[1]: var-lib-containers-storage-overlay-7cc2dbda6f0ea58cac8d99a3ceb253a319d41270f3976bf327c1792cac528a64-merged.mount: Deactivated successfully.
Oct  8 05:49:36 np0005475493 podman[111506]: 2025-10-08 09:49:36.60393733 +0000 UTC m=+0.502527332 container remove 0d304d740120f68b084273dd917ac36c3a54b575d9b8f56b73226d7a31483ecd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_wing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  8 05:49:36 np0005475493 systemd[1]: libpod-conmon-0d304d740120f68b084273dd917ac36c3a54b575d9b8f56b73226d7a31483ecd.scope: Deactivated successfully.
Oct  8 05:49:36 np0005475493 python3.9[111687]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  8 05:49:36 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:49:36.954Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct  8 05:49:36 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:49:36.955Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct  8 05:49:37 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:49:37 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6628004290 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:49:37 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:49:37 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 05:49:37 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:49:37.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 05:49:37 np0005475493 podman[111808]: 2025-10-08 09:49:37.169910104 +0000 UTC m=+0.037656667 container create 77a85fcb34c7ecc84d19ef259baddc7afb442c4a2c4a324ee4a0effc8be79d38 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_kare, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  8 05:49:37 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:49:37 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:49:37 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:49:37.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:49:37 np0005475493 systemd[1]: Started libpod-conmon-77a85fcb34c7ecc84d19ef259baddc7afb442c4a2c4a324ee4a0effc8be79d38.scope.
Oct  8 05:49:37 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:49:37 np0005475493 podman[111808]: 2025-10-08 09:49:37.241266401 +0000 UTC m=+0.109012984 container init 77a85fcb34c7ecc84d19ef259baddc7afb442c4a2c4a324ee4a0effc8be79d38 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_kare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct  8 05:49:37 np0005475493 podman[111808]: 2025-10-08 09:49:37.247960245 +0000 UTC m=+0.115706808 container start 77a85fcb34c7ecc84d19ef259baddc7afb442c4a2c4a324ee4a0effc8be79d38 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_kare, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid)
Oct  8 05:49:37 np0005475493 podman[111808]: 2025-10-08 09:49:37.155252655 +0000 UTC m=+0.022999228 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 05:49:37 np0005475493 podman[111808]: 2025-10-08 09:49:37.251576585 +0000 UTC m=+0.119323158 container attach 77a85fcb34c7ecc84d19ef259baddc7afb442c4a2c4a324ee4a0effc8be79d38 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_kare, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  8 05:49:37 np0005475493 elated_kare[111825]: 167 167
Oct  8 05:49:37 np0005475493 systemd[1]: libpod-77a85fcb34c7ecc84d19ef259baddc7afb442c4a2c4a324ee4a0effc8be79d38.scope: Deactivated successfully.
Oct  8 05:49:37 np0005475493 podman[111808]: 2025-10-08 09:49:37.253266812 +0000 UTC m=+0.121013375 container died 77a85fcb34c7ecc84d19ef259baddc7afb442c4a2c4a324ee4a0effc8be79d38 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_kare, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  8 05:49:37 np0005475493 systemd[1]: var-lib-containers-storage-overlay-57bee7339850195e612f0151499cea7e6813b7873531f7edf4bc47127e9b0cba-merged.mount: Deactivated successfully.
Oct  8 05:49:37 np0005475493 podman[111808]: 2025-10-08 09:49:37.287891026 +0000 UTC m=+0.155637589 container remove 77a85fcb34c7ecc84d19ef259baddc7afb442c4a2c4a324ee4a0effc8be79d38 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_kare, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  8 05:49:37 np0005475493 systemd[1]: libpod-conmon-77a85fcb34c7ecc84d19ef259baddc7afb442c4a2c4a324ee4a0effc8be79d38.scope: Deactivated successfully.
Oct  8 05:49:37 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:49:37 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6628004290 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:49:37 np0005475493 podman[111900]: 2025-10-08 09:49:37.436019073 +0000 UTC m=+0.037188581 container create ac4635ca5ce2fb4b9e08cbe618cf2916decf7c755de775b95e5201739f8326a3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_haibt, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct  8 05:49:37 np0005475493 systemd[1]: Started libpod-conmon-ac4635ca5ce2fb4b9e08cbe618cf2916decf7c755de775b95e5201739f8326a3.scope.
Oct  8 05:49:37 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:49:37 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c264be6aebdf102299af0ab2ff4d6294255853f11ad1fa3752a465b2b56cc7d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  8 05:49:37 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c264be6aebdf102299af0ab2ff4d6294255853f11ad1fa3752a465b2b56cc7d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 05:49:37 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c264be6aebdf102299af0ab2ff4d6294255853f11ad1fa3752a465b2b56cc7d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 05:49:37 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c264be6aebdf102299af0ab2ff4d6294255853f11ad1fa3752a465b2b56cc7d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  8 05:49:37 np0005475493 podman[111900]: 2025-10-08 09:49:37.504503005 +0000 UTC m=+0.105672533 container init ac4635ca5ce2fb4b9e08cbe618cf2916decf7c755de775b95e5201739f8326a3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_haibt, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  8 05:49:37 np0005475493 podman[111900]: 2025-10-08 09:49:37.512209982 +0000 UTC m=+0.113379490 container start ac4635ca5ce2fb4b9e08cbe618cf2916decf7c755de775b95e5201739f8326a3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_haibt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True)
Oct  8 05:49:37 np0005475493 podman[111900]: 2025-10-08 09:49:37.51514764 +0000 UTC m=+0.116317178 container attach ac4635ca5ce2fb4b9e08cbe618cf2916decf7c755de775b95e5201739f8326a3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_haibt, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  8 05:49:37 np0005475493 podman[111900]: 2025-10-08 09:49:37.420483615 +0000 UTC m=+0.021653143 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 05:49:37 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:49:37 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6614001ef0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:49:37 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v124: 353 pgs: 1 remapped+peering, 1 peering, 351 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 215 B/s rd, 0 op/s; 23 B/s, 0 objects/s recovering
Oct  8 05:49:38 np0005475493 lvm[112068]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct  8 05:49:38 np0005475493 lvm[112068]: VG ceph_vg0 finished
Oct  8 05:49:38 np0005475493 python3.9[112037]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  8 05:49:38 np0005475493 frosty_haibt[111917]: {}
Oct  8 05:49:38 np0005475493 systemd[1]: libpod-ac4635ca5ce2fb4b9e08cbe618cf2916decf7c755de775b95e5201739f8326a3.scope: Deactivated successfully.
Oct  8 05:49:38 np0005475493 podman[111900]: 2025-10-08 09:49:38.216579179 +0000 UTC m=+0.817748697 container died ac4635ca5ce2fb4b9e08cbe618cf2916decf7c755de775b95e5201739f8326a3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_haibt, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Oct  8 05:49:38 np0005475493 systemd[1]: libpod-ac4635ca5ce2fb4b9e08cbe618cf2916decf7c755de775b95e5201739f8326a3.scope: Consumed 1.044s CPU time.
Oct  8 05:49:38 np0005475493 systemd[1]: Stopping Dynamic System Tuning Daemon...
Oct  8 05:49:38 np0005475493 systemd[1]: var-lib-containers-storage-overlay-6c264be6aebdf102299af0ab2ff4d6294255853f11ad1fa3752a465b2b56cc7d-merged.mount: Deactivated successfully.
Oct  8 05:49:38 np0005475493 podman[111900]: 2025-10-08 09:49:38.282573959 +0000 UTC m=+0.883743467 container remove ac4635ca5ce2fb4b9e08cbe618cf2916decf7c755de775b95e5201739f8326a3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_haibt, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  8 05:49:38 np0005475493 systemd[1]: libpod-conmon-ac4635ca5ce2fb4b9e08cbe618cf2916decf7c755de775b95e5201739f8326a3.scope: Deactivated successfully.
Oct  8 05:49:38 np0005475493 systemd[1]: tuned.service: Deactivated successfully.
Oct  8 05:49:38 np0005475493 systemd[1]: Stopped Dynamic System Tuning Daemon.
Oct  8 05:49:38 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  8 05:49:38 np0005475493 systemd[1]: Starting Dynamic System Tuning Daemon...
Oct  8 05:49:38 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:49:38 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  8 05:49:38 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:49:38 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 05:49:38 np0005475493 systemd[1]: Started Dynamic System Tuning Daemon.
Oct  8 05:49:39 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:49:39 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:49:39 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:49:39 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f8003860 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:49:39 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:49:39 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000034s ======
Oct  8 05:49:39 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:49:39.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct  8 05:49:39 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:49:39 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:49:39 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:49:39.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:49:39 np0005475493 python3.9[112267]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Oct  8 05:49:39 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:49:39 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f8003860 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:49:39 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:49:39 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f8003860 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:49:39 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v125: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 170 B/s wr, 1 op/s; 18 B/s, 0 objects/s recovering
Oct  8 05:49:39 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"} v 0)
Oct  8 05:49:39 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Oct  8 05:49:40 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e136 do_prune osdmap full prune enabled
Oct  8 05:49:40 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Oct  8 05:49:40 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Oct  8 05:49:40 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e137 e137: 3 total, 3 up, 3 in
Oct  8 05:49:40 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e137: 3 total, 3 up, 3 in
Oct  8 05:49:40 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[103738]: logger=infra.usagestats t=2025-10-08T09:49:40.442675945Z level=info msg="Usage stats are ready to report"
Oct  8 05:49:40 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:49:40 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  8 05:49:41 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:49:41 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f66280042b0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:49:41 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Oct  8 05:49:41 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:49:41 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:49:41 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:49:41.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:49:41 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:49:41 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:49:41 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:49:41.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:49:41 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[96168]: 08/10/2025 09:49:41 : epoch 68e63303 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c003630 fd 49 proxy ignored for local
Oct  8 05:49:41 np0005475493 kernel: ganesha.nfsd[107069]: segfault at 50 ip 00007f66db14e32e sp 00007f669cff8210 error 4 in libntirpc.so.5.8[7f66db133000+2c000] likely on CPU 6 (core 0, socket 6)
Oct  8 05:49:41 np0005475493 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Oct  8 05:49:41 np0005475493 systemd[1]: Created slice Slice /system/systemd-coredump.
Oct  8 05:49:41 np0005475493 systemd[1]: Started Process Core Dump (PID 112320/UID 0).
Oct  8 05:49:41 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v127: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 916 B/s rd, 152 B/s wr, 1 op/s; 16 B/s, 0 objects/s recovering
Oct  8 05:49:41 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"} v 0)
Oct  8 05:49:41 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Oct  8 05:49:42 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e137 do_prune osdmap full prune enabled
Oct  8 05:49:42 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Oct  8 05:49:42 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Oct  8 05:49:42 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e138 e138: 3 total, 3 up, 3 in
Oct  8 05:49:42 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e138: 3 total, 3 up, 3 in
Oct  8 05:49:42 np0005475493 systemd-coredump[112321]: Process 96172 (ganesha.nfsd) of user 0 dumped core.#012#012Stack trace of thread 64:#012#0  0x00007f66db14e32e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)#012ELF object binary architecture: AMD x86-64
Oct  8 05:49:42 np0005475493 python3.9[112450]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  8 05:49:42 np0005475493 systemd[1]: systemd-coredump@0-112320-0.service: Deactivated successfully.
Oct  8 05:49:42 np0005475493 systemd[1]: systemd-coredump@0-112320-0.service: Consumed 1.171s CPU time.
Oct  8 05:49:42 np0005475493 podman[112456]: 2025-10-08 09:49:42.649971495 +0000 UTC m=+0.034335196 container died c5f79ead609d5155b918e60a1470581982785e4aac0e1c185a19093b2bdf84dc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Oct  8 05:49:42 np0005475493 systemd[1]: var-lib-containers-storage-overlay-f542fbc76345914e50b0a692320404ddade2bba14cf57cdbb4a6cefc867b9d7e-merged.mount: Deactivated successfully.
Oct  8 05:49:42 np0005475493 podman[112456]: 2025-10-08 09:49:42.759439753 +0000 UTC m=+0.143803404 container remove c5f79ead609d5155b918e60a1470581982785e4aac0e1c185a19093b2bdf84dc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct  8 05:49:42 np0005475493 systemd[1]: ceph-787292cc-8154-50c4-9e00-e9be3e817149@nfs.cephfs.2.0.compute-0.uynkmx.service: Main process exited, code=exited, status=139/n/a
Oct  8 05:49:42 np0005475493 systemd[1]: ceph-787292cc-8154-50c4-9e00-e9be3e817149@nfs.cephfs.2.0.compute-0.uynkmx.service: Failed with result 'exit-code'.
Oct  8 05:49:42 np0005475493 systemd[1]: ceph-787292cc-8154-50c4-9e00-e9be3e817149@nfs.cephfs.2.0.compute-0.uynkmx.service: Consumed 1.770s CPU time.
Oct  8 05:49:43 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:49:43 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:49:43 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:49:43.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:49:43 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Oct  8 05:49:43 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:49:43 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:49:43 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:49:43.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:49:43 np0005475493 python3.9[112652]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  8 05:49:43 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 05:49:43 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v129: 353 pgs: 1 unknown, 352 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 933 B/s wr, 2 op/s; 14 B/s, 0 objects/s recovering
Oct  8 05:49:44 np0005475493 systemd-logind[798]: Session 40 logged out. Waiting for processes to exit.
Oct  8 05:49:44 np0005475493 systemd[1]: session-40.scope: Deactivated successfully.
Oct  8 05:49:44 np0005475493 systemd[1]: session-40.scope: Consumed 1min 3.340s CPU time.
Oct  8 05:49:44 np0005475493 systemd-logind[798]: Removed session 40.
Oct  8 05:49:44 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e138 do_prune osdmap full prune enabled
Oct  8 05:49:44 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e139 e139: 3 total, 3 up, 3 in
Oct  8 05:49:44 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e139: 3 total, 3 up, 3 in
Oct  8 05:49:45 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:49:45 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000034s ======
Oct  8 05:49:45 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:49:45.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct  8 05:49:45 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:49:45 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:49:45 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:49:45.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:49:45 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e139 do_prune osdmap full prune enabled
Oct  8 05:49:45 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e140 e140: 3 total, 3 up, 3 in
Oct  8 05:49:45 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e140: 3 total, 3 up, 3 in
Oct  8 05:49:45 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v132: 353 pgs: 1 unknown, 352 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 1.1 KiB/s wr, 2 op/s
Oct  8 05:49:45 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:09:49:45] "GET /metrics HTTP/1.1" 200 48250 "" "Prometheus/2.51.0"
Oct  8 05:49:45 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:09:49:45] "GET /metrics HTTP/1.1" 200 48250 "" "Prometheus/2.51.0"
Oct  8 05:49:46 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e140 do_prune osdmap full prune enabled
Oct  8 05:49:46 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e141 e141: 3 total, 3 up, 3 in
Oct  8 05:49:46 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e141: 3 total, 3 up, 3 in
Oct  8 05:49:46 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:49:46.956Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct  8 05:49:46 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:49:46.956Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct  8 05:49:47 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:49:47 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000034s ======
Oct  8 05:49:47 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:49:47.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct  8 05:49:47 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:49:47 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:49:47 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:49:47.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:49:47 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e141 do_prune osdmap full prune enabled
Oct  8 05:49:47 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e142 e142: 3 total, 3 up, 3 in
Oct  8 05:49:47 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e142: 3 total, 3 up, 3 in
Oct  8 05:49:47 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-haproxy-nfs-cephfs-compute-0-cwhopp[96588]: [WARNING] 280/094947 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct  8 05:49:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] Optimize plan auto_2025-10-08_09:49:47
Oct  8 05:49:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  8 05:49:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] Some PGs (0.002833) are unknown; try again later
Oct  8 05:49:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] _maybe_adjust
Oct  8 05:49:47 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v135: 353 pgs: 1 unknown, 352 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  8 05:49:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 05:49:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  8 05:49:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 05:49:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 05:49:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 05:49:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 05:49:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 05:49:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 05:49:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 05:49:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 05:49:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 05:49:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  8 05:49:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 05:49:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 05:49:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 05:49:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct  8 05:49:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 05:49:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  8 05:49:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 05:49:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 05:49:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 05:49:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  8 05:49:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 05:49:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  8 05:49:47 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  8 05:49:47 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  8 05:49:47 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 05:49:47 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 05:49:47 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  8 05:49:47 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  8 05:49:47 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  8 05:49:47 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  8 05:49:47 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  8 05:49:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  8 05:49:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  8 05:49:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  8 05:49:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  8 05:49:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  8 05:49:48 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 05:49:48 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 05:49:48 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 05:49:48 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 05:49:48 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 05:49:49 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:49:49 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 05:49:49 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:49:49.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 05:49:49 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:49:49 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000034s ======
Oct  8 05:49:49 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:49:49.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct  8 05:49:49 np0005475493 systemd-logind[798]: New session 41 of user zuul.
Oct  8 05:49:49 np0005475493 systemd[1]: Started Session 41 of User zuul.
Oct  8 05:49:49 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v136: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 758 B/s wr, 2 op/s; 40 B/s, 0 objects/s recovering
Oct  8 05:49:49 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"} v 0)
Oct  8 05:49:49 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]: dispatch
Oct  8 05:49:49 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e142 do_prune osdmap full prune enabled
Oct  8 05:49:49 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Oct  8 05:49:49 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e143 e143: 3 total, 3 up, 3 in
Oct  8 05:49:49 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]: dispatch
Oct  8 05:49:49 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e143: 3 total, 3 up, 3 in
Oct  8 05:49:50 np0005475493 python3.9[112839]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  8 05:49:50 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Oct  8 05:49:51 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:49:51 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:49:51 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:49:51.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:49:51 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:49:51 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:49:51 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:49:51.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:49:51 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 143 pg[9.1e( empty local-lis/les=0/0 n=0 ec=54/38 lis/c=74/74 les/c/f=75/75/0 sis=143) [1] r=0 lpr=143 pi=[74,143)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:49:51 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v138: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 682 B/s wr, 2 op/s; 36 B/s, 0 objects/s recovering
Oct  8 05:49:51 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"} v 0)
Oct  8 05:49:51 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct  8 05:49:51 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e143 do_prune osdmap full prune enabled
Oct  8 05:49:52 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-haproxy-nfs-cephfs-compute-0-cwhopp[96588]: [WARNING] 280/094952 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct  8 05:49:52 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct  8 05:49:52 np0005475493 python3.9[112997]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Oct  8 05:49:52 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Oct  8 05:49:52 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e144 e144: 3 total, 3 up, 3 in
Oct  8 05:49:52 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e144: 3 total, 3 up, 3 in
Oct  8 05:49:52 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 144 pg[9.1e( empty local-lis/les=0/0 n=0 ec=54/38 lis/c=74/74 les/c/f=75/75/0 sis=144) [1]/[0] r=-1 lpr=144 pi=[74,144)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:49:52 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 144 pg[9.1e( empty local-lis/les=0/0 n=0 ec=54/38 lis/c=74/74 les/c/f=75/75/0 sis=144) [1]/[0] r=-1 lpr=144 pi=[74,144)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  8 05:49:52 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 144 pg[9.1f( empty local-lis/les=0/0 n=0 ec=54/38 lis/c=98/98 les/c/f=99/99/0 sis=144) [1] r=0 lpr=144 pi=[98,144)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:49:53 np0005475493 systemd[1]: ceph-787292cc-8154-50c4-9e00-e9be3e817149@nfs.cephfs.2.0.compute-0.uynkmx.service: Scheduled restart job, restart counter is at 1.
Oct  8 05:49:53 np0005475493 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.uynkmx for 787292cc-8154-50c4-9e00-e9be3e817149.
Oct  8 05:49:53 np0005475493 systemd[1]: ceph-787292cc-8154-50c4-9e00-e9be3e817149@nfs.cephfs.2.0.compute-0.uynkmx.service: Consumed 1.770s CPU time.
Oct  8 05:49:53 np0005475493 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.uynkmx for 787292cc-8154-50c4-9e00-e9be3e817149...
Oct  8 05:49:53 np0005475493 python3.9[113150]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  8 05:49:53 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Oct  8 05:49:53 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:49:53 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000034s ======
Oct  8 05:49:53 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:49:53.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct  8 05:49:53 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:49:53 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 05:49:53 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:49:53.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 05:49:53 np0005475493 podman[113209]: 2025-10-08 09:49:53.279267221 +0000 UTC m=+0.039430764 container create beaf974db496741c669ad81c891163d1307f5a165761c4e5b55f8bdeca674ecc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Oct  8 05:49:53 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e144 do_prune osdmap full prune enabled
Oct  8 05:49:53 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e145 e145: 3 total, 3 up, 3 in
Oct  8 05:49:53 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/583cf0aaac57cbf31d8d3c04dca1e57b2ad26a16a92039925dd8c3b62b820860/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Oct  8 05:49:53 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/583cf0aaac57cbf31d8d3c04dca1e57b2ad26a16a92039925dd8c3b62b820860/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 05:49:53 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/583cf0aaac57cbf31d8d3c04dca1e57b2ad26a16a92039925dd8c3b62b820860/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Oct  8 05:49:53 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 145 pg[9.1f( empty local-lis/les=0/0 n=0 ec=54/38 lis/c=98/98 les/c/f=99/99/0 sis=145) [1]/[0] r=-1 lpr=145 pi=[98,145)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:49:53 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 145 pg[9.1f( empty local-lis/les=0/0 n=0 ec=54/38 lis/c=98/98 les/c/f=99/99/0 sis=145) [1]/[0] r=-1 lpr=145 pi=[98,145)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  8 05:49:53 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e145: 3 total, 3 up, 3 in
Oct  8 05:49:53 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/583cf0aaac57cbf31d8d3c04dca1e57b2ad26a16a92039925dd8c3b62b820860/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.uynkmx-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Oct  8 05:49:53 np0005475493 podman[113209]: 2025-10-08 09:49:53.347666742 +0000 UTC m=+0.107830305 container init beaf974db496741c669ad81c891163d1307f5a165761c4e5b55f8bdeca674ecc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct  8 05:49:53 np0005475493 podman[113209]: 2025-10-08 09:49:53.260814937 +0000 UTC m=+0.020978490 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 05:49:53 np0005475493 podman[113209]: 2025-10-08 09:49:53.356087722 +0000 UTC m=+0.116251245 container start beaf974db496741c669ad81c891163d1307f5a165761c4e5b55f8bdeca674ecc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True)
Oct  8 05:49:53 np0005475493 bash[113209]: beaf974db496741c669ad81c891163d1307f5a165761c4e5b55f8bdeca674ecc
Oct  8 05:49:53 np0005475493 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.uynkmx for 787292cc-8154-50c4-9e00-e9be3e817149.
Oct  8 05:49:53 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[113224]: 08/10/2025 09:49:53 : epoch 68e633c1 : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Oct  8 05:49:53 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[113224]: 08/10/2025 09:49:53 : epoch 68e633c1 : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Oct  8 05:49:53 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[113224]: 08/10/2025 09:49:53 : epoch 68e633c1 : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Oct  8 05:49:53 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[113224]: 08/10/2025 09:49:53 : epoch 68e633c1 : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Oct  8 05:49:53 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[113224]: 08/10/2025 09:49:53 : epoch 68e633c1 : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Oct  8 05:49:53 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[113224]: 08/10/2025 09:49:53 : epoch 68e633c1 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Oct  8 05:49:53 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[113224]: 08/10/2025 09:49:53 : epoch 68e633c1 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Oct  8 05:49:53 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[113224]: 08/10/2025 09:49:53 : epoch 68e633c1 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  8 05:49:53 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 05:49:53 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e145 do_prune osdmap full prune enabled
Oct  8 05:49:53 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e146 e146: 3 total, 3 up, 3 in
Oct  8 05:49:53 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e146: 3 total, 3 up, 3 in
Oct  8 05:49:53 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 146 pg[9.1e( v 45'1018 (0'0,45'1018] local-lis/les=0/0 n=5 ec=54/38 lis/c=144/74 les/c/f=145/75/0 sis=146) [1] r=0 lpr=146 pi=[74,146)/1 luod=0'0 crt=45'1018 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:49:53 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 146 pg[9.1e( v 45'1018 (0'0,45'1018] local-lis/les=0/0 n=5 ec=54/38 lis/c=144/74 les/c/f=145/75/0 sis=146) [1] r=0 lpr=146 pi=[74,146)/1 crt=45'1018 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:49:53 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v142: 353 pgs: 1 active+remapped, 1 peering, 351 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 255 B/s wr, 2 op/s; 27 B/s, 2 objects/s recovering
Oct  8 05:49:54 np0005475493 python3.9[113342]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Oct  8 05:49:54 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e146 do_prune osdmap full prune enabled
Oct  8 05:49:54 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e147 e147: 3 total, 3 up, 3 in
Oct  8 05:49:54 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e147: 3 total, 3 up, 3 in
Oct  8 05:49:54 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 147 pg[9.1f( v 45'1018 (0'0,45'1018] local-lis/les=0/0 n=5 ec=54/38 lis/c=145/98 les/c/f=146/99/0 sis=147) [1] r=0 lpr=147 pi=[98,147)/1 luod=0'0 crt=45'1018 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  8 05:49:54 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 147 pg[9.1f( v 45'1018 (0'0,45'1018] local-lis/les=0/0 n=5 ec=54/38 lis/c=145/98 les/c/f=146/99/0 sis=147) [1] r=0 lpr=147 pi=[98,147)/1 crt=45'1018 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  8 05:49:54 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 147 pg[9.1e( v 45'1018 (0'0,45'1018] local-lis/les=146/147 n=5 ec=54/38 lis/c=144/74 les/c/f=145/75/0 sis=146) [1] r=0 lpr=146 pi=[74,146)/1 crt=45'1018 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:49:55 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:49:55 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:49:55 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:49:55.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:49:55 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:49:55 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000034s ======
Oct  8 05:49:55 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:49:55.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct  8 05:49:55 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e147 do_prune osdmap full prune enabled
Oct  8 05:49:55 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 e148: 3 total, 3 up, 3 in
Oct  8 05:49:55 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e148: 3 total, 3 up, 3 in
Oct  8 05:49:55 np0005475493 ceph-osd[81751]: osd.1 pg_epoch: 148 pg[9.1f( v 45'1018 (0'0,45'1018] local-lis/les=147/148 n=5 ec=54/38 lis/c=145/98 les/c/f=146/99/0 sis=147) [1] r=0 lpr=147 pi=[98,147)/1 crt=45'1018 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  8 05:49:55 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v145: 353 pgs: 1 active+remapped, 1 peering, 351 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 305 B/s wr, 2 op/s; 32 B/s, 2 objects/s recovering
Oct  8 05:49:55 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:09:49:55] "GET /metrics HTTP/1.1" 200 48248 "" "Prometheus/2.51.0"
Oct  8 05:49:55 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:09:49:55] "GET /metrics HTTP/1.1" 200 48248 "" "Prometheus/2.51.0"
Oct  8 05:49:56 np0005475493 python3.9[113498]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  8 05:49:56 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:49:56.957Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 05:49:57 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:49:57 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:49:57 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:49:57.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:49:57 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:49:57 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:49:57 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:49:57.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:49:57 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v146: 353 pgs: 1 active+remapped, 1 peering, 351 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 235 B/s wr, 2 op/s; 25 B/s, 2 objects/s recovering
Oct  8 05:49:58 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 05:49:58 np0005475493 python3.9[113653]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct  8 05:49:59 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:49:59 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000034s ======
Oct  8 05:49:59 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:49:59.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct  8 05:49:59 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:49:59 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:49:59 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:49:59.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:49:59 np0005475493 python3.9[113807]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  8 05:49:59 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[113224]: 08/10/2025 09:49:59 : epoch 68e633c1 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  8 05:49:59 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[113224]: 08/10/2025 09:49:59 : epoch 68e633c1 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 05:49:59 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[113224]: 08/10/2025 09:49:59 : epoch 68e633c1 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  8 05:49:59 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v147: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 822 B/s wr, 2 op/s; 17 B/s, 1 objects/s recovering
Oct  8 05:50:00 np0005475493 ceph-mon[73572]: log_channel(cluster) log [INF] : overall HEALTH_OK
Oct  8 05:50:00 np0005475493 python3.9[113960]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Oct  8 05:50:00 np0005475493 ceph-mon[73572]: overall HEALTH_OK
Oct  8 05:50:01 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:50:01 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:50:01 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:50:01.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:50:01 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:50:01 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:50:01 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:50:01.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:50:01 np0005475493 python3.9[114136]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  8 05:50:01 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v148: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 639 B/s rd, 511 B/s wr, 1 op/s
Oct  8 05:50:02 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-haproxy-nfs-cephfs-compute-0-cwhopp[96588]: [WARNING] 280/095002 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct  8 05:50:02 np0005475493 python3.9[114295]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  8 05:50:02 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  8 05:50:02 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  8 05:50:03 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:50:03 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:50:03 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:50:03.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:50:03 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:50:03 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:50:03 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:50:03.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:50:03 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 05:50:03 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v149: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 444 B/s wr, 1 op/s
Oct  8 05:50:04 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[113224]: 08/10/2025 09:50:03 : epoch 68e633c1 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  8 05:50:04 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[113224]: 08/10/2025 09:50:03 : epoch 68e633c1 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  8 05:50:04 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[113224]: 08/10/2025 09:50:03 : epoch 68e633c1 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 05:50:04 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[113224]: 08/10/2025 09:50:04 : epoch 68e633c1 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  8 05:50:04 np0005475493 python3.9[114450]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  8 05:50:05 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:50:05 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:50:05 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:50:05.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:50:05 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:50:05 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:50:05 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:50:05.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:50:05 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v150: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 402 B/s wr, 1 op/s
Oct  8 05:50:05 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:09:50:05] "GET /metrics HTTP/1.1" 200 48255 "" "Prometheus/2.51.0"
Oct  8 05:50:05 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:09:50:05] "GET /metrics HTTP/1.1" 200 48255 "" "Prometheus/2.51.0"
Oct  8 05:50:06 np0005475493 python3.9[114739]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Oct  8 05:50:06 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:50:06.958Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 05:50:07 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:50:07 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 05:50:07 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:50:07.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 05:50:07 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:50:07 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:50:07 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:50:07.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:50:07 np0005475493 python3.9[114890]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  8 05:50:07 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v151: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 341 B/s wr, 1 op/s
Oct  8 05:50:08 np0005475493 python3.9[115045]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  8 05:50:08 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[113224]: 08/10/2025 09:50:08 : epoch 68e633c1 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  8 05:50:08 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[113224]: 08/10/2025 09:50:08 : epoch 68e633c1 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  8 05:50:08 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[113224]: 08/10/2025 09:50:08 : epoch 68e633c1 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 05:50:08 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 05:50:09 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:50:09 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:50:09 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:50:09.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:50:09 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:50:09 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:50:09 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:50:09.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:50:09 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v152: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 682 B/s wr, 2 op/s
Oct  8 05:50:10 np0005475493 python3.9[115200]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  8 05:50:11 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:50:11 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:50:11 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:50:11.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:50:11 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:50:11 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:50:11 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:50:11.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:50:11 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v153: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 341 B/s wr, 2 op/s
Oct  8 05:50:12 np0005475493 python3.9[115355]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  8 05:50:13 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:50:13 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:50:13 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:50:13.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:50:13 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:50:13 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:50:13 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:50:13.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:50:13 np0005475493 python3.9[115510]: ansible-ansible.builtin.slurp Invoked with path=/var/lib/edpm-config/os-net-config.returncode src=/var/lib/edpm-config/os-net-config.returncode
Oct  8 05:50:13 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 05:50:13 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v154: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 597 B/s wr, 2 op/s
Oct  8 05:50:14 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[113224]: 08/10/2025 09:50:14 : epoch 68e633c1 : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct  8 05:50:14 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[113224]: 08/10/2025 09:50:14 : epoch 68e633c1 : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Oct  8 05:50:14 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[113224]: 08/10/2025 09:50:14 : epoch 68e633c1 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Oct  8 05:50:14 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[113224]: 08/10/2025 09:50:14 : epoch 68e633c1 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Oct  8 05:50:14 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[113224]: 08/10/2025 09:50:14 : epoch 68e633c1 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Oct  8 05:50:14 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[113224]: 08/10/2025 09:50:14 : epoch 68e633c1 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Oct  8 05:50:14 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[113224]: 08/10/2025 09:50:14 : epoch 68e633c1 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Oct  8 05:50:14 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[113224]: 08/10/2025 09:50:14 : epoch 68e633c1 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct  8 05:50:14 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[113224]: 08/10/2025 09:50:14 : epoch 68e633c1 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct  8 05:50:14 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[113224]: 08/10/2025 09:50:14 : epoch 68e633c1 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct  8 05:50:14 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[113224]: 08/10/2025 09:50:14 : epoch 68e633c1 : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Oct  8 05:50:14 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[113224]: 08/10/2025 09:50:14 : epoch 68e633c1 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct  8 05:50:14 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[113224]: 08/10/2025 09:50:14 : epoch 68e633c1 : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Oct  8 05:50:14 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[113224]: 08/10/2025 09:50:14 : epoch 68e633c1 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Oct  8 05:50:14 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[113224]: 08/10/2025 09:50:14 : epoch 68e633c1 : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Oct  8 05:50:14 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[113224]: 08/10/2025 09:50:14 : epoch 68e633c1 : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Oct  8 05:50:14 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[113224]: 08/10/2025 09:50:14 : epoch 68e633c1 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Oct  8 05:50:14 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[113224]: 08/10/2025 09:50:14 : epoch 68e633c1 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Oct  8 05:50:14 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[113224]: 08/10/2025 09:50:14 : epoch 68e633c1 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Oct  8 05:50:14 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[113224]: 08/10/2025 09:50:14 : epoch 68e633c1 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Oct  8 05:50:14 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[113224]: 08/10/2025 09:50:14 : epoch 68e633c1 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Oct  8 05:50:14 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[113224]: 08/10/2025 09:50:14 : epoch 68e633c1 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Oct  8 05:50:14 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[113224]: 08/10/2025 09:50:14 : epoch 68e633c1 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Oct  8 05:50:14 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[113224]: 08/10/2025 09:50:14 : epoch 68e633c1 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Oct  8 05:50:14 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[113224]: 08/10/2025 09:50:14 : epoch 68e633c1 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Oct  8 05:50:14 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[113224]: 08/10/2025 09:50:14 : epoch 68e633c1 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Oct  8 05:50:14 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[113224]: 08/10/2025 09:50:14 : epoch 68e633c1 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Oct  8 05:50:14 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[113224]: 08/10/2025 09:50:14 : epoch 68e633c1 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  8 05:50:14 np0005475493 systemd[1]: session-41.scope: Deactivated successfully.
Oct  8 05:50:14 np0005475493 systemd[1]: session-41.scope: Consumed 17.287s CPU time.
Oct  8 05:50:14 np0005475493 systemd-logind[798]: Session 41 logged out. Waiting for processes to exit.
Oct  8 05:50:14 np0005475493 systemd-logind[798]: Removed session 41.
Oct  8 05:50:15 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[113224]: 08/10/2025 09:50:15 : epoch 68e633c1 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f59d4000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:50:15 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:50:15 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 05:50:15 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:50:15.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 05:50:15 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:50:15 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:50:15 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:50:15.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:50:15 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[113224]: 08/10/2025 09:50:15 : epoch 68e633c1 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f59c8001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:50:15 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[113224]: 08/10/2025 09:50:15 : epoch 68e633c1 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f59b0000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:50:15 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v155: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 597 B/s wr, 2 op/s
Oct  8 05:50:15 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:09:50:15] "GET /metrics HTTP/1.1" 200 48255 "" "Prometheus/2.51.0"
Oct  8 05:50:15 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:09:50:15] "GET /metrics HTTP/1.1" 200 48255 "" "Prometheus/2.51.0"
Oct  8 05:50:16 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:50:16.958Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 05:50:17 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[113224]: 08/10/2025 09:50:17 : epoch 68e633c1 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f59ac000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:50:17 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:50:17 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 05:50:17 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:50:17.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 05:50:17 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:50:17 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:50:17 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:50:17.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:50:17 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-haproxy-nfs-cephfs-compute-0-cwhopp[96588]: [WARNING] 280/095017 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct  8 05:50:17 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[113224]: 08/10/2025 09:50:17 : epoch 68e633c1 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f59b8000fa0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:50:17 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[113224]: 08/10/2025 09:50:17 : epoch 68e633c1 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  8 05:50:17 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[113224]: 08/10/2025 09:50:17 : epoch 68e633c1 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 05:50:17 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[113224]: 08/10/2025 09:50:17 : epoch 68e633c1 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f59b00016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:50:17 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v156: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 597 B/s wr, 2 op/s
Oct  8 05:50:17 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  8 05:50:17 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  8 05:50:17 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 05:50:17 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 05:50:18 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 05:50:18 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 05:50:18 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 05:50:18 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 05:50:18 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 05:50:19 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[113224]: 08/10/2025 09:50:19 : epoch 68e633c1 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f59b8000fa0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:50:19 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:50:19 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 05:50:19 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:50:19.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 05:50:19 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:50:19 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:50:19 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:50:19.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:50:19 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[113224]: 08/10/2025 09:50:19 : epoch 68e633c1 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f59c8001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:50:19 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[113224]: 08/10/2025 09:50:19 : epoch 68e633c1 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f59ac0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:50:19 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v157: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 3.5 KiB/s rd, 1.3 KiB/s wr, 5 op/s
Oct  8 05:50:19 np0005475493 systemd-logind[798]: New session 42 of user zuul.
Oct  8 05:50:19 np0005475493 systemd[1]: Started Session 42 of User zuul.
Oct  8 05:50:20 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[113224]: 08/10/2025 09:50:20 : epoch 68e633c1 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct  8 05:50:20 np0005475493 python3.9[115710]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  8 05:50:21 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[113224]: 08/10/2025 09:50:21 : epoch 68e633c1 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f59ac0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:50:21 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:50:21 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:50:21 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:50:21.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:50:21 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:50:21 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:50:21 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:50:21.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:50:21 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[113224]: 08/10/2025 09:50:21 : epoch 68e633c1 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f59b8001f60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:50:21 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[113224]: 08/10/2025 09:50:21 : epoch 68e633c1 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f59c8001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:50:21 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v158: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Oct  8 05:50:21 np0005475493 python3.9[115890]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  8 05:50:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[113224]: 08/10/2025 09:50:23 : epoch 68e633c1 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f59ac0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:50:23 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:50:23 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 05:50:23 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:50:23.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 05:50:23 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:50:23 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:50:23 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:50:23.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:50:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[113224]: 08/10/2025 09:50:23 : epoch 68e633c1 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f59ac0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:50:23 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 05:50:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[113224]: 08/10/2025 09:50:23 : epoch 68e633c1 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f59b8001f60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:50:23 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v159: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Oct  8 05:50:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-haproxy-nfs-cephfs-compute-0-cwhopp[96588]: [WARNING] 280/095024 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 1ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct  8 05:50:24 np0005475493 python3.9[116086]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  8 05:50:24 np0005475493 systemd[1]: session-42.scope: Deactivated successfully.
Oct  8 05:50:24 np0005475493 systemd[1]: session-42.scope: Consumed 2.311s CPU time.
Oct  8 05:50:24 np0005475493 systemd-logind[798]: Session 42 logged out. Waiting for processes to exit.
Oct  8 05:50:24 np0005475493 systemd-logind[798]: Removed session 42.
Oct  8 05:50:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[113224]: 08/10/2025 09:50:25 : epoch 68e633c1 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f59c8001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:50:25 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:50:25 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:50:25 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:50:25.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:50:25 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:50:25 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 05:50:25 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:50:25.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 05:50:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[113224]: 08/10/2025 09:50:25 : epoch 68e633c1 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f59ac0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:50:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[113224]: 08/10/2025 09:50:25 : epoch 68e633c1 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f59b0002160 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:50:25 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v160: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 767 B/s wr, 3 op/s
Oct  8 05:50:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:09:50:25] "GET /metrics HTTP/1.1" 200 48263 "" "Prometheus/2.51.0"
Oct  8 05:50:25 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:09:50:25] "GET /metrics HTTP/1.1" 200 48263 "" "Prometheus/2.51.0"
Oct  8 05:50:26 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:50:26.960Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 05:50:27 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[113224]: 08/10/2025 09:50:27 : epoch 68e633c1 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f59b8001f60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:50:27 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:50:27 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 05:50:27 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:50:27.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 05:50:27 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:50:27 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:50:27 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:50:27.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:50:27 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[113224]: 08/10/2025 09:50:27 : epoch 68e633c1 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f59c8001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:50:27 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[113224]: 08/10/2025 09:50:27 : epoch 68e633c1 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f59ac0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:50:27 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v161: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 767 B/s wr, 3 op/s
Oct  8 05:50:28 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 05:50:29 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[113224]: 08/10/2025 09:50:29 : epoch 68e633c1 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f59b0002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:50:29 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:50:29 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 05:50:29 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:50:29.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 05:50:29 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:50:29 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 05:50:29 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:50:29.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 05:50:29 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[113224]: 08/10/2025 09:50:29 : epoch 68e633c1 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f59b8003340 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:50:29 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[113224]: 08/10/2025 09:50:29 : epoch 68e633c1 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f59c8001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:50:29 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v162: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 767 B/s wr, 3 op/s
Oct  8 05:50:29 np0005475493 systemd-logind[798]: New session 43 of user zuul.
Oct  8 05:50:29 np0005475493 systemd[1]: Started Session 43 of User zuul.
Oct  8 05:50:31 np0005475493 python3.9[116271]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  8 05:50:31 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[113224]: 08/10/2025 09:50:31 : epoch 68e633c1 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f59c8001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:50:31 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:50:31 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 05:50:31 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:50:31.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 05:50:31 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:50:31 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:50:31 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:50:31.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:50:31 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[113224]: 08/10/2025 09:50:31 : epoch 68e633c1 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f59b0002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:50:31 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[113224]: 08/10/2025 09:50:31 : epoch 68e633c1 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f59b8003340 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:50:31 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v163: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 B/s wr, 0 op/s
Oct  8 05:50:31 np0005475493 python3.9[116426]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  8 05:50:32 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  8 05:50:32 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  8 05:50:32 np0005475493 python3.9[116583]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  8 05:50:33 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[113224]: 08/10/2025 09:50:33 : epoch 68e633c1 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f59ac0036e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:50:33 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:50:33 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:50:33 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:50:33.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:50:33 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:50:33 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:50:33 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:50:33.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:50:33 np0005475493 kernel: ganesha.nfsd[115539]: segfault at 50 ip 00007f5a82ef932e sp 00007f5a4dffa210 error 4 in libntirpc.so.5.8[7f5a82ede000+2c000] likely on CPU 6 (core 0, socket 6)
Oct  8 05:50:33 np0005475493 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Oct  8 05:50:33 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[113224]: 08/10/2025 09:50:33 : epoch 68e633c1 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f59c8001c00 fd 39 proxy ignored for local
Oct  8 05:50:33 np0005475493 systemd[1]: Started Process Core Dump (PID 116624/UID 0).
Oct  8 05:50:33 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 05:50:33 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v164: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Oct  8 05:50:33 np0005475493 python3.9[116670]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  8 05:50:34 np0005475493 systemd-coredump[116640]: Process 113229 (ganesha.nfsd) of user 0 dumped core.#012#012Stack trace of thread 43:#012#0  0x00007f5a82ef932e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)#012ELF object binary architecture: AMD x86-64
Oct  8 05:50:34 np0005475493 systemd[1]: systemd-coredump@1-116624-0.service: Deactivated successfully.
Oct  8 05:50:34 np0005475493 systemd[1]: systemd-coredump@1-116624-0.service: Consumed 1.167s CPU time.
Oct  8 05:50:34 np0005475493 podman[116677]: 2025-10-08 09:50:34.66706905 +0000 UTC m=+0.031950634 container died beaf974db496741c669ad81c891163d1307f5a165761c4e5b55f8bdeca674ecc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  8 05:50:34 np0005475493 systemd[1]: var-lib-containers-storage-overlay-583cf0aaac57cbf31d8d3c04dca1e57b2ad26a16a92039925dd8c3b62b820860-merged.mount: Deactivated successfully.
Oct  8 05:50:34 np0005475493 podman[116677]: 2025-10-08 09:50:34.715115337 +0000 UTC m=+0.079996901 container remove beaf974db496741c669ad81c891163d1307f5a165761c4e5b55f8bdeca674ecc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct  8 05:50:34 np0005475493 systemd[1]: ceph-787292cc-8154-50c4-9e00-e9be3e817149@nfs.cephfs.2.0.compute-0.uynkmx.service: Main process exited, code=exited, status=139/n/a
Oct  8 05:50:34 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-haproxy-nfs-cephfs-compute-0-cwhopp[96588]: [WARNING] 280/095034 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct  8 05:50:34 np0005475493 systemd[1]: ceph-787292cc-8154-50c4-9e00-e9be3e817149@nfs.cephfs.2.0.compute-0.uynkmx.service: Failed with result 'exit-code'.
Oct  8 05:50:34 np0005475493 systemd[1]: ceph-787292cc-8154-50c4-9e00-e9be3e817149@nfs.cephfs.2.0.compute-0.uynkmx.service: Consumed 1.397s CPU time.
Oct  8 05:50:35 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:50:35 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:50:35 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:50:35.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:50:35 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:50:35 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:50:35 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:50:35.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:50:35 np0005475493 python3.9[116872]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  8 05:50:35 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v165: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct  8 05:50:35 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:09:50:35] "GET /metrics HTTP/1.1" 200 48257 "" "Prometheus/2.51.0"
Oct  8 05:50:35 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:09:50:35] "GET /metrics HTTP/1.1" 200 48257 "" "Prometheus/2.51.0"
Oct  8 05:50:36 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:50:36.960Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct  8 05:50:36 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:50:36.961Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct  8 05:50:36 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:50:36.961Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 05:50:37 np0005475493 python3.9[117068]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:50:37 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:50:37 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 05:50:37 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:50:37.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 05:50:37 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:50:37 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:50:37 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:50:37.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:50:37 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v166: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct  8 05:50:37 np0005475493 python3.9[117221]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  8 05:50:38 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 05:50:38 np0005475493 python3.9[117387]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  8 05:50:39 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:50:39 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:50:39 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:50:39.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:50:39 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:50:39 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:50:39 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:50:39.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:50:39 np0005475493 python3.9[117559]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/containers/networks/podman.json _original_basename=podman_network_config.j2 recurse=False state=file path=/etc/containers/networks/podman.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:50:39 np0005475493 podman[117587]: 2025-10-08 09:50:39.318366479 +0000 UTC m=+0.072962241 container exec 01c666addd8584f02eb7d29040d97e45d8cb7f834fd134029c1f7cca550aeb9d (image=quay.io/ceph/ceph:v19, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-mon-compute-0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  8 05:50:39 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-haproxy-nfs-cephfs-compute-0-cwhopp[96588]: [WARNING] 280/095039 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct  8 05:50:39 np0005475493 podman[117587]: 2025-10-08 09:50:39.436361349 +0000 UTC m=+0.190957091 container exec_died 01c666addd8584f02eb7d29040d97e45d8cb7f834fd134029c1f7cca550aeb9d (image=quay.io/ceph/ceph:v19, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True)
Oct  8 05:50:39 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v167: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct  8 05:50:39 np0005475493 python3.9[117842]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  8 05:50:39 np0005475493 podman[117877]: 2025-10-08 09:50:39.951221978 +0000 UTC m=+0.055172131 container exec 4eb6b712d09e2820826e6f961f0aba1bff09f13eee1664dbb03edca7615a708b (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  8 05:50:39 np0005475493 podman[117877]: 2025-10-08 09:50:39.961463923 +0000 UTC m=+0.065414056 container exec_died 4eb6b712d09e2820826e6f961f0aba1bff09f13eee1664dbb03edca7615a708b (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  8 05:50:40 np0005475493 python3.9[118042]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf _original_basename=registries.conf.j2 recurse=False state=file path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  8 05:50:40 np0005475493 podman[118075]: 2025-10-08 09:50:40.417308985 +0000 UTC m=+0.087035720 container exec 1c113ffcb0d41c6337c256a4892f13e82bdea6ca8062b10324516aa631301ea5 (image=quay.io/ceph/haproxy:2.3, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-haproxy-nfs-cephfs-compute-0-cwhopp)
Oct  8 05:50:40 np0005475493 podman[118097]: 2025-10-08 09:50:40.489198071 +0000 UTC m=+0.053971032 container exec_died 1c113ffcb0d41c6337c256a4892f13e82bdea6ca8062b10324516aa631301ea5 (image=quay.io/ceph/haproxy:2.3, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-haproxy-nfs-cephfs-compute-0-cwhopp)
Oct  8 05:50:40 np0005475493 podman[118075]: 2025-10-08 09:50:40.495240968 +0000 UTC m=+0.164967703 container exec_died 1c113ffcb0d41c6337c256a4892f13e82bdea6ca8062b10324516aa631301ea5 (image=quay.io/ceph/haproxy:2.3, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-haproxy-nfs-cephfs-compute-0-cwhopp)
Oct  8 05:50:40 np0005475493 podman[118186]: 2025-10-08 09:50:40.69512257 +0000 UTC m=+0.052510105 container exec 5814788fb4b63196d097e0c60082efdca22825a3ba337ddec5c99170a1bfd89d (image=quay.io/ceph/keepalived:2.2.4, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-keepalived-nfs-cephfs-compute-0-ekerbw, vcs-type=git, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.openshift.tags=Ceph keepalived, release=1793, summary=Provides keepalived on RHEL 9 for Ceph., description=keepalived for Ceph, distribution-scope=public, build-date=2023-02-22T09:23:20, io.k8s.display-name=Keepalived on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., io.buildah.version=1.28.2, com.redhat.component=keepalived-container, architecture=x86_64, name=keepalived, version=2.2.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Oct  8 05:50:40 np0005475493 podman[118186]: 2025-10-08 09:50:40.708833047 +0000 UTC m=+0.066220562 container exec_died 5814788fb4b63196d097e0c60082efdca22825a3ba337ddec5c99170a1bfd89d (image=quay.io/ceph/keepalived:2.2.4, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-keepalived-nfs-cephfs-compute-0-ekerbw, release=1793, description=keepalived for Ceph, distribution-scope=public, vendor=Red Hat, Inc., build-date=2023-02-22T09:23:20, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.buildah.version=1.28.2, io.openshift.expose-services=, version=2.2.4, io.openshift.tags=Ceph keepalived, summary=Provides keepalived on RHEL 9 for Ceph., io.k8s.display-name=Keepalived on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, vcs-type=git, name=keepalived, com.redhat.component=keepalived-container)
Oct  8 05:50:40 np0005475493 podman[118307]: 2025-10-08 09:50:40.917553267 +0000 UTC m=+0.048209814 container exec feb968c21a5fda3cd2e7b86379d5576dbe3dadfd4cb62b92318e18ddbaaafb75 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  8 05:50:40 np0005475493 podman[118307]: 2025-10-08 09:50:40.957445849 +0000 UTC m=+0.088102376 container exec_died feb968c21a5fda3cd2e7b86379d5576dbe3dadfd4cb62b92318e18ddbaaafb75 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  8 05:50:41 np0005475493 podman[118451]: 2025-10-08 09:50:41.154592322 +0000 UTC m=+0.047831082 container exec 73efcf403aa6fcb4b04e1ef714b7bf4169b576e4a4c1b34cdc0b458f62db65fd (image=quay.io/ceph/grafana:10.4.0, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Oct  8 05:50:41 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:50:41 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 05:50:41 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:50:41.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 05:50:41 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:50:41 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:50:41 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:50:41.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:50:41 np0005475493 python3.9[118450]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct  8 05:50:41 np0005475493 podman[118451]: 2025-10-08 09:50:41.317972242 +0000 UTC m=+0.211210982 container exec_died 73efcf403aa6fcb4b04e1ef714b7bf4169b576e4a4c1b34cdc0b458f62db65fd (image=quay.io/ceph/grafana:10.4.0, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Oct  8 05:50:41 np0005475493 podman[118665]: 2025-10-08 09:50:41.646982377 +0000 UTC m=+0.049536998 container exec 50d7285a7766fc915688c3cc737e186f9ad2230d7b0b7e759880375ad996bb6c (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  8 05:50:41 np0005475493 podman[118665]: 2025-10-08 09:50:41.680339925 +0000 UTC m=+0.082894526 container exec_died 50d7285a7766fc915688c3cc737e186f9ad2230d7b0b7e759880375ad996bb6c (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  8 05:50:41 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v168: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct  8 05:50:41 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  8 05:50:41 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:50:41 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  8 05:50:41 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:50:41 np0005475493 python3.9[118745]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct  8 05:50:42 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  8 05:50:42 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  8 05:50:42 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct  8 05:50:42 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  8 05:50:42 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v169: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 191 B/s rd, 0 op/s
Oct  8 05:50:42 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct  8 05:50:42 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:50:42 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct  8 05:50:42 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:50:42 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct  8 05:50:42 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  8 05:50:42 np0005475493 python3.9[118969]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct  8 05:50:42 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct  8 05:50:42 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  8 05:50:42 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  8 05:50:42 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  8 05:50:42 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:50:42 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:50:42 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  8 05:50:42 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:50:42 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:50:42 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  8 05:50:42 np0005475493 podman[119224]: 2025-10-08 09:50:42.930440312 +0000 UTC m=+0.037700150 container create f249b4ac27548f10b303b9c758b3c32dcaaafee80b40ce66a52d70eab2ebc4bf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_nobel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  8 05:50:42 np0005475493 systemd[1]: Started libpod-conmon-f249b4ac27548f10b303b9c758b3c32dcaaafee80b40ce66a52d70eab2ebc4bf.scope.
Oct  8 05:50:42 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:50:43 np0005475493 python3.9[119208]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct  8 05:50:43 np0005475493 podman[119224]: 2025-10-08 09:50:42.91598323 +0000 UTC m=+0.023243068 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 05:50:43 np0005475493 podman[119224]: 2025-10-08 09:50:43.013071389 +0000 UTC m=+0.120331257 container init f249b4ac27548f10b303b9c758b3c32dcaaafee80b40ce66a52d70eab2ebc4bf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_nobel, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  8 05:50:43 np0005475493 podman[119224]: 2025-10-08 09:50:43.020673657 +0000 UTC m=+0.127933495 container start f249b4ac27548f10b303b9c758b3c32dcaaafee80b40ce66a52d70eab2ebc4bf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_nobel, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  8 05:50:43 np0005475493 podman[119224]: 2025-10-08 09:50:43.02446083 +0000 UTC m=+0.131720688 container attach f249b4ac27548f10b303b9c758b3c32dcaaafee80b40ce66a52d70eab2ebc4bf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_nobel, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  8 05:50:43 np0005475493 distracted_nobel[119241]: 167 167
Oct  8 05:50:43 np0005475493 podman[119224]: 2025-10-08 09:50:43.025710111 +0000 UTC m=+0.132969949 container died f249b4ac27548f10b303b9c758b3c32dcaaafee80b40ce66a52d70eab2ebc4bf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_nobel, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  8 05:50:43 np0005475493 systemd[1]: libpod-f249b4ac27548f10b303b9c758b3c32dcaaafee80b40ce66a52d70eab2ebc4bf.scope: Deactivated successfully.
Oct  8 05:50:43 np0005475493 systemd[1]: var-lib-containers-storage-overlay-f23a8f66cf4354f56a707a3a7c52c4bbf1218d56f52dab93e3f9a6a4579083c0-merged.mount: Deactivated successfully.
Oct  8 05:50:43 np0005475493 ceph-mon[73572]: log_channel(cluster) log [WRN] : Health check failed: 1 failed cephadm daemon(s) (CEPHADM_FAILED_DAEMON)
Oct  8 05:50:43 np0005475493 podman[119224]: 2025-10-08 09:50:43.060385312 +0000 UTC m=+0.167645150 container remove f249b4ac27548f10b303b9c758b3c32dcaaafee80b40ce66a52d70eab2ebc4bf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_nobel, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True)
Oct  8 05:50:43 np0005475493 systemd[1]: libpod-conmon-f249b4ac27548f10b303b9c758b3c32dcaaafee80b40ce66a52d70eab2ebc4bf.scope: Deactivated successfully.
Oct  8 05:50:43 np0005475493 podman[119290]: 2025-10-08 09:50:43.193454844 +0000 UTC m=+0.033473823 container create 5775f780a40ad9c9a4a0d053543c503278333ba3caf77a7ac4e5de5d4ad22f8a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_feistel, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  8 05:50:43 np0005475493 systemd[1]: Started libpod-conmon-5775f780a40ad9c9a4a0d053543c503278333ba3caf77a7ac4e5de5d4ad22f8a.scope.
Oct  8 05:50:43 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:50:43 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:50:43 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:50:43.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:50:43 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:50:43 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:50:43 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:50:43.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:50:43 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:50:43 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/846ce7c2d28d4547188c2e652839b1d94587c822d700ad7af959906437a2f8be/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  8 05:50:43 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/846ce7c2d28d4547188c2e652839b1d94587c822d700ad7af959906437a2f8be/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 05:50:43 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/846ce7c2d28d4547188c2e652839b1d94587c822d700ad7af959906437a2f8be/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 05:50:43 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/846ce7c2d28d4547188c2e652839b1d94587c822d700ad7af959906437a2f8be/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  8 05:50:43 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/846ce7c2d28d4547188c2e652839b1d94587c822d700ad7af959906437a2f8be/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  8 05:50:43 np0005475493 podman[119290]: 2025-10-08 09:50:43.179983864 +0000 UTC m=+0.020002883 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 05:50:43 np0005475493 podman[119290]: 2025-10-08 09:50:43.278769008 +0000 UTC m=+0.118788047 container init 5775f780a40ad9c9a4a0d053543c503278333ba3caf77a7ac4e5de5d4ad22f8a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_feistel, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Oct  8 05:50:43 np0005475493 podman[119290]: 2025-10-08 09:50:43.292856457 +0000 UTC m=+0.132875456 container start 5775f780a40ad9c9a4a0d053543c503278333ba3caf77a7ac4e5de5d4ad22f8a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_feistel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Oct  8 05:50:43 np0005475493 podman[119290]: 2025-10-08 09:50:43.297444957 +0000 UTC m=+0.137463956 container attach 5775f780a40ad9c9a4a0d053543c503278333ba3caf77a7ac4e5de5d4ad22f8a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_feistel, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  8 05:50:43 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 05:50:43 np0005475493 upbeat_feistel[119306]: --> passed data devices: 0 physical, 1 LVM
Oct  8 05:50:43 np0005475493 upbeat_feistel[119306]: --> All data devices are unavailable
Oct  8 05:50:43 np0005475493 systemd[1]: libpod-5775f780a40ad9c9a4a0d053543c503278333ba3caf77a7ac4e5de5d4ad22f8a.scope: Deactivated successfully.
Oct  8 05:50:43 np0005475493 podman[119290]: 2025-10-08 09:50:43.617675896 +0000 UTC m=+0.457694905 container died 5775f780a40ad9c9a4a0d053543c503278333ba3caf77a7ac4e5de5d4ad22f8a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_feistel, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  8 05:50:43 np0005475493 systemd[1]: var-lib-containers-storage-overlay-846ce7c2d28d4547188c2e652839b1d94587c822d700ad7af959906437a2f8be-merged.mount: Deactivated successfully.
Oct  8 05:50:43 np0005475493 podman[119290]: 2025-10-08 09:50:43.654555359 +0000 UTC m=+0.494574348 container remove 5775f780a40ad9c9a4a0d053543c503278333ba3caf77a7ac4e5de5d4ad22f8a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_feistel, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  8 05:50:43 np0005475493 systemd[1]: libpod-conmon-5775f780a40ad9c9a4a0d053543c503278333ba3caf77a7ac4e5de5d4ad22f8a.scope: Deactivated successfully.
Oct  8 05:50:43 np0005475493 ceph-mon[73572]: Health check failed: 1 failed cephadm daemon(s) (CEPHADM_FAILED_DAEMON)
Oct  8 05:50:44 np0005475493 python3.9[119486]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  8 05:50:44 np0005475493 podman[119557]: 2025-10-08 09:50:44.17163332 +0000 UTC m=+0.043323985 container create 75c19f88a0e848edfbf08ee455feaf75ed229c04e4d67ccdc4c3337986cc65db (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_driscoll, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  8 05:50:44 np0005475493 systemd[1]: Started libpod-conmon-75c19f88a0e848edfbf08ee455feaf75ed229c04e4d67ccdc4c3337986cc65db.scope.
Oct  8 05:50:44 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:50:44 np0005475493 podman[119557]: 2025-10-08 09:50:44.14681659 +0000 UTC m=+0.018507235 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 05:50:44 np0005475493 podman[119557]: 2025-10-08 09:50:44.261335946 +0000 UTC m=+0.133026681 container init 75c19f88a0e848edfbf08ee455feaf75ed229c04e4d67ccdc4c3337986cc65db (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_driscoll, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  8 05:50:44 np0005475493 podman[119557]: 2025-10-08 09:50:44.27277296 +0000 UTC m=+0.144463585 container start 75c19f88a0e848edfbf08ee455feaf75ed229c04e4d67ccdc4c3337986cc65db (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_driscoll, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Oct  8 05:50:44 np0005475493 funny_driscoll[119573]: 167 167
Oct  8 05:50:44 np0005475493 systemd[1]: libpod-75c19f88a0e848edfbf08ee455feaf75ed229c04e4d67ccdc4c3337986cc65db.scope: Deactivated successfully.
Oct  8 05:50:44 np0005475493 podman[119557]: 2025-10-08 09:50:44.278374002 +0000 UTC m=+0.150064727 container attach 75c19f88a0e848edfbf08ee455feaf75ed229c04e4d67ccdc4c3337986cc65db (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_driscoll, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct  8 05:50:44 np0005475493 podman[119557]: 2025-10-08 09:50:44.278871369 +0000 UTC m=+0.150562034 container died 75c19f88a0e848edfbf08ee455feaf75ed229c04e4d67ccdc4c3337986cc65db (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_driscoll, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  8 05:50:44 np0005475493 systemd[1]: var-lib-containers-storage-overlay-dc68ad8aac6aceda942fc122b0af17e0c3a716cafb683490721397e23be52085-merged.mount: Deactivated successfully.
Oct  8 05:50:44 np0005475493 podman[119557]: 2025-10-08 09:50:44.334212224 +0000 UTC m=+0.205902879 container remove 75c19f88a0e848edfbf08ee455feaf75ed229c04e4d67ccdc4c3337986cc65db (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_driscoll, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  8 05:50:44 np0005475493 systemd[1]: libpod-conmon-75c19f88a0e848edfbf08ee455feaf75ed229c04e4d67ccdc4c3337986cc65db.scope: Deactivated successfully.
Oct  8 05:50:44 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v170: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 861 B/s rd, 478 B/s wr, 1 op/s
Oct  8 05:50:44 np0005475493 podman[119599]: 2025-10-08 09:50:44.484646452 +0000 UTC m=+0.038039961 container create 2703d53e934088f1c3113c2faa7fac738a7e25ea4095e662d66493ca0bf579b0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_jepsen, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  8 05:50:44 np0005475493 systemd[1]: Started libpod-conmon-2703d53e934088f1c3113c2faa7fac738a7e25ea4095e662d66493ca0bf579b0.scope.
Oct  8 05:50:44 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:50:44 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3bba69b4580df84d4cfb5601e077e71ac8112cbab8a0465842878f57f3ee725c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  8 05:50:44 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3bba69b4580df84d4cfb5601e077e71ac8112cbab8a0465842878f57f3ee725c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 05:50:44 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3bba69b4580df84d4cfb5601e077e71ac8112cbab8a0465842878f57f3ee725c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 05:50:44 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3bba69b4580df84d4cfb5601e077e71ac8112cbab8a0465842878f57f3ee725c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  8 05:50:44 np0005475493 podman[119599]: 2025-10-08 09:50:44.548247607 +0000 UTC m=+0.101641166 container init 2703d53e934088f1c3113c2faa7fac738a7e25ea4095e662d66493ca0bf579b0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_jepsen, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  8 05:50:44 np0005475493 podman[119599]: 2025-10-08 09:50:44.554917155 +0000 UTC m=+0.108310664 container start 2703d53e934088f1c3113c2faa7fac738a7e25ea4095e662d66493ca0bf579b0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_jepsen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  8 05:50:44 np0005475493 podman[119599]: 2025-10-08 09:50:44.557732687 +0000 UTC m=+0.111126236 container attach 2703d53e934088f1c3113c2faa7fac738a7e25ea4095e662d66493ca0bf579b0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_jepsen, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  8 05:50:44 np0005475493 podman[119599]: 2025-10-08 09:50:44.469128006 +0000 UTC m=+0.022521535 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 05:50:44 np0005475493 loving_jepsen[119615]: {
Oct  8 05:50:44 np0005475493 loving_jepsen[119615]:    "1": [
Oct  8 05:50:44 np0005475493 loving_jepsen[119615]:        {
Oct  8 05:50:44 np0005475493 loving_jepsen[119615]:            "devices": [
Oct  8 05:50:44 np0005475493 loving_jepsen[119615]:                "/dev/loop3"
Oct  8 05:50:44 np0005475493 loving_jepsen[119615]:            ],
Oct  8 05:50:44 np0005475493 loving_jepsen[119615]:            "lv_name": "ceph_lv0",
Oct  8 05:50:44 np0005475493 loving_jepsen[119615]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  8 05:50:44 np0005475493 loving_jepsen[119615]:            "lv_size": "21470642176",
Oct  8 05:50:44 np0005475493 loving_jepsen[119615]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=787292cc-8154-50c4-9e00-e9be3e817149,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=85fe3e7b-5e0f-4a19-934c-310215b2e933,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct  8 05:50:44 np0005475493 loving_jepsen[119615]:            "lv_uuid": "znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj",
Oct  8 05:50:44 np0005475493 loving_jepsen[119615]:            "name": "ceph_lv0",
Oct  8 05:50:44 np0005475493 loving_jepsen[119615]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  8 05:50:44 np0005475493 loving_jepsen[119615]:            "tags": {
Oct  8 05:50:44 np0005475493 loving_jepsen[119615]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  8 05:50:44 np0005475493 loving_jepsen[119615]:                "ceph.block_uuid": "znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj",
Oct  8 05:50:44 np0005475493 loving_jepsen[119615]:                "ceph.cephx_lockbox_secret": "",
Oct  8 05:50:44 np0005475493 loving_jepsen[119615]:                "ceph.cluster_fsid": "787292cc-8154-50c4-9e00-e9be3e817149",
Oct  8 05:50:44 np0005475493 loving_jepsen[119615]:                "ceph.cluster_name": "ceph",
Oct  8 05:50:44 np0005475493 loving_jepsen[119615]:                "ceph.crush_device_class": "",
Oct  8 05:50:44 np0005475493 loving_jepsen[119615]:                "ceph.encrypted": "0",
Oct  8 05:50:44 np0005475493 loving_jepsen[119615]:                "ceph.osd_fsid": "85fe3e7b-5e0f-4a19-934c-310215b2e933",
Oct  8 05:50:44 np0005475493 loving_jepsen[119615]:                "ceph.osd_id": "1",
Oct  8 05:50:44 np0005475493 loving_jepsen[119615]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  8 05:50:44 np0005475493 loving_jepsen[119615]:                "ceph.type": "block",
Oct  8 05:50:44 np0005475493 loving_jepsen[119615]:                "ceph.vdo": "0",
Oct  8 05:50:44 np0005475493 loving_jepsen[119615]:                "ceph.with_tpm": "0"
Oct  8 05:50:44 np0005475493 loving_jepsen[119615]:            },
Oct  8 05:50:44 np0005475493 loving_jepsen[119615]:            "type": "block",
Oct  8 05:50:44 np0005475493 loving_jepsen[119615]:            "vg_name": "ceph_vg0"
Oct  8 05:50:44 np0005475493 loving_jepsen[119615]:        }
Oct  8 05:50:44 np0005475493 loving_jepsen[119615]:    ]
Oct  8 05:50:44 np0005475493 loving_jepsen[119615]: }
Oct  8 05:50:44 np0005475493 systemd[1]: libpod-2703d53e934088f1c3113c2faa7fac738a7e25ea4095e662d66493ca0bf579b0.scope: Deactivated successfully.
Oct  8 05:50:44 np0005475493 podman[119599]: 2025-10-08 09:50:44.822255997 +0000 UTC m=+0.375649506 container died 2703d53e934088f1c3113c2faa7fac738a7e25ea4095e662d66493ca0bf579b0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_jepsen, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  8 05:50:44 np0005475493 systemd[1]: var-lib-containers-storage-overlay-3bba69b4580df84d4cfb5601e077e71ac8112cbab8a0465842878f57f3ee725c-merged.mount: Deactivated successfully.
Oct  8 05:50:44 np0005475493 podman[119599]: 2025-10-08 09:50:44.864284538 +0000 UTC m=+0.417678047 container remove 2703d53e934088f1c3113c2faa7fac738a7e25ea4095e662d66493ca0bf579b0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_jepsen, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  8 05:50:44 np0005475493 systemd[1]: libpod-conmon-2703d53e934088f1c3113c2faa7fac738a7e25ea4095e662d66493ca0bf579b0.scope: Deactivated successfully.
Oct  8 05:50:44 np0005475493 systemd[1]: ceph-787292cc-8154-50c4-9e00-e9be3e817149@nfs.cephfs.2.0.compute-0.uynkmx.service: Scheduled restart job, restart counter is at 2.
Oct  8 05:50:44 np0005475493 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.uynkmx for 787292cc-8154-50c4-9e00-e9be3e817149.
Oct  8 05:50:44 np0005475493 systemd[1]: ceph-787292cc-8154-50c4-9e00-e9be3e817149@nfs.cephfs.2.0.compute-0.uynkmx.service: Consumed 1.397s CPU time.
Oct  8 05:50:44 np0005475493 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.uynkmx for 787292cc-8154-50c4-9e00-e9be3e817149...
Oct  8 05:50:45 np0005475493 podman[119729]: 2025-10-08 09:50:45.099942028 +0000 UTC m=+0.043272013 container create 5648b6991b3670625e89da113426ec69b90cf4710ec8879fe91ecbad4e23ac94 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct  8 05:50:45 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db3a225b971325494d9fd29d607fb50df99f9768861ae0ade871ec413a763e24/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Oct  8 05:50:45 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db3a225b971325494d9fd29d607fb50df99f9768861ae0ade871ec413a763e24/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 05:50:45 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db3a225b971325494d9fd29d607fb50df99f9768861ae0ade871ec413a763e24/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Oct  8 05:50:45 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db3a225b971325494d9fd29d607fb50df99f9768861ae0ade871ec413a763e24/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.uynkmx-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Oct  8 05:50:45 np0005475493 podman[119729]: 2025-10-08 09:50:45.156578576 +0000 UTC m=+0.099908581 container init 5648b6991b3670625e89da113426ec69b90cf4710ec8879fe91ecbad4e23ac94 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Oct  8 05:50:45 np0005475493 podman[119729]: 2025-10-08 09:50:45.161144314 +0000 UTC m=+0.104474309 container start 5648b6991b3670625e89da113426ec69b90cf4710ec8879fe91ecbad4e23ac94 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  8 05:50:45 np0005475493 bash[119729]: 5648b6991b3670625e89da113426ec69b90cf4710ec8879fe91ecbad4e23ac94
Oct  8 05:50:45 np0005475493 podman[119729]: 2025-10-08 09:50:45.082923342 +0000 UTC m=+0.026253357 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 05:50:45 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:50:45 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Oct  8 05:50:45 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:50:45 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Oct  8 05:50:45 np0005475493 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.uynkmx for 787292cc-8154-50c4-9e00-e9be3e817149.
Oct  8 05:50:45 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:50:45 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Oct  8 05:50:45 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:50:45 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Oct  8 05:50:45 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:50:45 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Oct  8 05:50:45 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:50:45 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Oct  8 05:50:45 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:50:45 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Oct  8 05:50:45 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:50:45 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 05:50:45 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:50:45.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 05:50:45 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:50:45 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:50:45 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:50:45.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:50:45 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:50:45 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  8 05:50:45 np0005475493 podman[119851]: 2025-10-08 09:50:45.411762372 +0000 UTC m=+0.042378074 container create 7522981c35280f86244c05d6b63d4db5c99a112986e074f931f8798768ffdbd6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_bardeen, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS)
Oct  8 05:50:45 np0005475493 systemd[1]: Started libpod-conmon-7522981c35280f86244c05d6b63d4db5c99a112986e074f931f8798768ffdbd6.scope.
Oct  8 05:50:45 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:50:45 np0005475493 podman[119851]: 2025-10-08 09:50:45.391559712 +0000 UTC m=+0.022175464 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 05:50:45 np0005475493 podman[119851]: 2025-10-08 09:50:45.502574754 +0000 UTC m=+0.133190546 container init 7522981c35280f86244c05d6b63d4db5c99a112986e074f931f8798768ffdbd6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_bardeen, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Oct  8 05:50:45 np0005475493 podman[119851]: 2025-10-08 09:50:45.514724471 +0000 UTC m=+0.145340203 container start 7522981c35280f86244c05d6b63d4db5c99a112986e074f931f8798768ffdbd6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_bardeen, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Oct  8 05:50:45 np0005475493 podman[119851]: 2025-10-08 09:50:45.519259399 +0000 UTC m=+0.149875141 container attach 7522981c35280f86244c05d6b63d4db5c99a112986e074f931f8798768ffdbd6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_bardeen, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  8 05:50:45 np0005475493 dazzling_bardeen[119867]: 167 167
Oct  8 05:50:45 np0005475493 systemd[1]: libpod-7522981c35280f86244c05d6b63d4db5c99a112986e074f931f8798768ffdbd6.scope: Deactivated successfully.
Oct  8 05:50:45 np0005475493 conmon[119867]: conmon 7522981c35280f86244c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7522981c35280f86244c05d6b63d4db5c99a112986e074f931f8798768ffdbd6.scope/container/memory.events
Oct  8 05:50:45 np0005475493 podman[119851]: 2025-10-08 09:50:45.525450551 +0000 UTC m=+0.156066283 container died 7522981c35280f86244c05d6b63d4db5c99a112986e074f931f8798768ffdbd6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_bardeen, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Oct  8 05:50:45 np0005475493 systemd[1]: var-lib-containers-storage-overlay-2adc613144e9b746934f61272d7f4b1e866dd681cc55db2b2881434e2506b4b9-merged.mount: Deactivated successfully.
Oct  8 05:50:45 np0005475493 podman[119851]: 2025-10-08 09:50:45.58180346 +0000 UTC m=+0.212419162 container remove 7522981c35280f86244c05d6b63d4db5c99a112986e074f931f8798768ffdbd6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_bardeen, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  8 05:50:45 np0005475493 systemd[1]: libpod-conmon-7522981c35280f86244c05d6b63d4db5c99a112986e074f931f8798768ffdbd6.scope: Deactivated successfully.
Oct  8 05:50:45 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:09:50:45] "GET /metrics HTTP/1.1" 200 48257 "" "Prometheus/2.51.0"
Oct  8 05:50:45 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:09:50:45] "GET /metrics HTTP/1.1" 200 48257 "" "Prometheus/2.51.0"
Oct  8 05:50:45 np0005475493 podman[119893]: 2025-10-08 09:50:45.741568962 +0000 UTC m=+0.039731807 container create ccf223cea8a4c4b0cc52eabb94d17fb678578c2f6d2765b6224fa976286c177d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_poitras, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  8 05:50:45 np0005475493 systemd[1]: Started libpod-conmon-ccf223cea8a4c4b0cc52eabb94d17fb678578c2f6d2765b6224fa976286c177d.scope.
Oct  8 05:50:45 np0005475493 podman[119893]: 2025-10-08 09:50:45.723005407 +0000 UTC m=+0.021168252 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 05:50:45 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:50:45 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/429ae51c20707ef2c835d53afac7b723f1b2216a4cd35f51d2d7270d506f6292/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  8 05:50:45 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/429ae51c20707ef2c835d53afac7b723f1b2216a4cd35f51d2d7270d506f6292/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 05:50:45 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/429ae51c20707ef2c835d53afac7b723f1b2216a4cd35f51d2d7270d506f6292/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 05:50:45 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/429ae51c20707ef2c835d53afac7b723f1b2216a4cd35f51d2d7270d506f6292/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  8 05:50:45 np0005475493 podman[119893]: 2025-10-08 09:50:45.857477125 +0000 UTC m=+0.155639970 container init ccf223cea8a4c4b0cc52eabb94d17fb678578c2f6d2765b6224fa976286c177d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_poitras, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct  8 05:50:45 np0005475493 podman[119893]: 2025-10-08 09:50:45.865788366 +0000 UTC m=+0.163951231 container start ccf223cea8a4c4b0cc52eabb94d17fb678578c2f6d2765b6224fa976286c177d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_poitras, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  8 05:50:45 np0005475493 podman[119893]: 2025-10-08 09:50:45.869683802 +0000 UTC m=+0.167846647 container attach ccf223cea8a4c4b0cc52eabb94d17fb678578c2f6d2765b6224fa976286c177d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_poitras, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct  8 05:50:46 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v171: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 861 B/s rd, 478 B/s wr, 1 op/s
Oct  8 05:50:46 np0005475493 lvm[120113]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct  8 05:50:46 np0005475493 lvm[120113]: VG ceph_vg0 finished
Oct  8 05:50:46 np0005475493 serene_poitras[119909]: {}
Oct  8 05:50:46 np0005475493 systemd[1]: libpod-ccf223cea8a4c4b0cc52eabb94d17fb678578c2f6d2765b6224fa976286c177d.scope: Deactivated successfully.
Oct  8 05:50:46 np0005475493 systemd[1]: libpod-ccf223cea8a4c4b0cc52eabb94d17fb678578c2f6d2765b6224fa976286c177d.scope: Consumed 1.090s CPU time.
Oct  8 05:50:46 np0005475493 podman[119893]: 2025-10-08 09:50:46.551309362 +0000 UTC m=+0.849472207 container died ccf223cea8a4c4b0cc52eabb94d17fb678578c2f6d2765b6224fa976286c177d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_poitras, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  8 05:50:46 np0005475493 systemd[1]: var-lib-containers-storage-overlay-429ae51c20707ef2c835d53afac7b723f1b2216a4cd35f51d2d7270d506f6292-merged.mount: Deactivated successfully.
Oct  8 05:50:46 np0005475493 podman[119893]: 2025-10-08 09:50:46.594393468 +0000 UTC m=+0.892556313 container remove ccf223cea8a4c4b0cc52eabb94d17fb678578c2f6d2765b6224fa976286c177d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_poitras, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  8 05:50:46 np0005475493 python3.9[120095]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  8 05:50:46 np0005475493 systemd[1]: libpod-conmon-ccf223cea8a4c4b0cc52eabb94d17fb678578c2f6d2765b6224fa976286c177d.scope: Deactivated successfully.
Oct  8 05:50:46 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  8 05:50:46 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:50:46 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  8 05:50:46 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:50:46 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:50:46.962Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct  8 05:50:46 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:50:46.963Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct  8 05:50:47 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:50:47 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:50:47 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:50:47.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:50:47 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:50:47 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 05:50:47 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:50:47.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 05:50:47 np0005475493 python3.9[120308]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  8 05:50:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] Optimize plan auto_2025-10-08_09:50:47
Oct  8 05:50:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  8 05:50:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] do_upmap
Oct  8 05:50:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] pools ['default.rgw.meta', 'cephfs.cephfs.data', '.nfs', 'default.rgw.control', 'volumes', 'images', 'default.rgw.log', 'backups', '.mgr', 'cephfs.cephfs.meta', '.rgw.root', 'vms']
Oct  8 05:50:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] prepared 0/10 upmap changes
Oct  8 05:50:47 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:50:47 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:50:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] _maybe_adjust
Oct  8 05:50:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 05:50:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  8 05:50:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 05:50:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 05:50:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 05:50:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 05:50:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 05:50:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 05:50:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 05:50:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 05:50:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 05:50:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  8 05:50:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 05:50:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 05:50:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 05:50:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct  8 05:50:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 05:50:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  8 05:50:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 05:50:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 05:50:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 05:50:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  8 05:50:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 05:50:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  8 05:50:47 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/prometheus/health_history}] v 0)
Oct  8 05:50:47 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:50:47 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  8 05:50:47 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  8 05:50:47 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 05:50:47 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 05:50:47 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  8 05:50:47 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  8 05:50:47 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  8 05:50:47 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  8 05:50:47 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  8 05:50:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  8 05:50:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  8 05:50:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  8 05:50:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  8 05:50:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  8 05:50:48 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 05:50:48 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 05:50:48 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 05:50:48 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 05:50:48 np0005475493 python3.9[120461]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  8 05:50:48 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v172: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 861 B/s rd, 478 B/s wr, 1 op/s
Oct  8 05:50:48 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 05:50:48 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:50:49 np0005475493 python3.9[120613]: ansible-service_facts Invoked
Oct  8 05:50:49 np0005475493 network[120631]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct  8 05:50:49 np0005475493 network[120632]: 'network-scripts' will be removed from distribution in near future.
Oct  8 05:50:49 np0005475493 network[120633]: It is advised to switch to 'NetworkManager' instead for network management.
Oct  8 05:50:49 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:50:49 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 05:50:49 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:50:49.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 05:50:49 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:50:49 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:50:49 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:50:49.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:50:50 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v173: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1.0 KiB/s wr, 3 op/s
Oct  8 05:50:51 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:50:51 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 05:50:51 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:50:51.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 05:50:51 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:50:51 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:50:51 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:50:51.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:50:51 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:50:51 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[main] rados_kv_traverse :CLIENT ID :EVENT :Failed to lst kv ret=-2
Oct  8 05:50:51 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:50:51 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[main] rados_cluster_read_clids :CLIENT ID :EVENT :Failed to traverse recovery db: -2
Oct  8 05:50:51 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:50:51 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  8 05:50:51 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:50:51 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 05:50:51 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:50:51 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  8 05:50:52 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v174: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1.0 KiB/s wr, 3 op/s
Oct  8 05:50:52 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:50:52 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  8 05:50:52 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:50:52 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  8 05:50:52 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:50:52 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 05:50:52 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:50:52 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  8 05:50:52 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:50:52 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  8 05:50:52 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:50:52 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  8 05:50:52 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:50:52 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 05:50:53 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:50:53 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 05:50:53 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:50:53.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 05:50:53 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:50:53 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:50:53 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:50:53.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:50:53 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 05:50:54 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v175: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 3.2 KiB/s rd, 1.5 KiB/s wr, 5 op/s
Oct  8 05:50:54 np0005475493 python3.9[121093]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  8 05:50:54 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-haproxy-nfs-cephfs-compute-0-cwhopp[96588]: [WARNING] 280/095054 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct  8 05:50:55 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:50:55 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 05:50:55 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:50:55.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 05:50:55 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:50:55 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:50:55 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:50:55.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:50:55 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:09:50:55] "GET /metrics HTTP/1.1" 200 48342 "" "Prometheus/2.51.0"
Oct  8 05:50:55 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:09:50:55] "GET /metrics HTTP/1.1" 200 48342 "" "Prometheus/2.51.0"
Oct  8 05:50:56 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v176: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 1.1 KiB/s wr, 3 op/s
Oct  8 05:50:56 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:50:56.964Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 05:50:57 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:50:57 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da0d5d0 =====
Oct  8 05:50:57 np0005475493 radosgw[88577]: ====== req done req=0x7f162da0d5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:50:57 np0005475493 radosgw[88577]: beast: 0x7f162da0d5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:50:57.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:50:57 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:50:57 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:50:57.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:50:57 np0005475493 python3.9[121249]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Oct  8 05:50:58 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v177: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 1.1 KiB/s wr, 3 op/s
Oct  8 05:50:58 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 05:50:58 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:50:58 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[main] rados_cluster_end_grace :CLIENT ID :EVENT :Failed to remove rec-000000000000000a:nfs.cephfs.2: -2
Oct  8 05:50:58 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:50:58 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct  8 05:50:58 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:50:58 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Oct  8 05:50:58 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:50:58 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Oct  8 05:50:58 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:50:58 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Oct  8 05:50:58 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:50:58 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Oct  8 05:50:58 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:50:58 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Oct  8 05:50:58 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:50:58 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Oct  8 05:50:58 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:50:58 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct  8 05:50:58 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:50:58 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct  8 05:50:58 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:50:58 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct  8 05:50:58 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:50:58 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Oct  8 05:50:58 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:50:58 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct  8 05:50:58 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:50:58 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Oct  8 05:50:58 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:50:58 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Oct  8 05:50:58 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:50:58 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Oct  8 05:50:58 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:50:58 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Oct  8 05:50:58 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:50:58 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Oct  8 05:50:58 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:50:58 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Oct  8 05:50:58 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:50:58 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Oct  8 05:50:58 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:50:58 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Oct  8 05:50:58 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:50:58 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Oct  8 05:50:58 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:50:58 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Oct  8 05:50:58 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:50:58 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Oct  8 05:50:58 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:50:58 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Oct  8 05:50:58 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:50:58 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Oct  8 05:50:58 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:50:58 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Oct  8 05:50:58 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:50:58 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Oct  8 05:50:58 np0005475493 python3.9[121414]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  8 05:50:59 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:50:59 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9f04000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:50:59 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:50:59 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 05:50:59 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:50:59.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 05:50:59 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:50:59 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:50:59 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:50:59.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:50:59 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:50:59 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ef8001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:50:59 np0005475493 python3.9[121496]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/chrony.conf _original_basename=chrony.conf.j2 recurse=False state=file path=/etc/chrony.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:50:59 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:50:59 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ee0000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:51:00 np0005475493 python3.9[121649]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  8 05:51:00 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v178: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 3.4 KiB/s rd, 1.3 KiB/s wr, 5 op/s
Oct  8 05:51:00 np0005475493 python3.9[121727]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/chronyd _original_basename=chronyd.sysconfig.j2 recurse=False state=file path=/etc/sysconfig/chronyd force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:51:01 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:51:01 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9eec0016e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:51:01 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:51:01 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:51:01 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:51:01.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:51:01 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:51:01 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:51:01 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:51:01.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:51:01 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-haproxy-nfs-cephfs-compute-0-cwhopp[96588]: [WARNING] 280/095101 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct  8 05:51:01 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:51:01 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9f04001bd0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:51:01 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:51:01 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ef8001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:51:02 np0005475493 python3.9[121906]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:51:02 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v179: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 852 B/s wr, 3 op/s
Oct  8 05:51:02 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  8 05:51:02 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  8 05:51:03 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:51:03 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ee00016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:51:03 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:51:03 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 05:51:03 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:51:03.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 05:51:03 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:51:03 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 05:51:03 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:51:03.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 05:51:03 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:51:03 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9eec002000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:51:03 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 05:51:03 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:51:03 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9f04001bd0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:51:03 np0005475493 python3.9[122059]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  8 05:51:04 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v180: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 852 B/s wr, 3 op/s
Oct  8 05:51:04 np0005475493 python3.9[122144]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  8 05:51:05 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:51:05 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ef8001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:51:05 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:51:05 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:51:05 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:51:05.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:51:05 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:51:05 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:51:05 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:51:05.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:51:05 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:51:05 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ee00016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:51:05 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:51:05 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9eec002000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:51:05 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:09:51:05] "GET /metrics HTTP/1.1" 200 48339 "" "Prometheus/2.51.0"
Oct  8 05:51:05 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:09:51:05] "GET /metrics HTTP/1.1" 200 48339 "" "Prometheus/2.51.0"
Oct  8 05:51:05 np0005475493 systemd[1]: session-43.scope: Deactivated successfully.
Oct  8 05:51:05 np0005475493 systemd[1]: session-43.scope: Consumed 23.156s CPU time.
Oct  8 05:51:05 np0005475493 systemd-logind[798]: Session 43 logged out. Waiting for processes to exit.
Oct  8 05:51:05 np0005475493 systemd-logind[798]: Removed session 43.
Oct  8 05:51:06 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v181: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 255 B/s wr, 1 op/s
Oct  8 05:51:06 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-haproxy-nfs-cephfs-compute-0-cwhopp[96588]: [WARNING] 280/095106 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct  8 05:51:06 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:51:06.965Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct  8 05:51:06 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:51:06.966Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct  8 05:51:07 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:51:07 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9f040089d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:51:07 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:51:07 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:51:07 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:51:07.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:51:07 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:51:07 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000034s ======
Oct  8 05:51:07 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:51:07.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct  8 05:51:07 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:51:07 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ef8001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:51:07 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:51:07 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ee00016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:51:08 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v182: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 255 B/s wr, 1 op/s
Oct  8 05:51:08 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 05:51:09 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:51:09 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9eec002000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:51:09 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:51:09 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 05:51:09 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:51:09.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 05:51:09 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:51:09 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:51:09 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:51:09.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:51:09 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:51:09 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9f040089d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:51:09 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:51:09 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ef8001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:51:10 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v183: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 255 B/s wr, 1 op/s
Oct  8 05:51:11 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:51:11 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ee0002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:51:11 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:51:11 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 05:51:11 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:51:11.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 05:51:11 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:51:11 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:51:11 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:51:11.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:51:11 np0005475493 systemd-logind[798]: New session 44 of user zuul.
Oct  8 05:51:11 np0005475493 systemd[1]: Started Session 44 of User zuul.
Oct  8 05:51:11 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:51:11 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ee0002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:51:11 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:51:11 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9f040096e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:51:12 np0005475493 python3.9[122334]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:51:12 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v184: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct  8 05:51:13 np0005475493 python3.9[122486]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  8 05:51:13 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:51:13 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9f040096e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:51:13 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:51:13 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 05:51:13 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:51:13.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 05:51:13 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:51:13 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:51:13 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:51:13.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:51:13 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:51:13 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9f040096e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:51:13 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 05:51:13 np0005475493 python3.9[122565]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/ceph-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/ceph-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:51:13 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:51:13 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ed8000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:51:13 np0005475493 systemd[1]: session-44.scope: Deactivated successfully.
Oct  8 05:51:13 np0005475493 systemd[1]: session-44.scope: Consumed 1.620s CPU time.
Oct  8 05:51:13 np0005475493 systemd-logind[798]: Session 44 logged out. Waiting for processes to exit.
Oct  8 05:51:13 np0005475493 systemd-logind[798]: Removed session 44.
Oct  8 05:51:14 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v185: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Oct  8 05:51:15 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:51:15 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ee0002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:51:15 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:51:15 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:51:15 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:51:15.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:51:15 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:51:15 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:51:15 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:51:15.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:51:15 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:51:15 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9f040096e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:51:15 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:51:15 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  8 05:51:15 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:51:15 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9f040096e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:51:15 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:09:51:15] "GET /metrics HTTP/1.1" 200 48339 "" "Prometheus/2.51.0"
Oct  8 05:51:15 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:09:51:15] "GET /metrics HTTP/1.1" 200 48339 "" "Prometheus/2.51.0"
Oct  8 05:51:16 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v186: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 596 B/s rd, 85 B/s wr, 0 op/s
Oct  8 05:51:16 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:51:16.966Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct  8 05:51:16 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:51:16.967Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct  8 05:51:17 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:51:17 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ed80016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:51:17 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:51:17 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000031s ======
Oct  8 05:51:17 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:51:17.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Oct  8 05:51:17 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:51:17 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:51:17 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:51:17.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:51:17 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:51:17 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ee0002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:51:17 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:51:17 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9f040096e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:51:17 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  8 05:51:17 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  8 05:51:17 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 05:51:17 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 05:51:18 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 05:51:18 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 05:51:18 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 05:51:18 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 05:51:18 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v187: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 596 B/s rd, 85 B/s wr, 0 op/s
Oct  8 05:51:18 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 05:51:18 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:51:18 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  8 05:51:18 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:51:18 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 05:51:19 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:51:19 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9f040096e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:51:19 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:51:19 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:51:19 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:51:19.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:51:19 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:51:19 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:51:19 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:51:19.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:51:19 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:51:19 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ed80016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:51:19 np0005475493 systemd-logind[798]: New session 45 of user zuul.
Oct  8 05:51:19 np0005475493 systemd[1]: Started Session 45 of User zuul.
Oct  8 05:51:19 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:51:19 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ee0003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:51:20 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v188: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 596 B/s wr, 2 op/s
Oct  8 05:51:20 np0005475493 python3.9[122750]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  8 05:51:21 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:51:21 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9f040096e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:51:21 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:51:21 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:51:21 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:51:21.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:51:21 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:51:21 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:51:21 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:51:21.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:51:21 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:51:21 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9f040096e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:51:21 np0005475493 python3.9[122932]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:51:21 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:51:21 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ed80016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:51:21 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:51:21 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct  8 05:51:22 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v189: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 596 B/s wr, 1 op/s
Oct  8 05:51:22 np0005475493 python3.9[123108]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  8 05:51:22 np0005475493 python3.9[123186]: ansible-ansible.legacy.file Invoked with group=zuul mode=0660 owner=zuul dest=/root/.config/containers/auth.json _original_basename=.zhfsalr3 recurse=False state=file path=/root/.config/containers/auth.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:51:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:51:23 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ee0003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:51:23 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:51:23 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:51:23 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:51:23.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:51:23 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:51:23 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:51:23 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:51:23.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:51:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:51:23 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ef80034e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:51:23 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 05:51:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:51:23 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9f040096e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:51:23 np0005475493 python3.9[123339]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  8 05:51:24 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v190: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Oct  8 05:51:24 np0005475493 python3.9[123418]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/podman_drop_in _original_basename=.8p15tyji recurse=False state=file path=/etc/sysconfig/podman_drop_in force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:51:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:51:25 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9f040096e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:51:25 np0005475493 python3.9[123571]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  8 05:51:25 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:51:25 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:51:25 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:51:25.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:51:25 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:51:25 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:51:25 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:51:25.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:51:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:51:25 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ee0003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:51:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:51:25 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ef80034e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:51:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:09:51:25] "GET /metrics HTTP/1.1" 200 48338 "" "Prometheus/2.51.0"
Oct  8 05:51:25 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:09:51:25] "GET /metrics HTTP/1.1" 200 48338 "" "Prometheus/2.51.0"
Oct  8 05:51:26 np0005475493 python3.9[123723]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  8 05:51:26 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v191: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 937 B/s wr, 2 op/s
Oct  8 05:51:26 np0005475493 python3.9[123802]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  8 05:51:26 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-haproxy-nfs-cephfs-compute-0-cwhopp[96588]: [WARNING] 280/095126 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct  8 05:51:26 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:51:26.968Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct  8 05:51:26 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:51:26.969Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 05:51:27 np0005475493 python3.9[123954]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  8 05:51:27 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:51:27 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ed8002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:51:27 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:51:27 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:51:27 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:51:27.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:51:27 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:51:27 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:51:27 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:51:27.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:51:27 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:51:27 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9f040096e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:51:27 np0005475493 python3.9[124033]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  8 05:51:27 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:51:27 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ee0003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:51:28 np0005475493 python3.9[124186]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:51:28 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v192: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 938 B/s wr, 2 op/s
Oct  8 05:51:28 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 05:51:28 np0005475493 systemd[92032]: Created slice User Background Tasks Slice.
Oct  8 05:51:28 np0005475493 systemd[92032]: Starting Cleanup of User's Temporary Files and Directories...
Oct  8 05:51:28 np0005475493 systemd[92032]: Finished Cleanup of User's Temporary Files and Directories.
Oct  8 05:51:28 np0005475493 python3.9[124338]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  8 05:51:29 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:51:29 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ef80034e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:51:29 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:51:29 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:51:29 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:51:29.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:51:29 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:51:29 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:51:29 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:51:29.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:51:29 np0005475493 python3.9[124418]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:51:29 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:51:29 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ed8002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:51:29 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:51:29 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9f040096e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:51:30 np0005475493 python3.9[124571]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  8 05:51:30 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v193: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 3 op/s
Oct  8 05:51:30 np0005475493 python3.9[124649]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:51:31 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:51:31 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ee0003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:51:31 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:51:31 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:51:31 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:51:31.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:51:31 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:51:31 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:51:31 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:51:31.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:51:31 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:51:31 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ef80034e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:51:31 np0005475493 python3.9[124803]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  8 05:51:31 np0005475493 systemd[1]: Reloading.
Oct  8 05:51:31 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:51:31 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ed4000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:51:31 np0005475493 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  8 05:51:31 np0005475493 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  8 05:51:32 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v194: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Oct  8 05:51:32 np0005475493 python3.9[124992]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  8 05:51:32 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  8 05:51:32 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  8 05:51:33 np0005475493 python3.9[125070]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:51:33 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:51:33 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ed8003820 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:51:33 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:51:33 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:51:33 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:51:33.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:51:33 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:51:33 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:51:33 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:51:33.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:51:33 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:51:33 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ee0003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:51:33 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 05:51:33 np0005475493 python3.9[125223]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  8 05:51:33 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:51:33 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ef80034e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:51:34 np0005475493 python3.9[125302]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:51:34 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v195: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Oct  8 05:51:35 np0005475493 python3.9[125454]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  8 05:51:35 np0005475493 systemd[1]: Reloading.
Oct  8 05:51:35 np0005475493 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  8 05:51:35 np0005475493 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  8 05:51:35 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:51:35 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ed40016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:51:35 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:51:35 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:51:35 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:51:35.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:51:35 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:51:35 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:51:35 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:51:35.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:51:35 np0005475493 systemd[1]: Starting Create netns directory...
Oct  8 05:51:35 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:51:35 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ed8003820 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:51:35 np0005475493 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct  8 05:51:35 np0005475493 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct  8 05:51:35 np0005475493 systemd[1]: Finished Create netns directory.
Oct  8 05:51:35 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:51:35 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ee0003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:51:35 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:09:51:35] "GET /metrics HTTP/1.1" 200 48338 "" "Prometheus/2.51.0"
Oct  8 05:51:35 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:09:51:35] "GET /metrics HTTP/1.1" 200 48338 "" "Prometheus/2.51.0"
Oct  8 05:51:36 np0005475493 python3.9[125647]: ansible-ansible.builtin.service_facts Invoked
Oct  8 05:51:36 np0005475493 network[125664]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct  8 05:51:36 np0005475493 network[125665]: 'network-scripts' will be removed from distribution in near future.
Oct  8 05:51:36 np0005475493 network[125666]: It is advised to switch to 'NetworkManager' instead for network management.
Oct  8 05:51:36 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v196: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Oct  8 05:51:36 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:51:36.969Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 05:51:37 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:51:37 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ef80034e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:51:37 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:51:37 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 05:51:37 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:51:37.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 05:51:37 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:51:37 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:51:37 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:51:37.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:51:37 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:51:37 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ed40016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:51:37 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:51:37 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ed8003820 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:51:38 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v197: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Oct  8 05:51:38 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 05:51:38 np0005475493 ceph-mgr[73869]: [dashboard INFO request] [192.168.122.100:45916] [POST] [200] [0.002s] [4.0B] [47b87778-d9c2-45ac-9535-7e3cd10eb0ea] /api/prometheus_receiver
Oct  8 05:51:39 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:51:39 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ee0003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:51:39 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:51:39 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 05:51:39 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:51:39.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 05:51:39 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:51:39 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:51:39 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:51:39.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:51:39 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:51:39 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ef80034e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:51:39 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:51:39 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ed40016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:51:40 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v198: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Oct  8 05:51:41 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:51:41 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ed8003820 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:51:41 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:51:41 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:51:41 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:51:41.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:51:41 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:51:41 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000031s ======
Oct  8 05:51:41 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:51:41.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Oct  8 05:51:41 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:51:41 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ed8003820 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:51:41 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:51:41 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ef80034e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:51:42 np0005475493 python3.9[125962]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  8 05:51:42 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v199: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct  8 05:51:42 np0005475493 python3.9[126040]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/etc/ssh/sshd_config _original_basename=sshd_config_block.j2 recurse=False state=file path=/etc/ssh/sshd_config force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:51:43 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:51:43 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ed4002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:51:43 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:51:43 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:51:43 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:51:43.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:51:43 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:51:43 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 05:51:43 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:51:43.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 05:51:43 np0005475493 python3.9[126193]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:51:43 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:51:43 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ee0003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:51:43 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 05:51:43 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:51:43 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ed8003820 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:51:44 np0005475493 python3.9[126345]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  8 05:51:44 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v200: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Oct  8 05:51:44 np0005475493 python3.9[126424]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/var/lib/edpm-config/firewall/sshd-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/sshd-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:51:45 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:51:45 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ef80034e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:51:45 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:51:45 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:51:45 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:51:45.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:51:45 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:51:45 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:51:45 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:51:45.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:51:45 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:51:45 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ed4002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:51:45 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:51:45 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9eec0008d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:51:45 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:09:51:45] "GET /metrics HTTP/1.1" 200 48338 "" "Prometheus/2.51.0"
Oct  8 05:51:45 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:09:51:45] "GET /metrics HTTP/1.1" 200 48338 "" "Prometheus/2.51.0"
Oct  8 05:51:45 np0005475493 python3.9[126578]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Oct  8 05:51:45 np0005475493 systemd[1]: Starting Time & Date Service...
Oct  8 05:51:45 np0005475493 systemd[1]: Started Time & Date Service.
Oct  8 05:51:46 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v201: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct  8 05:51:46 np0005475493 python3.9[126735]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:51:46 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:51:46.970Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 05:51:47 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:51:47 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ed8003820 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:51:47 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:51:47 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:51:47 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:51:47.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:51:47 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:51:47 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 05:51:47 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:51:47.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 05:51:47 np0005475493 python3.9[126938]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  8 05:51:47 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:51:47 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ef80034e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:51:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] Optimize plan auto_2025-10-08_09:51:47
Oct  8 05:51:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  8 05:51:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] do_upmap
Oct  8 05:51:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'volumes', '.rgw.root', '.mgr', 'default.rgw.log', 'cephfs.cephfs.data', 'images', 'vms', 'default.rgw.control', 'default.rgw.meta', 'backups', '.nfs']
Oct  8 05:51:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] prepared 0/10 upmap changes
Oct  8 05:51:47 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  8 05:51:47 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  8 05:51:47 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct  8 05:51:47 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  8 05:51:47 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v202: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 272 B/s rd, 0 op/s
Oct  8 05:51:47 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct  8 05:51:47 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:51:47 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct  8 05:51:47 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:51:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] _maybe_adjust
Oct  8 05:51:47 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct  8 05:51:47 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  8 05:51:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 05:51:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  8 05:51:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 05:51:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 05:51:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 05:51:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 05:51:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 05:51:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 05:51:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 05:51:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 05:51:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 05:51:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  8 05:51:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 05:51:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 05:51:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 05:51:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct  8 05:51:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 05:51:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  8 05:51:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 05:51:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 05:51:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 05:51:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  8 05:51:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 05:51:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  8 05:51:47 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct  8 05:51:47 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  8 05:51:47 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  8 05:51:47 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  8 05:51:47 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:51:47 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ed4002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:51:47 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  8 05:51:47 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  8 05:51:47 np0005475493 python3.9[127047]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:51:47 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 05:51:47 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 05:51:47 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  8 05:51:47 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  8 05:51:47 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  8 05:51:47 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  8 05:51:47 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  8 05:51:48 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  8 05:51:48 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:51:48 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:51:48 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  8 05:51:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  8 05:51:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  8 05:51:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  8 05:51:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  8 05:51:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  8 05:51:48 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 05:51:48 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 05:51:48 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 05:51:48 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 05:51:48 np0005475493 podman[127261]: 2025-10-08 09:51:48.247757545 +0000 UTC m=+0.041688201 container create d3c3ecaaf3b5dcbf2a25c07d0f3c6d81742adf8a1ba9417ba93e3bcb7d01b0f0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_rosalind, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  8 05:51:48 np0005475493 systemd[1]: Started libpod-conmon-d3c3ecaaf3b5dcbf2a25c07d0f3c6d81742adf8a1ba9417ba93e3bcb7d01b0f0.scope.
Oct  8 05:51:48 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:51:48 np0005475493 podman[127261]: 2025-10-08 09:51:48.232445123 +0000 UTC m=+0.026375799 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 05:51:48 np0005475493 podman[127261]: 2025-10-08 09:51:48.341004166 +0000 UTC m=+0.134934842 container init d3c3ecaaf3b5dcbf2a25c07d0f3c6d81742adf8a1ba9417ba93e3bcb7d01b0f0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_rosalind, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  8 05:51:48 np0005475493 podman[127261]: 2025-10-08 09:51:48.346709465 +0000 UTC m=+0.140640121 container start d3c3ecaaf3b5dcbf2a25c07d0f3c6d81742adf8a1ba9417ba93e3bcb7d01b0f0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_rosalind, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  8 05:51:48 np0005475493 podman[127261]: 2025-10-08 09:51:48.349885755 +0000 UTC m=+0.143816411 container attach d3c3ecaaf3b5dcbf2a25c07d0f3c6d81742adf8a1ba9417ba93e3bcb7d01b0f0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_rosalind, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  8 05:51:48 np0005475493 eloquent_rosalind[127306]: 167 167
Oct  8 05:51:48 np0005475493 systemd[1]: libpod-d3c3ecaaf3b5dcbf2a25c07d0f3c6d81742adf8a1ba9417ba93e3bcb7d01b0f0.scope: Deactivated successfully.
Oct  8 05:51:48 np0005475493 podman[127261]: 2025-10-08 09:51:48.35326155 +0000 UTC m=+0.147192206 container died d3c3ecaaf3b5dcbf2a25c07d0f3c6d81742adf8a1ba9417ba93e3bcb7d01b0f0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_rosalind, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  8 05:51:48 np0005475493 systemd[1]: var-lib-containers-storage-overlay-03755b4cb9422064ecdd9bd2028955c9906f88f2f9290616f773c41624281c16-merged.mount: Deactivated successfully.
Oct  8 05:51:48 np0005475493 podman[127261]: 2025-10-08 09:51:48.39078887 +0000 UTC m=+0.184719526 container remove d3c3ecaaf3b5dcbf2a25c07d0f3c6d81742adf8a1ba9417ba93e3bcb7d01b0f0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_rosalind, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  8 05:51:48 np0005475493 systemd[1]: libpod-conmon-d3c3ecaaf3b5dcbf2a25c07d0f3c6d81742adf8a1ba9417ba93e3bcb7d01b0f0.scope: Deactivated successfully.
Oct  8 05:51:48 np0005475493 python3.9[127303]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  8 05:51:48 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 05:51:48 np0005475493 podman[127333]: 2025-10-08 09:51:48.569630631 +0000 UTC m=+0.044673725 container create 7d34c05a92d368efdcc8eadc3b82af4f3602aa28652d8b3df3bfd339f3b0134e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_sammet, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  8 05:51:48 np0005475493 systemd[1]: Started libpod-conmon-7d34c05a92d368efdcc8eadc3b82af4f3602aa28652d8b3df3bfd339f3b0134e.scope.
Oct  8 05:51:48 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:51:48 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/400b23886f773e2595d0f2d668bde66a114d3ed02de88a8b256df0c4fb920bc0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  8 05:51:48 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/400b23886f773e2595d0f2d668bde66a114d3ed02de88a8b256df0c4fb920bc0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 05:51:48 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/400b23886f773e2595d0f2d668bde66a114d3ed02de88a8b256df0c4fb920bc0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 05:51:48 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/400b23886f773e2595d0f2d668bde66a114d3ed02de88a8b256df0c4fb920bc0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  8 05:51:48 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/400b23886f773e2595d0f2d668bde66a114d3ed02de88a8b256df0c4fb920bc0/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  8 05:51:48 np0005475493 podman[127333]: 2025-10-08 09:51:48.630365 +0000 UTC m=+0.105408124 container init 7d34c05a92d368efdcc8eadc3b82af4f3602aa28652d8b3df3bfd339f3b0134e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_sammet, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct  8 05:51:48 np0005475493 podman[127333]: 2025-10-08 09:51:48.640958473 +0000 UTC m=+0.116001567 container start 7d34c05a92d368efdcc8eadc3b82af4f3602aa28652d8b3df3bfd339f3b0134e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_sammet, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Oct  8 05:51:48 np0005475493 podman[127333]: 2025-10-08 09:51:48.643782002 +0000 UTC m=+0.118825096 container attach 7d34c05a92d368efdcc8eadc3b82af4f3602aa28652d8b3df3bfd339f3b0134e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_sammet, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Oct  8 05:51:48 np0005475493 podman[127333]: 2025-10-08 09:51:48.553968989 +0000 UTC m=+0.029012113 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 05:51:48 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:51:48.844Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 05:51:48 np0005475493 python3.9[127428]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.naax4c3o recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:51:48 np0005475493 heuristic_sammet[127384]: --> passed data devices: 0 physical, 1 LVM
Oct  8 05:51:48 np0005475493 heuristic_sammet[127384]: --> All data devices are unavailable
Oct  8 05:51:48 np0005475493 systemd[1]: libpod-7d34c05a92d368efdcc8eadc3b82af4f3602aa28652d8b3df3bfd339f3b0134e.scope: Deactivated successfully.
Oct  8 05:51:48 np0005475493 podman[127333]: 2025-10-08 09:51:48.986500933 +0000 UTC m=+0.461544027 container died 7d34c05a92d368efdcc8eadc3b82af4f3602aa28652d8b3df3bfd339f3b0134e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_sammet, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  8 05:51:49 np0005475493 systemd[1]: var-lib-containers-storage-overlay-400b23886f773e2595d0f2d668bde66a114d3ed02de88a8b256df0c4fb920bc0-merged.mount: Deactivated successfully.
Oct  8 05:51:49 np0005475493 podman[127333]: 2025-10-08 09:51:49.034823812 +0000 UTC m=+0.509866906 container remove 7d34c05a92d368efdcc8eadc3b82af4f3602aa28652d8b3df3bfd339f3b0134e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_sammet, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  8 05:51:49 np0005475493 systemd[1]: libpod-conmon-7d34c05a92d368efdcc8eadc3b82af4f3602aa28652d8b3df3bfd339f3b0134e.scope: Deactivated successfully.
Oct  8 05:51:49 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:51:49 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9eec0008d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:51:49 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:51:49 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 05:51:49 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:51:49.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 05:51:49 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:51:49 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:51:49 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:51:49.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:51:49 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:51:49 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ed8003820 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:51:49 np0005475493 python3.9[127652]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  8 05:51:49 np0005475493 podman[127692]: 2025-10-08 09:51:49.564163689 +0000 UTC m=+0.035750465 container create 182a53544e618cc294c90669b19634aec1c334e8125c9b82fcc06564d5009dcd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_herschel, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  8 05:51:49 np0005475493 systemd[1]: Started libpod-conmon-182a53544e618cc294c90669b19634aec1c334e8125c9b82fcc06564d5009dcd.scope.
Oct  8 05:51:49 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:51:49 np0005475493 podman[127692]: 2025-10-08 09:51:49.546951308 +0000 UTC m=+0.018538094 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 05:51:49 np0005475493 podman[127692]: 2025-10-08 09:51:49.644853164 +0000 UTC m=+0.116439940 container init 182a53544e618cc294c90669b19634aec1c334e8125c9b82fcc06564d5009dcd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_herschel, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  8 05:51:49 np0005475493 podman[127692]: 2025-10-08 09:51:49.653168806 +0000 UTC m=+0.124755582 container start 182a53544e618cc294c90669b19634aec1c334e8125c9b82fcc06564d5009dcd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_herschel, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  8 05:51:49 np0005475493 podman[127692]: 2025-10-08 09:51:49.657129431 +0000 UTC m=+0.128716227 container attach 182a53544e618cc294c90669b19634aec1c334e8125c9b82fcc06564d5009dcd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_herschel, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2)
Oct  8 05:51:49 np0005475493 tender_herschel[127710]: 167 167
Oct  8 05:51:49 np0005475493 systemd[1]: libpod-182a53544e618cc294c90669b19634aec1c334e8125c9b82fcc06564d5009dcd.scope: Deactivated successfully.
Oct  8 05:51:49 np0005475493 conmon[127710]: conmon 182a53544e618cc294c9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-182a53544e618cc294c90669b19634aec1c334e8125c9b82fcc06564d5009dcd.scope/container/memory.events
Oct  8 05:51:49 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v203: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 363 B/s rd, 0 op/s
Oct  8 05:51:49 np0005475493 podman[127738]: 2025-10-08 09:51:49.698862902 +0000 UTC m=+0.026104482 container died 182a53544e618cc294c90669b19634aec1c334e8125c9b82fcc06564d5009dcd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_herschel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  8 05:51:49 np0005475493 systemd[1]: var-lib-containers-storage-overlay-50322bdba410914bfe547c787c2ba64bfecdb58e78265ba3fb06aef26f6af948-merged.mount: Deactivated successfully.
Oct  8 05:51:49 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:51:49 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ef80034e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:51:49 np0005475493 podman[127738]: 2025-10-08 09:51:49.731136326 +0000 UTC m=+0.058377906 container remove 182a53544e618cc294c90669b19634aec1c334e8125c9b82fcc06564d5009dcd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_herschel, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Oct  8 05:51:49 np0005475493 systemd[1]: libpod-conmon-182a53544e618cc294c90669b19634aec1c334e8125c9b82fcc06564d5009dcd.scope: Deactivated successfully.
Oct  8 05:51:49 np0005475493 podman[127813]: 2025-10-08 09:51:49.89478315 +0000 UTC m=+0.051036105 container create 145b4685da2b31a7b45acbcf4e7eb951a1ac78af9c2de7b22ddecdda5d301070 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_hellman, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  8 05:51:49 np0005475493 systemd[1]: Started libpod-conmon-145b4685da2b31a7b45acbcf4e7eb951a1ac78af9c2de7b22ddecdda5d301070.scope.
Oct  8 05:51:49 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:51:49 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/883815bbc65ff700f3c0ed79ffd724df4b937d6808b3691ccd9fb2dc0551037f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  8 05:51:49 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/883815bbc65ff700f3c0ed79ffd724df4b937d6808b3691ccd9fb2dc0551037f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 05:51:49 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/883815bbc65ff700f3c0ed79ffd724df4b937d6808b3691ccd9fb2dc0551037f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 05:51:49 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/883815bbc65ff700f3c0ed79ffd724df4b937d6808b3691ccd9fb2dc0551037f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  8 05:51:49 np0005475493 podman[127813]: 2025-10-08 09:51:49.87822451 +0000 UTC m=+0.034477495 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 05:51:49 np0005475493 podman[127813]: 2025-10-08 09:51:49.981779404 +0000 UTC m=+0.138032379 container init 145b4685da2b31a7b45acbcf4e7eb951a1ac78af9c2de7b22ddecdda5d301070 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_hellman, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct  8 05:51:49 np0005475493 podman[127813]: 2025-10-08 09:51:49.989092144 +0000 UTC m=+0.145345099 container start 145b4685da2b31a7b45acbcf4e7eb951a1ac78af9c2de7b22ddecdda5d301070 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_hellman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  8 05:51:49 np0005475493 podman[127813]: 2025-10-08 09:51:49.993157752 +0000 UTC m=+0.149410697 container attach 145b4685da2b31a7b45acbcf4e7eb951a1ac78af9c2de7b22ddecdda5d301070 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_hellman, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  8 05:51:50 np0005475493 python3.9[127808]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:51:50 np0005475493 admiring_hellman[127831]: {
Oct  8 05:51:50 np0005475493 admiring_hellman[127831]:    "1": [
Oct  8 05:51:50 np0005475493 admiring_hellman[127831]:        {
Oct  8 05:51:50 np0005475493 admiring_hellman[127831]:            "devices": [
Oct  8 05:51:50 np0005475493 admiring_hellman[127831]:                "/dev/loop3"
Oct  8 05:51:50 np0005475493 admiring_hellman[127831]:            ],
Oct  8 05:51:50 np0005475493 admiring_hellman[127831]:            "lv_name": "ceph_lv0",
Oct  8 05:51:50 np0005475493 admiring_hellman[127831]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  8 05:51:50 np0005475493 admiring_hellman[127831]:            "lv_size": "21470642176",
Oct  8 05:51:50 np0005475493 admiring_hellman[127831]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=787292cc-8154-50c4-9e00-e9be3e817149,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=85fe3e7b-5e0f-4a19-934c-310215b2e933,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct  8 05:51:50 np0005475493 admiring_hellman[127831]:            "lv_uuid": "znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj",
Oct  8 05:51:50 np0005475493 admiring_hellman[127831]:            "name": "ceph_lv0",
Oct  8 05:51:50 np0005475493 admiring_hellman[127831]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  8 05:51:50 np0005475493 admiring_hellman[127831]:            "tags": {
Oct  8 05:51:50 np0005475493 admiring_hellman[127831]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  8 05:51:50 np0005475493 admiring_hellman[127831]:                "ceph.block_uuid": "znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj",
Oct  8 05:51:50 np0005475493 admiring_hellman[127831]:                "ceph.cephx_lockbox_secret": "",
Oct  8 05:51:50 np0005475493 admiring_hellman[127831]:                "ceph.cluster_fsid": "787292cc-8154-50c4-9e00-e9be3e817149",
Oct  8 05:51:50 np0005475493 admiring_hellman[127831]:                "ceph.cluster_name": "ceph",
Oct  8 05:51:50 np0005475493 admiring_hellman[127831]:                "ceph.crush_device_class": "",
Oct  8 05:51:50 np0005475493 admiring_hellman[127831]:                "ceph.encrypted": "0",
Oct  8 05:51:50 np0005475493 admiring_hellman[127831]:                "ceph.osd_fsid": "85fe3e7b-5e0f-4a19-934c-310215b2e933",
Oct  8 05:51:50 np0005475493 admiring_hellman[127831]:                "ceph.osd_id": "1",
Oct  8 05:51:50 np0005475493 admiring_hellman[127831]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  8 05:51:50 np0005475493 admiring_hellman[127831]:                "ceph.type": "block",
Oct  8 05:51:50 np0005475493 admiring_hellman[127831]:                "ceph.vdo": "0",
Oct  8 05:51:50 np0005475493 admiring_hellman[127831]:                "ceph.with_tpm": "0"
Oct  8 05:51:50 np0005475493 admiring_hellman[127831]:            },
Oct  8 05:51:50 np0005475493 admiring_hellman[127831]:            "type": "block",
Oct  8 05:51:50 np0005475493 admiring_hellman[127831]:            "vg_name": "ceph_vg0"
Oct  8 05:51:50 np0005475493 admiring_hellman[127831]:        }
Oct  8 05:51:50 np0005475493 admiring_hellman[127831]:    ]
Oct  8 05:51:50 np0005475493 admiring_hellman[127831]: }
Oct  8 05:51:50 np0005475493 systemd[1]: libpod-145b4685da2b31a7b45acbcf4e7eb951a1ac78af9c2de7b22ddecdda5d301070.scope: Deactivated successfully.
Oct  8 05:51:50 np0005475493 podman[127813]: 2025-10-08 09:51:50.307437819 +0000 UTC m=+0.463690774 container died 145b4685da2b31a7b45acbcf4e7eb951a1ac78af9c2de7b22ddecdda5d301070 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_hellman, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  8 05:51:50 np0005475493 systemd[1]: var-lib-containers-storage-overlay-883815bbc65ff700f3c0ed79ffd724df4b937d6808b3691ccd9fb2dc0551037f-merged.mount: Deactivated successfully.
Oct  8 05:51:50 np0005475493 podman[127813]: 2025-10-08 09:51:50.349454339 +0000 UTC m=+0.505707314 container remove 145b4685da2b31a7b45acbcf4e7eb951a1ac78af9c2de7b22ddecdda5d301070 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_hellman, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  8 05:51:50 np0005475493 systemd[1]: libpod-conmon-145b4685da2b31a7b45acbcf4e7eb951a1ac78af9c2de7b22ddecdda5d301070.scope: Deactivated successfully.
Oct  8 05:51:50 np0005475493 podman[128091]: 2025-10-08 09:51:50.901705416 +0000 UTC m=+0.055939929 container create 384214a2fe712386aaa093fa93310b0e6074b306d3386bed1f409bc15ef1976f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_ramanujan, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  8 05:51:50 np0005475493 systemd[1]: Started libpod-conmon-384214a2fe712386aaa093fa93310b0e6074b306d3386bed1f409bc15ef1976f.scope.
Oct  8 05:51:50 np0005475493 python3.9[128076]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  8 05:51:50 np0005475493 podman[128091]: 2025-10-08 09:51:50.868856964 +0000 UTC m=+0.023091507 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 05:51:50 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:51:51 np0005475493 podman[128091]: 2025-10-08 09:51:51.058311628 +0000 UTC m=+0.212546171 container init 384214a2fe712386aaa093fa93310b0e6074b306d3386bed1f409bc15ef1976f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_ramanujan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct  8 05:51:51 np0005475493 podman[128091]: 2025-10-08 09:51:51.064581756 +0000 UTC m=+0.218816269 container start 384214a2fe712386aaa093fa93310b0e6074b306d3386bed1f409bc15ef1976f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_ramanujan, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  8 05:51:51 np0005475493 reverent_ramanujan[128108]: 167 167
Oct  8 05:51:51 np0005475493 systemd[1]: libpod-384214a2fe712386aaa093fa93310b0e6074b306d3386bed1f409bc15ef1976f.scope: Deactivated successfully.
Oct  8 05:51:51 np0005475493 podman[128091]: 2025-10-08 09:51:51.084295765 +0000 UTC m=+0.238530278 container attach 384214a2fe712386aaa093fa93310b0e6074b306d3386bed1f409bc15ef1976f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_ramanujan, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1)
Oct  8 05:51:51 np0005475493 podman[128091]: 2025-10-08 09:51:51.084610235 +0000 UTC m=+0.238844748 container died 384214a2fe712386aaa093fa93310b0e6074b306d3386bed1f409bc15ef1976f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_ramanujan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True)
Oct  8 05:51:51 np0005475493 systemd[1]: var-lib-containers-storage-overlay-d4878b8c21afde0760d06941b7e1a5217c806e776c439ba57b20fba33abb9cdf-merged.mount: Deactivated successfully.
Oct  8 05:51:51 np0005475493 podman[128091]: 2025-10-08 09:51:51.158430985 +0000 UTC m=+0.312665498 container remove 384214a2fe712386aaa093fa93310b0e6074b306d3386bed1f409bc15ef1976f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_ramanujan, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct  8 05:51:51 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:51:51 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ed4003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:51:51 np0005475493 systemd[1]: libpod-conmon-384214a2fe712386aaa093fa93310b0e6074b306d3386bed1f409bc15ef1976f.scope: Deactivated successfully.
Oct  8 05:51:51 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:51:51 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:51:51 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:51:51.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:51:51 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:51:51 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 05:51:51 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:51:51.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 05:51:51 np0005475493 podman[128212]: 2025-10-08 09:51:51.359796603 +0000 UTC m=+0.054290006 container create bf2fa72de15fb466db811f25640897d5209f9e469f8632e821e5ac7b2bb17d17 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_yalow, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Oct  8 05:51:51 np0005475493 systemd[1]: Started libpod-conmon-bf2fa72de15fb466db811f25640897d5209f9e469f8632e821e5ac7b2bb17d17.scope.
Oct  8 05:51:51 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:51:51 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9eec002730 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:51:51 np0005475493 podman[128212]: 2025-10-08 09:51:51.335096807 +0000 UTC m=+0.029590280 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 05:51:51 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:51:51 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2953a4a5938efd4a88f0edace792acd8824adb9f521f634c586fe5be3d2a6eeb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  8 05:51:51 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2953a4a5938efd4a88f0edace792acd8824adb9f521f634c586fe5be3d2a6eeb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 05:51:51 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2953a4a5938efd4a88f0edace792acd8824adb9f521f634c586fe5be3d2a6eeb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 05:51:51 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2953a4a5938efd4a88f0edace792acd8824adb9f521f634c586fe5be3d2a6eeb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  8 05:51:51 np0005475493 podman[128212]: 2025-10-08 09:51:51.456870885 +0000 UTC m=+0.151364278 container init bf2fa72de15fb466db811f25640897d5209f9e469f8632e821e5ac7b2bb17d17 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_yalow, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct  8 05:51:51 np0005475493 podman[128212]: 2025-10-08 09:51:51.464028189 +0000 UTC m=+0.158521562 container start bf2fa72de15fb466db811f25640897d5209f9e469f8632e821e5ac7b2bb17d17 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_yalow, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  8 05:51:51 np0005475493 podman[128212]: 2025-10-08 09:51:51.469691957 +0000 UTC m=+0.164185350 container attach bf2fa72de15fb466db811f25640897d5209f9e469f8632e821e5ac7b2bb17d17 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_yalow, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct  8 05:51:51 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v204: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 272 B/s rd, 0 op/s
Oct  8 05:51:51 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:51:51 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9eec002730 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:51:51 np0005475493 python3[128310]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Oct  8 05:51:52 np0005475493 lvm[128427]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct  8 05:51:52 np0005475493 lvm[128427]: VG ceph_vg0 finished
Oct  8 05:51:52 np0005475493 gallant_yalow[128229]: {}
Oct  8 05:51:52 np0005475493 systemd[1]: libpod-bf2fa72de15fb466db811f25640897d5209f9e469f8632e821e5ac7b2bb17d17.scope: Deactivated successfully.
Oct  8 05:51:52 np0005475493 podman[128212]: 2025-10-08 09:51:52.141221873 +0000 UTC m=+0.835715246 container died bf2fa72de15fb466db811f25640897d5209f9e469f8632e821e5ac7b2bb17d17 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_yalow, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct  8 05:51:52 np0005475493 systemd[1]: libpod-bf2fa72de15fb466db811f25640897d5209f9e469f8632e821e5ac7b2bb17d17.scope: Consumed 1.082s CPU time.
Oct  8 05:51:52 np0005475493 systemd[1]: var-lib-containers-storage-overlay-2953a4a5938efd4a88f0edace792acd8824adb9f521f634c586fe5be3d2a6eeb-merged.mount: Deactivated successfully.
Oct  8 05:51:52 np0005475493 podman[128212]: 2025-10-08 09:51:52.18214558 +0000 UTC m=+0.876638953 container remove bf2fa72de15fb466db811f25640897d5209f9e469f8632e821e5ac7b2bb17d17 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_yalow, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Oct  8 05:51:52 np0005475493 systemd[1]: libpod-conmon-bf2fa72de15fb466db811f25640897d5209f9e469f8632e821e5ac7b2bb17d17.scope: Deactivated successfully.
Oct  8 05:51:52 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  8 05:51:52 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:51:52 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  8 05:51:52 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:51:52 np0005475493 python3.9[128570]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  8 05:51:52 np0005475493 python3.9[128648]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:51:53 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:51:53 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:51:53 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:51:53 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ed8003820 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:51:53 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:51:53 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:51:53 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:51:53.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:51:53 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:51:53 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 05:51:53 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:51:53.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 05:51:53 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:51:53 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ed4003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:51:53 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 05:51:53 np0005475493 python3.9[128801]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  8 05:51:53 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v205: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 454 B/s rd, 0 op/s
Oct  8 05:51:53 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:51:53 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ef80034e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:51:54 np0005475493 python3.9[128880]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:51:54 np0005475493 python3.9[129032]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  8 05:51:55 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:51:55 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ef80034e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:51:55 np0005475493 python3.9[129111]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:51:55 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:51:55 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:51:55 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:51:55.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:51:55 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:51:55 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:51:55 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:51:55.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:51:55 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:51:55 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ed8003820 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:51:55 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v206: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 272 B/s rd, 0 op/s
Oct  8 05:51:55 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:09:51:55] "GET /metrics HTTP/1.1" 200 48336 "" "Prometheus/2.51.0"
Oct  8 05:51:55 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:51:55 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ed4003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:51:55 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:09:51:55] "GET /metrics HTTP/1.1" 200 48336 "" "Prometheus/2.51.0"
Oct  8 05:51:55 np0005475493 python3.9[129263]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  8 05:51:56 np0005475493 python3.9[129342]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:51:56 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:51:56.971Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 05:51:57 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:51:57 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ed8003820 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:51:57 np0005475493 python3.9[129494]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  8 05:51:57 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:51:57 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 05:51:57 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:51:57.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 05:51:57 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:51:57 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 05:51:57 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:51:57.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 05:51:57 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:51:57 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ed8003820 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:51:57 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v207: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 272 B/s rd, 0 op/s
Oct  8 05:51:57 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:51:57 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9eec002eb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:51:57 np0005475493 python3.9[129573]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-rules.nft _original_basename=ruleset.j2 recurse=False state=file path=/etc/nftables/edpm-rules.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:51:58 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 05:51:58 np0005475493 python3.9[129728]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  8 05:51:58 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:51:58.846Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 05:51:59 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:51:59 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9eec002eb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:51:59 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:51:59 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:51:59 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:51:59.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:51:59 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:51:59 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:51:59 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:51:59.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:51:59 np0005475493 python3.9[129884]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:51:59 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:51:59 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ecc000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:51:59 np0005475493 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  8 05:51:59 np0005475493 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  8 05:51:59 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v208: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Oct  8 05:51:59 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:51:59 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ee0003e20 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:52:00 np0005475493 python3.9[130038]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:52:01 np0005475493 python3.9[130190]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:52:01 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:52:01 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ef80041f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:52:01 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:52:01 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:52:01 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:52:01.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:52:01 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:52:01 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:52:01 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:52:01.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:52:01 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:52:01 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9eec002eb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:52:01 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v209: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct  8 05:52:01 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:52:01 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ecc0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:52:01 np0005475493 python3.9[130368]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Oct  8 05:52:02 np0005475493 python3.9[130521]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Oct  8 05:52:02 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  8 05:52:02 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  8 05:52:03 np0005475493 systemd-logind[798]: Session 45 logged out. Waiting for processes to exit.
Oct  8 05:52:03 np0005475493 systemd[1]: session-45.scope: Deactivated successfully.
Oct  8 05:52:03 np0005475493 systemd[1]: session-45.scope: Consumed 29.215s CPU time.
Oct  8 05:52:03 np0005475493 systemd-logind[798]: Removed session 45.
Oct  8 05:52:03 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:52:03 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ee0003e20 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:52:03 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:52:03 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:52:03 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:52:03.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:52:03 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:52:03 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:52:03 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:52:03.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:52:03 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:52:03 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ef80041f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:52:03 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 05:52:03 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v210: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Oct  8 05:52:03 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:52:03 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9eec003fb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:52:05 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:52:05 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ecc0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:52:05 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:52:05 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:52:05 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:52:05.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:52:05 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:52:05 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:52:05 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:52:05.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:52:05 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:52:05 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ee0003e20 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:52:05 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v211: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct  8 05:52:05 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:09:52:05] "GET /metrics HTTP/1.1" 200 48335 "" "Prometheus/2.51.0"
Oct  8 05:52:05 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:09:52:05] "GET /metrics HTTP/1.1" 200 48335 "" "Prometheus/2.51.0"
Oct  8 05:52:05 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:52:05 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ef80041f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:52:06 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:52:06.972Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct  8 05:52:06 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:52:06.972Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct  8 05:52:07 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:52:07 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ef80041f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:52:07 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:52:07 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:52:07 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:52:07.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:52:07 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:52:07 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:52:07 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:52:07.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:52:07 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:52:07 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ecc0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:52:07 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v212: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct  8 05:52:07 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:52:07 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ee0003e20 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:52:08 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 05:52:08 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:52:08.847Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct  8 05:52:08 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:52:08.848Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct  8 05:52:09 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:52:09 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ee0003e20 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:52:09 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:52:09 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:52:09 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:52:09.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:52:09 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:52:09 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:52:09 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:52:09.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:52:09 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:52:09 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ee0003e20 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:52:09 np0005475493 systemd-logind[798]: New session 46 of user zuul.
Oct  8 05:52:09 np0005475493 systemd[1]: Started Session 46 of User zuul.
Oct  8 05:52:09 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v213: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Oct  8 05:52:09 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:52:09 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9f04001320 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:52:10 np0005475493 python3.9[130709]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Oct  8 05:52:11 np0005475493 python3.9[130861]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  8 05:52:11 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:52:11 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9eec003fb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:52:11 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:52:11 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:52:11 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:52:11.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:52:11 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:52:11 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:52:11 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:52:11.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:52:11 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:52:11 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ecc002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:52:11 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v214: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct  8 05:52:11 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:52:11 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ee0003e20 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:52:11 np0005475493 python3.9[131016]: ansible-ansible.builtin.slurp Invoked with src=/etc/ssh/ssh_known_hosts
Oct  8 05:52:12 np0005475493 python3.9[131169]: ansible-ansible.legacy.stat Invoked with path=/tmp/ansible.41qg5f3a follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  8 05:52:13 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:52:13 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9f04001320 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:52:13 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:52:13 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:52:13 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:52:13.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:52:13 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:52:13 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 05:52:13 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:52:13.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 05:52:13 np0005475493 python3.9[131295]: ansible-ansible.legacy.copy Invoked with dest=/tmp/ansible.41qg5f3a mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759917132.1152978-102-20639645602985/.source.41qg5f3a _original_basename=.q76e5zuf follow=False checksum=645509817f1020adcb4b475a04ffc8472d1fc5c9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:52:13 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:52:13 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9eec003fb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:52:13 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 05:52:13 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v215: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Oct  8 05:52:13 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:52:13 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ecc002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:52:14 np0005475493 python3.9[131448]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  8 05:52:15 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:52:15 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ee0003e20 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:52:15 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:52:15 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 05:52:15 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:52:15.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 05:52:15 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:52:15 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 05:52:15 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:52:15.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 05:52:15 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:52:15 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9f04001320 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:52:15 np0005475493 python3.9[131601]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCH7J4/vrAjqY7b3+xDoxlOrkvqhtdMtNCRu8feksOJjh2Lg2Yk5a4TpRFHHcUew6Or+BSrCAe5KLIJookdMX3AnHBTeYgFVrph2Ke0jsZhtIDdYFPya4HaYgVScxezyYjpFJsOgHIasA47X1Ai7KtSHamdGUMHvyRPFaMroDQGOH5uNA58Pr0jAvA9/p32JhzVhvFTNhdp5AZuuf53LCOoAJPpvxAfhZJVwv0zpQu1qJ2MQ4F6PjmLmpJe9IFedhTbswP4+A8raCmSvJK/X3zbL6A5C78i72YF0dVlX4E5Jgq2BymgfJXA2vRrB7WzfFXN/KCT+A6KjshRy8vEZTlewfHk3bMt+IjAgRaPsvV2gwOQb0lhzfUX2RkPxHTTunUAUf1PJwBTKah0plZAQoGQce+8MWTqKP842KIoZPO7/LQQZR21apoIRIEt1OtR3pITkULZqmoYaZKqVzPCyoagXj2v0W4E//8slRvaC4n2qfMRwvp2VR0mSv9qwMeqnm0=#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMt6YRNNCvMAUwHQzPKNq18k03sF+qAP+8fg1vdKmMsQ#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBN1LMOBquYaNyOmBNhqWyrm3Ot0C+prylWlOCYwa7IIp3WZH4GHwVhjD6VAwSa/KvI01xKiiJwO/WJ4zgAnMAiM=#012compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCYQPNjF86l7L2Hj2/ras4UwWV1W/v43YSKx2wyuHDdieMiPaKbrXfDkjmyzUBERrbiTo1QPGQAMAmA2ykBglPN8r/+0SzTmZFPysM5MwJdoYFoZLOFzs9ldQJxEusbWvZnvF+I9UgftR9Kc0etIrQ6xgLbAtGZNGqj5b2kDFCC3J7RJB10JjuqkZ7faqGp+JLC/txEe9rDOAOpOpa885Sx+ZK+5P8OmEbpqHH3vL1O9we9lyRIs2Y/RpIrncEKyaA84WKimjvp832GDFqVGlFklY8lsH31+AUKXfk65cwhnczZO7DTB1/+0QUWhiy+uUUKLdJ1C3AFfHNBBH0WWHolNsPiYjSaNrUIgxXyRLkGtLeTAtEa9LNniw8KKCXI/jptXVVqyfHGOFIzo11NDDSTeCPpVG2MrjX9vJZknGeShJLavvHzVmc1N/zNpgq0Rr0FEyFZL384e8WgnmTY1lBf7tAPdMyIaNEJgEE4MobwqVDSwMmgWKmKoOeY5jsWNlM=#012compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHzclsFPuApUw4nYRrZrI5lJm2aKty4lBzS+387uCINA#012compute-2.ctlplane.example.com,192.168.122.102,compute-2* ecdsa-sha2-nistp256 
AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCmuS8ms5fq9IWCpSG062zv6KqUIHSk9g+RlcFiU/nKSB1OMQ56HhCeuGAOEbiyfVsMqC143W9W+Q6X1JDoRkcg=#012compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCp3Vp6dX4ruCK781x4GIhtAtcJdT75tsPxH3O/YwMPa1JuQj17BT+IZbu0qvi56CLtWm5GwO9cF5N1u+ZpYWIwNbEJlz4q4LeJud7OFwwvwDTdM2fZylZt2dEtwqbmDJUsJxwcLQshtmSxpRR5Z53dCJAMTZiKGF/MiJrVkc7A2PfxMnLH568W9poUGj9jUYetHoRmwKl9hes+OQRljbjUi8gLpseivGxW9IAewXRhJi0ybLNDnQM0iSkdQqaTVD7laQKxpynfO1a0b7U6oyFRdyTqMJqyDKe8Vx+D1esV9oZKn7UEtj+WGUAv3StaLzrk3fjhi4XePCs0Ao1s/B1MPZCcM0Po5BdHAHhf4CbUSRS+oaAS7KaaWkWTKLTKEDWS6DjX6KUR9hUyLQ54IMYu17UP6JclJnH5c9FmUQls07pus/CkhX0IIgOTinLYeOJSdBsKA9JUrnQzXKMAwzjKL18kG8OZ+Yaf7msme1EVikR9ljtRB88k+DtapF5wub8=#012compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBDnMNJEcPeKIHMEAdXUabsWNwdNGhiYyZLatE1eeBqY#012compute-1.ctlplane.example.com,192.168.122.101,compute-1* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLDW7MDD+6+vPlFKWCI8yHUVjDpLwcAatqV8Xhxm53MJMkyP9vCai5lIMwJluZIDUkA83WhSi06EgMc1afHFONA=#012 create=True mode=0644 path=/tmp/ansible.41qg5f3a state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:52:15 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v216: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct  8 05:52:15 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:09:52:15] "GET /metrics HTTP/1.1" 200 48335 "" "Prometheus/2.51.0"
Oct  8 05:52:15 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:09:52:15] "GET /metrics HTTP/1.1" 200 48335 "" "Prometheus/2.51.0"
Oct  8 05:52:15 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:52:15 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9eec003fb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:52:15 np0005475493 systemd[1]: systemd-timedated.service: Deactivated successfully.
Oct  8 05:52:16 np0005475493 python3.9[131756]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.41qg5f3a' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  8 05:52:16 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:52:16.973Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 05:52:17 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:52:17 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ecc003820 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:52:17 np0005475493 python3.9[131911]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.41qg5f3a state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:52:17 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:52:17 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 05:52:17 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:52:17.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 05:52:17 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:52:17 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:52:17 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:52:17.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:52:17 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:52:17 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ee0003e20 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:52:17 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v217: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct  8 05:52:17 np0005475493 systemd[1]: session-46.scope: Deactivated successfully.
Oct  8 05:52:17 np0005475493 systemd[1]: session-46.scope: Consumed 4.976s CPU time.
Oct  8 05:52:17 np0005475493 systemd-logind[798]: Session 46 logged out. Waiting for processes to exit.
Oct  8 05:52:17 np0005475493 systemd-logind[798]: Removed session 46.
Oct  8 05:52:17 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:52:17 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9f04009190 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:52:17 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  8 05:52:17 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  8 05:52:17 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 05:52:17 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 05:52:18 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 05:52:18 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 05:52:18 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 05:52:18 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 05:52:18 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 05:52:18 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:52:18.848Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct  8 05:52:18 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:52:18.849Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 05:52:19 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:52:19 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9eec003fb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:52:19 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:52:19 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:52:19 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:52:19.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:52:19 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:52:19 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 05:52:19 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:52:19.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 05:52:19 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:52:19 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ecc003820 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:52:19 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v218: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Oct  8 05:52:19 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:52:19 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ee0003e20 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:52:21 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:52:21 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9f04009190 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:52:21 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:52:21 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:52:21 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:52:21.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:52:21 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:52:21 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:52:21 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:52:21.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:52:21 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:52:21 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9eec003fb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:52:21 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v219: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct  8 05:52:21 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:52:21 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ecc003820 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:52:22 np0005475493 systemd-logind[798]: New session 47 of user zuul.
Oct  8 05:52:22 np0005475493 systemd[1]: Started Session 47 of User zuul.
Oct  8 05:52:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:52:23 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ee0003e20 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:52:23 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:52:23 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:52:23 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:52:23.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:52:23 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:52:23 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:52:23 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:52:23.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:52:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:52:23 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9f04009190 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:52:23 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 05:52:23 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v220: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Oct  8 05:52:23 np0005475493 python3.9[132120]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  8 05:52:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:52:23 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9eec003fb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:52:24 np0005475493 systemd[1]: session-19.scope: Deactivated successfully.
Oct  8 05:52:24 np0005475493 systemd[1]: session-19.scope: Consumed 1min 32.168s CPU time.
Oct  8 05:52:24 np0005475493 systemd-logind[798]: Session 19 logged out. Waiting for processes to exit.
Oct  8 05:52:24 np0005475493 systemd-logind[798]: Removed session 19.
Oct  8 05:52:25 np0005475493 python3.9[132277]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Oct  8 05:52:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:52:25 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ecc003820 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:52:25 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:52:25 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 05:52:25 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:52:25.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 05:52:25 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:52:25 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 05:52:25 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:52:25.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 05:52:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:52:25 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ee0003e20 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:52:25 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v221: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct  8 05:52:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:09:52:25] "GET /metrics HTTP/1.1" 200 48335 "" "Prometheus/2.51.0"
Oct  8 05:52:25 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:09:52:25] "GET /metrics HTTP/1.1" 200 48335 "" "Prometheus/2.51.0"
Oct  8 05:52:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:52:25 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9f04009190 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:52:25 np0005475493 python3.9[132432]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  8 05:52:26 np0005475493 python3.9[132586]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  8 05:52:26 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:52:26.973Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 05:52:27 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:52:27 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9eec003fb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:52:27 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:52:27 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:52:27 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:52:27.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:52:27 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:52:27 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:52:27 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:52:27.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:52:27 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:52:27 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ecc003820 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:52:27 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v222: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct  8 05:52:27 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:52:27 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ee0003e40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:52:27 np0005475493 python3.9[132740]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  8 05:52:28 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 05:52:28 np0005475493 python3.9[132893]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:52:28 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:52:28.850Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 05:52:29 np0005475493 systemd[1]: session-47.scope: Deactivated successfully.
Oct  8 05:52:29 np0005475493 systemd[1]: session-47.scope: Consumed 3.959s CPU time.
Oct  8 05:52:29 np0005475493 systemd-logind[798]: Session 47 logged out. Waiting for processes to exit.
Oct  8 05:52:29 np0005475493 systemd-logind[798]: Removed session 47.
Oct  8 05:52:29 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:52:29 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9f0400a680 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:52:29 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:52:29 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:52:29 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:52:29.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:52:29 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:52:29 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:52:29 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:52:29.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:52:29 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:52:29 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9eec003fb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:52:29 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v223: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Oct  8 05:52:29 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:52:29 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ed4001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:52:31 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:52:31 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ee0003ef0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:52:31 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:52:31 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:52:31 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:52:31.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:52:31 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:52:31 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:52:31 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:52:31.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:52:31 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:52:31 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9f0400a680 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:52:31 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v224: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct  8 05:52:31 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:52:31 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9f0400a680 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:52:32 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  8 05:52:32 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  8 05:52:33 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:52:33 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9f0400a680 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:52:33 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:52:33 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:52:33 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:52:33.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:52:33 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:52:33 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:52:33 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:52:33.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:52:33 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:52:33 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ee0003f10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:52:33 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 05:52:33 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v225: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Oct  8 05:52:33 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:52:33 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ed8001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:52:34 np0005475493 systemd-logind[798]: New session 48 of user zuul.
Oct  8 05:52:34 np0005475493 systemd[1]: Started Session 48 of User zuul.
Oct  8 05:52:35 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:52:35 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ec8000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:52:35 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:52:35 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:52:35 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:52:35.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:52:35 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:52:35 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:52:35 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:52:35.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:52:35 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:52:35 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ecc003820 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:52:35 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v226: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct  8 05:52:35 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:09:52:35] "GET /metrics HTTP/1.1" 200 48334 "" "Prometheus/2.51.0"
Oct  8 05:52:35 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:09:52:35] "GET /metrics HTTP/1.1" 200 48334 "" "Prometheus/2.51.0"
Oct  8 05:52:35 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:52:35 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ee0003f30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:52:36 np0005475493 python3.9[133081]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  8 05:52:36 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:52:36.974Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 05:52:37 np0005475493 python3.9[133238]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  8 05:52:37 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:52:37 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ed8001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:52:37 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:52:37 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:52:37 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:52:37.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:52:37 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:52:37 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 05:52:37 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:52:37.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 05:52:37 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:52:37 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ec80016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:52:37 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v227: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct  8 05:52:37 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:52:37 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ecc003820 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:52:37 np0005475493 python3.9[133323]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Oct  8 05:52:38 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 05:52:38 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:52:38.851Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 05:52:39 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:52:39 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ee0003f30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:52:39 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:52:39 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:52:39 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:52:39.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:52:39 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:52:39 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:52:39 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:52:39.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:52:39 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:52:39 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ed8001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:52:39 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v228: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Oct  8 05:52:39 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:52:39 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ec80016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:52:40 np0005475493 python3.9[133477]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  8 05:52:41 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:52:41 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ecc003820 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:52:41 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:52:41 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:52:41 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:52:41.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:52:41 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:52:41 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 05:52:41 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:52:41.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 05:52:41 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:52:41 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ee0003f30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:52:41 np0005475493 python3.9[133654]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct  8 05:52:41 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v229: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct  8 05:52:41 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:52:41 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ed8001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:52:42 np0005475493 python3.9[133805]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  8 05:52:42 np0005475493 python3.9[133955]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  8 05:52:43 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:52:43 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ec80016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:52:43 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:52:43 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:52:43 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:52:43.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:52:43 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:52:43 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ecc003820 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:52:43 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:52:43 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:52:43 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:52:43.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:52:43 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 05:52:43 np0005475493 ceph-mon[73572]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #27. Immutable memtables: 0.
Oct  8 05:52:43 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-09:52:43.533342) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  8 05:52:43 np0005475493 ceph-mon[73572]: rocksdb: [db/flush_job.cc:856] [default] [JOB 9] Flushing memtable with next log file: 27
Oct  8 05:52:43 np0005475493 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759917163533381, "job": 9, "event": "flush_started", "num_memtables": 1, "num_entries": 1982, "num_deletes": 251, "total_data_size": 3894857, "memory_usage": 3951704, "flush_reason": "Manual Compaction"}
Oct  8 05:52:43 np0005475493 ceph-mon[73572]: rocksdb: [db/flush_job.cc:885] [default] [JOB 9] Level-0 flush table #28: started
Oct  8 05:52:43 np0005475493 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759917163544932, "cf_name": "default", "job": 9, "event": "table_file_creation", "file_number": 28, "file_size": 2343400, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 10780, "largest_seqno": 12761, "table_properties": {"data_size": 2336695, "index_size": 3583, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2053, "raw_key_size": 16477, "raw_average_key_size": 20, "raw_value_size": 2322072, "raw_average_value_size": 2863, "num_data_blocks": 157, "num_entries": 811, "num_filter_entries": 811, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759916974, "oldest_key_time": 1759916974, "file_creation_time": 1759917163, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fe81d9b-468a-4413-adf1-4e4bd83383d4", "db_session_id": "KN4HYS7VUCE6V85JIQOU", "orig_file_number": 28, "seqno_to_time_mapping": "N/A"}}
Oct  8 05:52:43 np0005475493 ceph-mon[73572]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 9] Flush lasted 11638 microseconds, and 5497 cpu microseconds.
Oct  8 05:52:43 np0005475493 ceph-mon[73572]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  8 05:52:43 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-09:52:43.544982) [db/flush_job.cc:967] [default] [JOB 9] Level-0 flush table #28: 2343400 bytes OK
Oct  8 05:52:43 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-09:52:43.545001) [db/memtable_list.cc:519] [default] Level-0 commit table #28 started
Oct  8 05:52:43 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-09:52:43.546202) [db/memtable_list.cc:722] [default] Level-0 commit table #28: memtable #1 done
Oct  8 05:52:43 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-09:52:43.546221) EVENT_LOG_v1 {"time_micros": 1759917163546216, "job": 9, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  8 05:52:43 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-09:52:43.546241) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  8 05:52:43 np0005475493 ceph-mon[73572]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 9] Try to delete WAL files size 3886780, prev total WAL file size 3886780, number of live WAL files 2.
Oct  8 05:52:43 np0005475493 ceph-mon[73572]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000024.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  8 05:52:43 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-09:52:43.547522) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740030' seq:72057594037927935, type:22 .. '6D67727374617400323532' seq:0, type:0; will stop at (end)
Oct  8 05:52:43 np0005475493 ceph-mon[73572]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 10] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  8 05:52:43 np0005475493 ceph-mon[73572]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 9 Base level 0, inputs: [28(2288KB)], [26(13MB)]
Oct  8 05:52:43 np0005475493 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759917163547599, "job": 10, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [28], "files_L6": [26], "score": -1, "input_data_size": 16583344, "oldest_snapshot_seqno": -1}
Oct  8 05:52:43 np0005475493 systemd[1]: session-48.scope: Deactivated successfully.
Oct  8 05:52:43 np0005475493 systemd[1]: session-48.scope: Consumed 5.817s CPU time.
Oct  8 05:52:43 np0005475493 systemd-logind[798]: Session 48 logged out. Waiting for processes to exit.
Oct  8 05:52:43 np0005475493 systemd-logind[798]: Removed session 48.
Oct  8 05:52:43 np0005475493 ceph-mon[73572]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 10] Generated table #29: 4411 keys, 14671363 bytes, temperature: kUnknown
Oct  8 05:52:43 np0005475493 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759917163607566, "cf_name": "default", "job": 10, "event": "table_file_creation", "file_number": 29, "file_size": 14671363, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14637639, "index_size": 21582, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11077, "raw_key_size": 111216, "raw_average_key_size": 25, "raw_value_size": 14552948, "raw_average_value_size": 3299, "num_data_blocks": 924, "num_entries": 4411, "num_filter_entries": 4411, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759916581, "oldest_key_time": 0, "file_creation_time": 1759917163, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fe81d9b-468a-4413-adf1-4e4bd83383d4", "db_session_id": "KN4HYS7VUCE6V85JIQOU", "orig_file_number": 29, "seqno_to_time_mapping": "N/A"}}
Oct  8 05:52:43 np0005475493 ceph-mon[73572]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  8 05:52:43 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-09:52:43.607794) [db/compaction/compaction_job.cc:1663] [default] [JOB 10] Compacted 1@0 + 1@6 files to L6 => 14671363 bytes
Oct  8 05:52:43 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-09:52:43.608799) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 276.2 rd, 244.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.2, 13.6 +0.0 blob) out(14.0 +0.0 blob), read-write-amplify(13.3) write-amplify(6.3) OK, records in: 4843, records dropped: 432 output_compression: NoCompression
Oct  8 05:52:43 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-09:52:43.608817) EVENT_LOG_v1 {"time_micros": 1759917163608809, "job": 10, "event": "compaction_finished", "compaction_time_micros": 60042, "compaction_time_cpu_micros": 30171, "output_level": 6, "num_output_files": 1, "total_output_size": 14671363, "num_input_records": 4843, "num_output_records": 4411, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  8 05:52:43 np0005475493 ceph-mon[73572]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000028.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  8 05:52:43 np0005475493 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759917163609300, "job": 10, "event": "table_file_deletion", "file_number": 28}
Oct  8 05:52:43 np0005475493 ceph-mon[73572]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000026.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  8 05:52:43 np0005475493 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759917163611903, "job": 10, "event": "table_file_deletion", "file_number": 26}
Oct  8 05:52:43 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-09:52:43.547388) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  8 05:52:43 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-09:52:43.611989) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  8 05:52:43 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-09:52:43.611995) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  8 05:52:43 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-09:52:43.611996) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  8 05:52:43 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-09:52:43.611998) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  8 05:52:43 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-09:52:43.611999) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  8 05:52:43 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v230: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Oct  8 05:52:43 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:52:43 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ee0003f30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:52:45 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:52:45 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ed80032b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:52:45 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:52:45 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:52:45 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:52:45.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:52:45 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:52:45 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ec8002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:52:45 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:52:45 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:52:45 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:52:45.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:52:45 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v231: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct  8 05:52:45 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:09:52:45] "GET /metrics HTTP/1.1" 200 48334 "" "Prometheus/2.51.0"
Oct  8 05:52:45 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:09:52:45] "GET /metrics HTTP/1.1" 200 48334 "" "Prometheus/2.51.0"
Oct  8 05:52:45 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:52:45 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ecc003820 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:52:46 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:52:46.975Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 05:52:47 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:52:47 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ecc003820 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:52:47 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:52:47 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:52:47 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:52:47.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:52:47 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:52:47 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ed80032b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:52:47 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:52:47 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:52:47 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:52:47.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:52:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] Optimize plan auto_2025-10-08_09:52:47
Oct  8 05:52:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  8 05:52:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] do_upmap
Oct  8 05:52:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] pools ['cephfs.cephfs.data', 'cephfs.cephfs.meta', 'default.rgw.meta', 'backups', 'images', 'vms', '.mgr', 'volumes', '.rgw.root', 'default.rgw.control', '.nfs', 'default.rgw.log']
Oct  8 05:52:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] prepared 0/10 upmap changes
Oct  8 05:52:47 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v232: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct  8 05:52:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] _maybe_adjust
Oct  8 05:52:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 05:52:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  8 05:52:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 05:52:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 05:52:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 05:52:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 05:52:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 05:52:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 05:52:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 05:52:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 05:52:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 05:52:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  8 05:52:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 05:52:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 05:52:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 05:52:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct  8 05:52:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 05:52:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  8 05:52:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 05:52:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 05:52:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 05:52:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  8 05:52:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 05:52:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  8 05:52:47 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:52:47 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ec8002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:52:47 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  8 05:52:47 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  8 05:52:47 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 05:52:47 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 05:52:47 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  8 05:52:47 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  8 05:52:47 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  8 05:52:47 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  8 05:52:47 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  8 05:52:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  8 05:52:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  8 05:52:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  8 05:52:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  8 05:52:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  8 05:52:48 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 05:52:48 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 05:52:48 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 05:52:48 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 05:52:48 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 05:52:48 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:52:48.852Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 05:52:48 np0005475493 systemd-logind[798]: New session 49 of user zuul.
Oct  8 05:52:48 np0005475493 systemd[1]: Started Session 49 of User zuul.
Oct  8 05:52:49 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:52:49 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ee0003f30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:52:49 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:52:49 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:52:49 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:52:49.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:52:49 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:52:49 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ecc003820 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:52:49 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:52:49 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 05:52:49 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:52:49.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 05:52:49 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v233: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Oct  8 05:52:49 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:52:49 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ed80032b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:52:50 np0005475493 python3.9[134141]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  8 05:52:51 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:52:51 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ec8002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:52:51 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:52:51 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:52:51 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:52:51.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:52:51 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:52:51 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ee0003f30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:52:51 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:52:51 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:52:51 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:52:51.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:52:51 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v234: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct  8 05:52:51 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:52:51 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ecc004140 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:52:51 np0005475493 python3.9[134299]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  8 05:52:52 np0005475493 python3.9[134452]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  8 05:52:53 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  8 05:52:53 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  8 05:52:53 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct  8 05:52:53 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  8 05:52:53 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v235: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 267 B/s rd, 0 op/s
Oct  8 05:52:53 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct  8 05:52:53 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:52:53 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct  8 05:52:53 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:52:53 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct  8 05:52:53 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  8 05:52:53 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct  8 05:52:53 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  8 05:52:53 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  8 05:52:53 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  8 05:52:53 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:52:53 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ed80032b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:52:53 np0005475493 python3.9[134686]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  8 05:52:53 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:52:53 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:52:53 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:52:53.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:52:53 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:52:53 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ec8003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:52:53 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:52:53 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:52:53 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:52:53.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:52:53 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 05:52:53 np0005475493 podman[134849]: 2025-10-08 09:52:53.696845972 +0000 UTC m=+0.048692259 container create 865943d4a73bf043deab8d175e7ea29abb151e2db4d3408da23be58e10492ea5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_keldysh, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct  8 05:52:53 np0005475493 systemd[1]: Started libpod-conmon-865943d4a73bf043deab8d175e7ea29abb151e2db4d3408da23be58e10492ea5.scope.
Oct  8 05:52:53 np0005475493 podman[134849]: 2025-10-08 09:52:53.678094961 +0000 UTC m=+0.029941268 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 05:52:53 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:52:53 np0005475493 podman[134849]: 2025-10-08 09:52:53.789476045 +0000 UTC m=+0.141322342 container init 865943d4a73bf043deab8d175e7ea29abb151e2db4d3408da23be58e10492ea5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_keldysh, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True)
Oct  8 05:52:53 np0005475493 podman[134849]: 2025-10-08 09:52:53.796325773 +0000 UTC m=+0.148172060 container start 865943d4a73bf043deab8d175e7ea29abb151e2db4d3408da23be58e10492ea5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_keldysh, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  8 05:52:53 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:52:53 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ee00040d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:52:53 np0005475493 podman[134849]: 2025-10-08 09:52:53.799855257 +0000 UTC m=+0.151701544 container attach 865943d4a73bf043deab8d175e7ea29abb151e2db4d3408da23be58e10492ea5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_keldysh, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Oct  8 05:52:53 np0005475493 zealous_keldysh[134912]: 167 167
Oct  8 05:52:53 np0005475493 systemd[1]: libpod-865943d4a73bf043deab8d175e7ea29abb151e2db4d3408da23be58e10492ea5.scope: Deactivated successfully.
Oct  8 05:52:53 np0005475493 podman[134849]: 2025-10-08 09:52:53.803360119 +0000 UTC m=+0.155206416 container died 865943d4a73bf043deab8d175e7ea29abb151e2db4d3408da23be58e10492ea5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_keldysh, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  8 05:52:53 np0005475493 systemd[1]: var-lib-containers-storage-overlay-31ff4be75cf2cb65c051caa49347eefb8bf505f88b1ecf023c5c322fe907c03c-merged.mount: Deactivated successfully.
Oct  8 05:52:53 np0005475493 podman[134849]: 2025-10-08 09:52:53.854575887 +0000 UTC m=+0.206422204 container remove 865943d4a73bf043deab8d175e7ea29abb151e2db4d3408da23be58e10492ea5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_keldysh, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  8 05:52:53 np0005475493 systemd[1]: libpod-conmon-865943d4a73bf043deab8d175e7ea29abb151e2db4d3408da23be58e10492ea5.scope: Deactivated successfully.
Oct  8 05:52:53 np0005475493 python3.9[134921]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759917172.7414677-160-274451087208633/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=03e9ebef9d51a593a38c809f93442d2e40b72597 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:52:54 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  8 05:52:54 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:52:54 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:52:54 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  8 05:52:54 np0005475493 podman[134943]: 2025-10-08 09:52:54.067642072 +0000 UTC m=+0.049501524 container create d3e0c68d9da20e68a8ec0754135114be32524b84f3bdddd9943f65c019c76f12 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_margulis, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default)
Oct  8 05:52:54 np0005475493 systemd[1]: Started libpod-conmon-d3e0c68d9da20e68a8ec0754135114be32524b84f3bdddd9943f65c019c76f12.scope.
Oct  8 05:52:54 np0005475493 podman[134943]: 2025-10-08 09:52:54.043124768 +0000 UTC m=+0.024984240 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 05:52:54 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:52:54 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eab7240473297106459156914ebfeacb6da7fb2c0c68303f4a510df36dcbafec/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  8 05:52:54 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eab7240473297106459156914ebfeacb6da7fb2c0c68303f4a510df36dcbafec/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 05:52:54 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eab7240473297106459156914ebfeacb6da7fb2c0c68303f4a510df36dcbafec/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 05:52:54 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eab7240473297106459156914ebfeacb6da7fb2c0c68303f4a510df36dcbafec/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  8 05:52:54 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eab7240473297106459156914ebfeacb6da7fb2c0c68303f4a510df36dcbafec/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  8 05:52:54 np0005475493 podman[134943]: 2025-10-08 09:52:54.168797468 +0000 UTC m=+0.150656970 container init d3e0c68d9da20e68a8ec0754135114be32524b84f3bdddd9943f65c019c76f12 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_margulis, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  8 05:52:54 np0005475493 podman[134943]: 2025-10-08 09:52:54.18414622 +0000 UTC m=+0.166005672 container start d3e0c68d9da20e68a8ec0754135114be32524b84f3bdddd9943f65c019c76f12 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_margulis, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True)
Oct  8 05:52:54 np0005475493 podman[134943]: 2025-10-08 09:52:54.195214734 +0000 UTC m=+0.177074196 container attach d3e0c68d9da20e68a8ec0754135114be32524b84f3bdddd9943f65c019c76f12 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_margulis, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  8 05:52:54 np0005475493 boring_margulis[134982]: --> passed data devices: 0 physical, 1 LVM
Oct  8 05:52:54 np0005475493 boring_margulis[134982]: --> All data devices are unavailable
Oct  8 05:52:54 np0005475493 systemd[1]: libpod-d3e0c68d9da20e68a8ec0754135114be32524b84f3bdddd9943f65c019c76f12.scope: Deactivated successfully.
Oct  8 05:52:54 np0005475493 podman[134943]: 2025-10-08 09:52:54.532082739 +0000 UTC m=+0.513942201 container died d3e0c68d9da20e68a8ec0754135114be32524b84f3bdddd9943f65c019c76f12 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_margulis, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  8 05:52:54 np0005475493 systemd[1]: var-lib-containers-storage-overlay-eab7240473297106459156914ebfeacb6da7fb2c0c68303f4a510df36dcbafec-merged.mount: Deactivated successfully.
Oct  8 05:52:54 np0005475493 podman[134943]: 2025-10-08 09:52:54.581234152 +0000 UTC m=+0.563093604 container remove d3e0c68d9da20e68a8ec0754135114be32524b84f3bdddd9943f65c019c76f12 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_margulis, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Oct  8 05:52:54 np0005475493 systemd[1]: libpod-conmon-d3e0c68d9da20e68a8ec0754135114be32524b84f3bdddd9943f65c019c76f12.scope: Deactivated successfully.
Oct  8 05:52:54 np0005475493 python3.9[135123]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  8 05:52:55 np0005475493 podman[135357]: 2025-10-08 09:52:55.120810102 +0000 UTC m=+0.033250415 container create 720522e9ee6f0deb45e5b0700dcaf90703cb5b03e7c9c2c7e8d218b22d7e697d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_galois, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Oct  8 05:52:55 np0005475493 systemd[1]: Started libpod-conmon-720522e9ee6f0deb45e5b0700dcaf90703cb5b03e7c9c2c7e8d218b22d7e697d.scope.
Oct  8 05:52:55 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:52:55 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v236: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 267 B/s rd, 0 op/s
Oct  8 05:52:55 np0005475493 podman[135357]: 2025-10-08 09:52:55.190059467 +0000 UTC m=+0.102499800 container init 720522e9ee6f0deb45e5b0700dcaf90703cb5b03e7c9c2c7e8d218b22d7e697d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_galois, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct  8 05:52:55 np0005475493 podman[135357]: 2025-10-08 09:52:55.197079302 +0000 UTC m=+0.109519615 container start 720522e9ee6f0deb45e5b0700dcaf90703cb5b03e7c9c2c7e8d218b22d7e697d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_galois, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Oct  8 05:52:55 np0005475493 podman[135357]: 2025-10-08 09:52:55.200536573 +0000 UTC m=+0.112976886 container attach 720522e9ee6f0deb45e5b0700dcaf90703cb5b03e7c9c2c7e8d218b22d7e697d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_galois, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct  8 05:52:55 np0005475493 cranky_galois[135373]: 167 167
Oct  8 05:52:55 np0005475493 systemd[1]: libpod-720522e9ee6f0deb45e5b0700dcaf90703cb5b03e7c9c2c7e8d218b22d7e697d.scope: Deactivated successfully.
Oct  8 05:52:55 np0005475493 podman[135357]: 2025-10-08 09:52:55.201974728 +0000 UTC m=+0.114415041 container died 720522e9ee6f0deb45e5b0700dcaf90703cb5b03e7c9c2c7e8d218b22d7e697d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_galois, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  8 05:52:55 np0005475493 podman[135357]: 2025-10-08 09:52:55.107428974 +0000 UTC m=+0.019869317 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 05:52:55 np0005475493 python3.9[135344]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759917174.1658678-160-91309174977549/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=ca91fd4512d7d0461b1179af92a523d933a341ea backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:52:55 np0005475493 systemd[1]: var-lib-containers-storage-overlay-9ead4f79fa5217edf2e401a91599a952ac02ddf406fdce28b26c3081b7fd2956-merged.mount: Deactivated successfully.
Oct  8 05:52:55 np0005475493 podman[135357]: 2025-10-08 09:52:55.251329898 +0000 UTC m=+0.163770251 container remove 720522e9ee6f0deb45e5b0700dcaf90703cb5b03e7c9c2c7e8d218b22d7e697d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_galois, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  8 05:52:55 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:52:55 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ecc004140 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:52:55 np0005475493 systemd[1]: libpod-conmon-720522e9ee6f0deb45e5b0700dcaf90703cb5b03e7c9c2c7e8d218b22d7e697d.scope: Deactivated successfully.
Oct  8 05:52:55 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:52:55 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:52:55 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:52:55.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:52:55 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:52:55 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ed80032b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:52:55 np0005475493 podman[135443]: 2025-10-08 09:52:55.439242238 +0000 UTC m=+0.043439420 container create bfec76834dd8787349be3f1cf01d7fde31bc246c9efc31e3e7f0528f84502531 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_knuth, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Oct  8 05:52:55 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:52:55 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:52:55 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:52:55.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:52:55 np0005475493 systemd[1]: Started libpod-conmon-bfec76834dd8787349be3f1cf01d7fde31bc246c9efc31e3e7f0528f84502531.scope.
Oct  8 05:52:55 np0005475493 podman[135443]: 2025-10-08 09:52:55.421442359 +0000 UTC m=+0.025639551 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 05:52:55 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:52:55 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3813cf5aa0c5227e51c7d3f448bd358144896f29f40027732233c94684a79a18/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  8 05:52:55 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3813cf5aa0c5227e51c7d3f448bd358144896f29f40027732233c94684a79a18/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 05:52:55 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3813cf5aa0c5227e51c7d3f448bd358144896f29f40027732233c94684a79a18/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 05:52:55 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3813cf5aa0c5227e51c7d3f448bd358144896f29f40027732233c94684a79a18/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  8 05:52:55 np0005475493 podman[135443]: 2025-10-08 09:52:55.560082183 +0000 UTC m=+0.164279375 container init bfec76834dd8787349be3f1cf01d7fde31bc246c9efc31e3e7f0528f84502531 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_knuth, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct  8 05:52:55 np0005475493 podman[135443]: 2025-10-08 09:52:55.570974882 +0000 UTC m=+0.175172094 container start bfec76834dd8787349be3f1cf01d7fde31bc246c9efc31e3e7f0528f84502531 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_knuth, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  8 05:52:55 np0005475493 podman[135443]: 2025-10-08 09:52:55.574462084 +0000 UTC m=+0.178659266 container attach bfec76834dd8787349be3f1cf01d7fde31bc246c9efc31e3e7f0528f84502531 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_knuth, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  8 05:52:55 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:09:52:55] "GET /metrics HTTP/1.1" 200 48330 "" "Prometheus/2.51.0"
Oct  8 05:52:55 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:09:52:55] "GET /metrics HTTP/1.1" 200 48330 "" "Prometheus/2.51.0"
Oct  8 05:52:55 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:52:55 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ec8003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:52:55 np0005475493 interesting_knuth[135512]: {
Oct  8 05:52:55 np0005475493 interesting_knuth[135512]:    "1": [
Oct  8 05:52:55 np0005475493 interesting_knuth[135512]:        {
Oct  8 05:52:55 np0005475493 interesting_knuth[135512]:            "devices": [
Oct  8 05:52:55 np0005475493 interesting_knuth[135512]:                "/dev/loop3"
Oct  8 05:52:55 np0005475493 interesting_knuth[135512]:            ],
Oct  8 05:52:55 np0005475493 interesting_knuth[135512]:            "lv_name": "ceph_lv0",
Oct  8 05:52:55 np0005475493 interesting_knuth[135512]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  8 05:52:55 np0005475493 interesting_knuth[135512]:            "lv_size": "21470642176",
Oct  8 05:52:55 np0005475493 interesting_knuth[135512]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=787292cc-8154-50c4-9e00-e9be3e817149,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=85fe3e7b-5e0f-4a19-934c-310215b2e933,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct  8 05:52:55 np0005475493 interesting_knuth[135512]:            "lv_uuid": "znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj",
Oct  8 05:52:55 np0005475493 interesting_knuth[135512]:            "name": "ceph_lv0",
Oct  8 05:52:55 np0005475493 interesting_knuth[135512]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  8 05:52:55 np0005475493 interesting_knuth[135512]:            "tags": {
Oct  8 05:52:55 np0005475493 interesting_knuth[135512]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  8 05:52:55 np0005475493 interesting_knuth[135512]:                "ceph.block_uuid": "znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj",
Oct  8 05:52:55 np0005475493 interesting_knuth[135512]:                "ceph.cephx_lockbox_secret": "",
Oct  8 05:52:55 np0005475493 interesting_knuth[135512]:                "ceph.cluster_fsid": "787292cc-8154-50c4-9e00-e9be3e817149",
Oct  8 05:52:55 np0005475493 interesting_knuth[135512]:                "ceph.cluster_name": "ceph",
Oct  8 05:52:55 np0005475493 interesting_knuth[135512]:                "ceph.crush_device_class": "",
Oct  8 05:52:55 np0005475493 interesting_knuth[135512]:                "ceph.encrypted": "0",
Oct  8 05:52:55 np0005475493 interesting_knuth[135512]:                "ceph.osd_fsid": "85fe3e7b-5e0f-4a19-934c-310215b2e933",
Oct  8 05:52:55 np0005475493 interesting_knuth[135512]:                "ceph.osd_id": "1",
Oct  8 05:52:55 np0005475493 interesting_knuth[135512]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  8 05:52:55 np0005475493 interesting_knuth[135512]:                "ceph.type": "block",
Oct  8 05:52:55 np0005475493 interesting_knuth[135512]:                "ceph.vdo": "0",
Oct  8 05:52:55 np0005475493 interesting_knuth[135512]:                "ceph.with_tpm": "0"
Oct  8 05:52:55 np0005475493 interesting_knuth[135512]:            },
Oct  8 05:52:55 np0005475493 interesting_knuth[135512]:            "type": "block",
Oct  8 05:52:55 np0005475493 interesting_knuth[135512]:            "vg_name": "ceph_vg0"
Oct  8 05:52:55 np0005475493 interesting_knuth[135512]:        }
Oct  8 05:52:55 np0005475493 interesting_knuth[135512]:    ]
Oct  8 05:52:55 np0005475493 interesting_knuth[135512]: }
Oct  8 05:52:55 np0005475493 systemd[1]: libpod-bfec76834dd8787349be3f1cf01d7fde31bc246c9efc31e3e7f0528f84502531.scope: Deactivated successfully.
Oct  8 05:52:55 np0005475493 podman[135443]: 2025-10-08 09:52:55.872526479 +0000 UTC m=+0.476723671 container died bfec76834dd8787349be3f1cf01d7fde31bc246c9efc31e3e7f0528f84502531 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_knuth, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid)
Oct  8 05:52:55 np0005475493 python3.9[135569]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  8 05:52:55 np0005475493 systemd[1]: var-lib-containers-storage-overlay-3813cf5aa0c5227e51c7d3f448bd358144896f29f40027732233c94684a79a18-merged.mount: Deactivated successfully.
Oct  8 05:52:55 np0005475493 podman[135443]: 2025-10-08 09:52:55.948278592 +0000 UTC m=+0.552475774 container remove bfec76834dd8787349be3f1cf01d7fde31bc246c9efc31e3e7f0528f84502531 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_knuth, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  8 05:52:55 np0005475493 systemd[1]: libpod-conmon-bfec76834dd8787349be3f1cf01d7fde31bc246c9efc31e3e7f0528f84502531.scope: Deactivated successfully.
Oct  8 05:52:56 np0005475493 python3.9[135761]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759917175.3847125-160-65360322527584/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=ad35324b46d028e64dbb491e0ae0f5e3bb7a2175 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:52:56 np0005475493 podman[135826]: 2025-10-08 09:52:56.550525446 +0000 UTC m=+0.039042520 container create 5ef7bd5215d28d50d2888c371aa4605e67f8e0912fbeb03a30f4b8497b4f0b4e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_brown, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  8 05:52:56 np0005475493 systemd[1]: Started libpod-conmon-5ef7bd5215d28d50d2888c371aa4605e67f8e0912fbeb03a30f4b8497b4f0b4e.scope.
Oct  8 05:52:56 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:52:56 np0005475493 podman[135826]: 2025-10-08 09:52:56.628732179 +0000 UTC m=+0.117249283 container init 5ef7bd5215d28d50d2888c371aa4605e67f8e0912fbeb03a30f4b8497b4f0b4e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_brown, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  8 05:52:56 np0005475493 podman[135826]: 2025-10-08 09:52:56.532933634 +0000 UTC m=+0.021450738 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 05:52:56 np0005475493 podman[135826]: 2025-10-08 09:52:56.640215365 +0000 UTC m=+0.128732429 container start 5ef7bd5215d28d50d2888c371aa4605e67f8e0912fbeb03a30f4b8497b4f0b4e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_brown, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True)
Oct  8 05:52:56 np0005475493 podman[135826]: 2025-10-08 09:52:56.64348533 +0000 UTC m=+0.132002404 container attach 5ef7bd5215d28d50d2888c371aa4605e67f8e0912fbeb03a30f4b8497b4f0b4e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_brown, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  8 05:52:56 np0005475493 dreamy_brown[135843]: 167 167
Oct  8 05:52:56 np0005475493 systemd[1]: libpod-5ef7bd5215d28d50d2888c371aa4605e67f8e0912fbeb03a30f4b8497b4f0b4e.scope: Deactivated successfully.
Oct  8 05:52:56 np0005475493 podman[135826]: 2025-10-08 09:52:56.647122027 +0000 UTC m=+0.135639141 container died 5ef7bd5215d28d50d2888c371aa4605e67f8e0912fbeb03a30f4b8497b4f0b4e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_brown, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct  8 05:52:56 np0005475493 systemd[1]: var-lib-containers-storage-overlay-320d0fdaed1082cef5b7ec1a4af066075b1f2329f4b4d410544630978833fbfd-merged.mount: Deactivated successfully.
Oct  8 05:52:56 np0005475493 podman[135826]: 2025-10-08 09:52:56.701679652 +0000 UTC m=+0.190196746 container remove 5ef7bd5215d28d50d2888c371aa4605e67f8e0912fbeb03a30f4b8497b4f0b4e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_brown, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  8 05:52:56 np0005475493 systemd[1]: libpod-conmon-5ef7bd5215d28d50d2888c371aa4605e67f8e0912fbeb03a30f4b8497b4f0b4e.scope: Deactivated successfully.
Oct  8 05:52:56 np0005475493 podman[135941]: 2025-10-08 09:52:56.859458609 +0000 UTC m=+0.040482195 container create b398eb90f4290cf7c92e5d1f3af8ff6dec8be23f56d2a5c4dda97cf553fe542b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_dhawan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325)
Oct  8 05:52:56 np0005475493 systemd[1]: Started libpod-conmon-b398eb90f4290cf7c92e5d1f3af8ff6dec8be23f56d2a5c4dda97cf553fe542b.scope.
Oct  8 05:52:56 np0005475493 podman[135941]: 2025-10-08 09:52:56.841550666 +0000 UTC m=+0.022574272 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 05:52:56 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:52:56 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6340d649be06ddc9b77f366849e4019b2e358f1eec930b2af3c446accae33e37/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  8 05:52:56 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6340d649be06ddc9b77f366849e4019b2e358f1eec930b2af3c446accae33e37/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 05:52:56 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6340d649be06ddc9b77f366849e4019b2e358f1eec930b2af3c446accae33e37/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 05:52:56 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6340d649be06ddc9b77f366849e4019b2e358f1eec930b2af3c446accae33e37/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  8 05:52:56 np0005475493 podman[135941]: 2025-10-08 09:52:56.957730953 +0000 UTC m=+0.138754579 container init b398eb90f4290cf7c92e5d1f3af8ff6dec8be23f56d2a5c4dda97cf553fe542b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_dhawan, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  8 05:52:56 np0005475493 podman[135941]: 2025-10-08 09:52:56.965112609 +0000 UTC m=+0.146136235 container start b398eb90f4290cf7c92e5d1f3af8ff6dec8be23f56d2a5c4dda97cf553fe542b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_dhawan, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Oct  8 05:52:56 np0005475493 podman[135941]: 2025-10-08 09:52:56.968625631 +0000 UTC m=+0.149649227 container attach b398eb90f4290cf7c92e5d1f3af8ff6dec8be23f56d2a5c4dda97cf553fe542b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_dhawan, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True)
Oct  8 05:52:56 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:52:56.976Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 05:52:57 np0005475493 python3.9[136013]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  8 05:52:57 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v237: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 267 B/s rd, 0 op/s
Oct  8 05:52:57 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:52:57 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ee00040f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:52:57 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:52:57 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:52:57 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:52:57.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:52:57 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:52:57 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ecc004140 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:52:57 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:52:57 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:52:57 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:52:57.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:52:57 np0005475493 lvm[136238]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct  8 05:52:57 np0005475493 lvm[136238]: VG ceph_vg0 finished
Oct  8 05:52:57 np0005475493 strange_dhawan[135993]: {}
Oct  8 05:52:57 np0005475493 podman[135941]: 2025-10-08 09:52:57.643644564 +0000 UTC m=+0.824668190 container died b398eb90f4290cf7c92e5d1f3af8ff6dec8be23f56d2a5c4dda97cf553fe542b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_dhawan, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325)
Oct  8 05:52:57 np0005475493 systemd[1]: libpod-b398eb90f4290cf7c92e5d1f3af8ff6dec8be23f56d2a5c4dda97cf553fe542b.scope: Deactivated successfully.
Oct  8 05:52:57 np0005475493 systemd[1]: libpod-b398eb90f4290cf7c92e5d1f3af8ff6dec8be23f56d2a5c4dda97cf553fe542b.scope: Consumed 1.095s CPU time.
Oct  8 05:52:57 np0005475493 systemd[1]: var-lib-containers-storage-overlay-6340d649be06ddc9b77f366849e4019b2e358f1eec930b2af3c446accae33e37-merged.mount: Deactivated successfully.
Oct  8 05:52:57 np0005475493 podman[135941]: 2025-10-08 09:52:57.706053961 +0000 UTC m=+0.887077547 container remove b398eb90f4290cf7c92e5d1f3af8ff6dec8be23f56d2a5c4dda97cf553fe542b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_dhawan, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct  8 05:52:57 np0005475493 python3.9[136236]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  8 05:52:57 np0005475493 systemd[1]: libpod-conmon-b398eb90f4290cf7c92e5d1f3af8ff6dec8be23f56d2a5c4dda97cf553fe542b.scope: Deactivated successfully.
Oct  8 05:52:57 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  8 05:52:57 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:52:57 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  8 05:52:57 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:52:57 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:52:57 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ed80032b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:52:58 np0005475493 python3.9[136434]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  8 05:52:58 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 05:52:58 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:52:58 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:52:58 np0005475493 python3.9[136557]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759917177.8831885-341-230264003955239/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=3f143f01cb342955611becbf857e62f04ecd5a97 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:52:58 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:52:58.853Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 05:52:59 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v238: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 356 B/s rd, 0 op/s
Oct  8 05:52:59 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:52:59 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ec8003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:52:59 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:52:59 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:52:59 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:52:59.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:52:59 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:52:59 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ee0004110 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:52:59 np0005475493 python3.9[136710]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  8 05:52:59 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:52:59 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:52:59 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:52:59.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:52:59 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:52:59 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ecc004140 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:52:59 np0005475493 python3.9[136833]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759917179.0224504-341-158103416950833/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=48470e628d65eda3076b7ed534cda7f3290d3587 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:53:00 np0005475493 python3.9[136986]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  8 05:53:01 np0005475493 python3.9[137109]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759917180.1175025-341-167730293587877/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=c55d6c2cc7f81b34bd89a051ca87d4a2fe6fb78b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:53:01 np0005475493 ceph-mon[73572]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #30. Immutable memtables: 0.
Oct  8 05:53:01 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-09:53:01.186404) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  8 05:53:01 np0005475493 ceph-mon[73572]: rocksdb: [db/flush_job.cc:856] [default] [JOB 11] Flushing memtable with next log file: 30
Oct  8 05:53:01 np0005475493 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759917181186463, "job": 11, "event": "flush_started", "num_memtables": 1, "num_entries": 436, "num_deletes": 251, "total_data_size": 411254, "memory_usage": 419800, "flush_reason": "Manual Compaction"}
Oct  8 05:53:01 np0005475493 ceph-mon[73572]: rocksdb: [db/flush_job.cc:885] [default] [JOB 11] Level-0 flush table #31: started
Oct  8 05:53:01 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v239: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 267 B/s rd, 0 op/s
Oct  8 05:53:01 np0005475493 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759917181254299, "cf_name": "default", "job": 11, "event": "table_file_creation", "file_number": 31, "file_size": 406477, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 12762, "largest_seqno": 13197, "table_properties": {"data_size": 403969, "index_size": 608, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 837, "raw_key_size": 6103, "raw_average_key_size": 18, "raw_value_size": 398839, "raw_average_value_size": 1201, "num_data_blocks": 26, "num_entries": 332, "num_filter_entries": 332, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759917164, "oldest_key_time": 1759917164, "file_creation_time": 1759917181, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fe81d9b-468a-4413-adf1-4e4bd83383d4", "db_session_id": "KN4HYS7VUCE6V85JIQOU", "orig_file_number": 31, "seqno_to_time_mapping": "N/A"}}
Oct  8 05:53:01 np0005475493 ceph-mon[73572]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 11] Flush lasted 67927 microseconds, and 1659 cpu microseconds.
Oct  8 05:53:01 np0005475493 ceph-mon[73572]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  8 05:53:01 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-09:53:01.254341) [db/flush_job.cc:967] [default] [JOB 11] Level-0 flush table #31: 406477 bytes OK
Oct  8 05:53:01 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-09:53:01.254360) [db/memtable_list.cc:519] [default] Level-0 commit table #31 started
Oct  8 05:53:01 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-09:53:01.255733) [db/memtable_list.cc:722] [default] Level-0 commit table #31: memtable #1 done
Oct  8 05:53:01 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-09:53:01.255746) EVENT_LOG_v1 {"time_micros": 1759917181255742, "job": 11, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  8 05:53:01 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-09:53:01.255761) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  8 05:53:01 np0005475493 ceph-mon[73572]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 11] Try to delete WAL files size 408599, prev total WAL file size 408599, number of live WAL files 2.
Oct  8 05:53:01 np0005475493 ceph-mon[73572]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000027.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  8 05:53:01 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-09:53:01.256206) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300353032' seq:72057594037927935, type:22 .. '7061786F7300373534' seq:0, type:0; will stop at (end)
Oct  8 05:53:01 np0005475493 ceph-mon[73572]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 12] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  8 05:53:01 np0005475493 ceph-mon[73572]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 11 Base level 0, inputs: [31(396KB)], [29(13MB)]
Oct  8 05:53:01 np0005475493 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759917181256323, "job": 12, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [31], "files_L6": [29], "score": -1, "input_data_size": 15077840, "oldest_snapshot_seqno": -1}
Oct  8 05:53:01 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:53:01 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ed80032b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:53:01 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:53:01 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:53:01 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:53:01.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:53:01 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:53:01 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ec8003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:53:01 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:53:01 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 05:53:01 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:53:01.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 05:53:01 np0005475493 ceph-mon[73572]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 12] Generated table #32: 4228 keys, 12630119 bytes, temperature: kUnknown
Oct  8 05:53:01 np0005475493 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759917181506046, "cf_name": "default", "job": 12, "event": "table_file_creation", "file_number": 32, "file_size": 12630119, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12599370, "index_size": 19055, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10629, "raw_key_size": 108333, "raw_average_key_size": 25, "raw_value_size": 12519588, "raw_average_value_size": 2961, "num_data_blocks": 804, "num_entries": 4228, "num_filter_entries": 4228, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759916581, "oldest_key_time": 0, "file_creation_time": 1759917181, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fe81d9b-468a-4413-adf1-4e4bd83383d4", "db_session_id": "KN4HYS7VUCE6V85JIQOU", "orig_file_number": 32, "seqno_to_time_mapping": "N/A"}}
Oct  8 05:53:01 np0005475493 ceph-mon[73572]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  8 05:53:01 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-09:53:01.506523) [db/compaction/compaction_job.cc:1663] [default] [JOB 12] Compacted 1@0 + 1@6 files to L6 => 12630119 bytes
Oct  8 05:53:01 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-09:53:01.535767) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 60.3 rd, 50.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.4, 14.0 +0.0 blob) out(12.0 +0.0 blob), read-write-amplify(68.2) write-amplify(31.1) OK, records in: 4743, records dropped: 515 output_compression: NoCompression
Oct  8 05:53:01 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-09:53:01.535803) EVENT_LOG_v1 {"time_micros": 1759917181535789, "job": 12, "event": "compaction_finished", "compaction_time_micros": 250041, "compaction_time_cpu_micros": 34227, "output_level": 6, "num_output_files": 1, "total_output_size": 12630119, "num_input_records": 4743, "num_output_records": 4228, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  8 05:53:01 np0005475493 ceph-mon[73572]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000031.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  8 05:53:01 np0005475493 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759917181536669, "job": 12, "event": "table_file_deletion", "file_number": 31}
Oct  8 05:53:01 np0005475493 ceph-mon[73572]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000029.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  8 05:53:01 np0005475493 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759917181540046, "job": 12, "event": "table_file_deletion", "file_number": 29}
Oct  8 05:53:01 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-09:53:01.256104) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  8 05:53:01 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-09:53:01.540267) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  8 05:53:01 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-09:53:01.540281) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  8 05:53:01 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-09:53:01.540284) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  8 05:53:01 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-09:53:01.540287) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  8 05:53:01 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-09:53:01.540290) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  8 05:53:01 np0005475493 python3.9[137287]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  8 05:53:01 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:53:01 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ee0004130 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:53:02 np0005475493 python3.9[137440]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  8 05:53:02 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  8 05:53:02 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  8 05:53:03 np0005475493 python3.9[137592]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  8 05:53:03 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v240: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 267 B/s rd, 0 op/s
Oct  8 05:53:03 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:53:03 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ecc004140 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:53:03 np0005475493 ceph-mon[73572]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  8 05:53:03 np0005475493 ceph-mon[73572]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.0 total, 600.0 interval#012Cumulative writes: 2758 writes, 13K keys, 2758 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.04 MB/s#012Cumulative WAL: 2758 writes, 2758 syncs, 1.00 writes per sync, written: 0.02 GB, 0.04 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 2758 writes, 13K keys, 2758 commit groups, 1.0 writes per commit group, ingest: 24.36 MB, 0.04 MB/s#012Interval WAL: 2758 writes, 2758 syncs, 1.00 writes per sync, written: 0.02 GB, 0.04 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     98.9      0.21              0.05         6    0.036       0      0       0.0       0.0#012  L6      1/0   12.05 MB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   3.0    119.5    104.5      0.60              0.15         5    0.120     21K   2300       0.0       0.0#012 Sum      1/0   12.05 MB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   4.0     88.2    103.0      0.81              0.19        11    0.074     21K   2300       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   4.0     88.6    103.4      0.81              0.19        10    0.081     21K   2300       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) 
Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   0.0    119.5    104.5      0.60              0.15         5    0.120     21K   2300       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    100.2      0.21              0.05         5    0.042       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     16.5      0.00              0.00         1    0.003       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.0 total, 600.0 interval#012Flush(GB): cumulative 0.021, interval 0.021#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.08 GB write, 0.14 MB/s write, 0.07 GB read, 0.12 MB/s read, 0.8 seconds#012Interval compaction: 0.08 GB write, 0.14 MB/s write, 0.07 GB read, 0.12 MB/s read, 0.8 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55f7a1ce3350#2 capacity: 304.00 MB usage: 2.76 MB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 0 last_secs: 6.4e-05 secs_since: 0#012Block cache entry stats(count,size,portion): 
DataBlock(179,2.56 MB,0.840769%) FilterBlock(12,69.05 KB,0.0221805%) IndexBlock(12,139.64 KB,0.0448578%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Oct  8 05:53:03 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:53:03 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:53:03 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:53:03.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:53:03 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:53:03 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ed80032b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:53:03 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:53:03 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:53:03 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:53:03.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:53:03 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 05:53:03 np0005475493 python3.9[137717]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759917182.6103377-514-35270683229774/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=03fb8466dc9bc88568994ca20bb9a6a853d6a7b1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:53:03 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:53:03 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9eec0021a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:53:04 np0005475493 python3.9[137870]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  8 05:53:04 np0005475493 python3.9[137993]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759917183.6904774-514-99374613159652/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=48470e628d65eda3076b7ed534cda7f3290d3587 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:53:05 np0005475493 python3.9[138145]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  8 05:53:05 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v241: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Oct  8 05:53:05 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:53:05 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ee00041c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:53:05 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:53:05 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 05:53:05 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:53:05.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 05:53:05 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:53:05 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ecc004140 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:53:05 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:53:05 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:53:05 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:53:05.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:53:05 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:09:53:05] "GET /metrics HTTP/1.1" 200 48336 "" "Prometheus/2.51.0"
Oct  8 05:53:05 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:09:53:05] "GET /metrics HTTP/1.1" 200 48336 "" "Prometheus/2.51.0"
Oct  8 05:53:05 np0005475493 python3.9[138270]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759917184.7747977-514-64563649056360/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=418fd7eda72a3b52b4f2ef9bbd18a4fa7984c61a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:53:05 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:53:05 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9f04008d10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:53:06 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:53:06.977Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 05:53:07 np0005475493 python3.9[138423]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  8 05:53:07 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v242: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct  8 05:53:07 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:53:07 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9eec0021a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:53:07 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:53:07 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 05:53:07 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:53:07.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 05:53:07 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:53:07 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ee00041c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:53:07 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:53:07 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 05:53:07 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:53:07.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 05:53:07 np0005475493 python3.9[138576]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  8 05:53:07 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:53:07 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ecc004140 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:53:08 np0005475493 python3.9[138700]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759917187.1929162-711-69490102780985/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=9b1ec9ef1baf0871d11fb19dd2fc6e37ec07cf31 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:53:08 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 05:53:08 np0005475493 python3.9[138852]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  8 05:53:08 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:53:08.853Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct  8 05:53:08 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:53:08.853Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct  8 05:53:08 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:53:08.854Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct  8 05:53:09 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v243: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Oct  8 05:53:09 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:53:09 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9f04008d10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:53:09 np0005475493 python3.9[139005]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  8 05:53:09 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:53:09 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 05:53:09 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:53:09.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 05:53:09 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:53:09 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9eec0021a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:53:09 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:53:09 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:53:09 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:53:09.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:53:09 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:53:09 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ee00041c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:53:09 np0005475493 python3.9[139128]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759917188.9388752-785-158450116942737/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=9b1ec9ef1baf0871d11fb19dd2fc6e37ec07cf31 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:53:10 np0005475493 python3.9[139281]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-metadata setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  8 05:53:11 np0005475493 python3.9[139433]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  8 05:53:11 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v244: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct  8 05:53:11 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:53:11 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ecc004140 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:53:11 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:53:11 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 05:53:11 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:53:11.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 05:53:11 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:53:11 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9f04008d10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:53:11 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:53:11 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:53:11 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:53:11.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:53:11 np0005475493 python3.9[139557]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759917190.6573083-849-62592092562032/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=9b1ec9ef1baf0871d11fb19dd2fc6e37ec07cf31 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:53:11 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:53:11 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9eec0021a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:53:12 np0005475493 python3.9[139710]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/bootstrap setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  8 05:53:13 np0005475493 python3.9[139862]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  8 05:53:13 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v245: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct  8 05:53:13 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:53:13 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9f04008d10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:53:13 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:53:13 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 05:53:13 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:53:13.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 05:53:13 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:53:13 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ee00041c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:53:13 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:53:13 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:53:13 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:53:13.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:53:13 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 05:53:13 np0005475493 python3.9[139986]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759917192.5912046-924-182610053387294/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=9b1ec9ef1baf0871d11fb19dd2fc6e37ec07cf31 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:53:13 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:53:13 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ecc004140 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:53:14 np0005475493 python3.9[140139]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/repo-setup setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  8 05:53:14 np0005475493 python3.9[140291]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  8 05:53:15 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v246: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Oct  8 05:53:15 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:53:15 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9eec0037d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:53:15 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:53:15 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:53:15 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:53:15.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:53:15 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:53:15 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9f04008d10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:53:15 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:53:15 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 05:53:15 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:53:15.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 05:53:15 np0005475493 python3.9[140415]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759917194.5153232-1000-16154548506359/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=9b1ec9ef1baf0871d11fb19dd2fc6e37ec07cf31 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:53:15 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:09:53:15] "GET /metrics HTTP/1.1" 200 48336 "" "Prometheus/2.51.0"
Oct  8 05:53:15 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:09:53:15] "GET /metrics HTTP/1.1" 200 48336 "" "Prometheus/2.51.0"
Oct  8 05:53:15 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:53:15 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ee00041e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:53:16 np0005475493 python3.9[140568]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  8 05:53:16 np0005475493 python3.9[140720]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  8 05:53:16 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:53:16.978Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 05:53:17 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v247: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct  8 05:53:17 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:53:17 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ecc004140 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:53:17 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:53:17 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:53:17 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:53:17.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:53:17 np0005475493 python3.9[140844]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759917196.425451-1070-95273254218371/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=9b1ec9ef1baf0871d11fb19dd2fc6e37ec07cf31 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:53:17 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:53:17 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9eec0037d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:53:17 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:53:17 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 05:53:17 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:53:17.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 05:53:17 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  8 05:53:17 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  8 05:53:17 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:53:17 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9f04008d10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:53:17 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 05:53:17 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 05:53:18 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 05:53:18 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 05:53:18 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 05:53:18 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 05:53:18 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 05:53:18 np0005475493 systemd[1]: session-49.scope: Deactivated successfully.
Oct  8 05:53:18 np0005475493 systemd[1]: session-49.scope: Consumed 22.408s CPU time.
Oct  8 05:53:18 np0005475493 systemd-logind[798]: Session 49 logged out. Waiting for processes to exit.
Oct  8 05:53:18 np0005475493 systemd-logind[798]: Removed session 49.
Oct  8 05:53:18 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:53:18.854Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 05:53:19 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v248: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Oct  8 05:53:19 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:53:19 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9f04008d10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:53:19 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:53:19 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:53:19 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:53:19.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:53:19 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:53:19 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ee0004200 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:53:19 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:53:19 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000034s ======
Oct  8 05:53:19 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:53:19.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct  8 05:53:19 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:53:19 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ed4002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:53:21 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v249: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct  8 05:53:21 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:53:21 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ecc004140 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:53:21 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:53:21 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:53:21 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:53:21.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:53:21 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:53:21 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9f0400a6e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:53:21 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:53:21 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:53:21 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:53:21.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:53:21 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:53:21 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ee0004220 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:53:23 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v250: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct  8 05:53:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:53:23 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ed4002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:53:23 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:53:23 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 05:53:23 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:53:23.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 05:53:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:53:23 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ecc004140 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:53:23 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 05:53:23 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:53:23 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:53:23 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:53:23.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:53:23 np0005475493 systemd-logind[798]: New session 50 of user zuul.
Oct  8 05:53:23 np0005475493 systemd[1]: Started Session 50 of User zuul.
Oct  8 05:53:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:53:23 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9f0400a6e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:53:24 np0005475493 python3.9[141057]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:53:25 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v251: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Oct  8 05:53:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:53:25 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ee0004240 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:53:25 np0005475493 python3.9[141210]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  8 05:53:25 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:53:25 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:53:25 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:53:25.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:53:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:53:25 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ee0004240 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:53:25 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:53:25 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 05:53:25 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:53:25.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 05:53:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:09:53:25] "GET /metrics HTTP/1.1" 200 48335 "" "Prometheus/2.51.0"
Oct  8 05:53:25 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:09:53:25] "GET /metrics HTTP/1.1" 200 48335 "" "Prometheus/2.51.0"
Oct  8 05:53:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:53:25 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ecc004140 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:53:26 np0005475493 python3.9[141333]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759917204.716148-62-186037131184901/.source.conf _original_basename=ceph.conf follow=False checksum=3890a3deab572d09518a0c50863eda009c004945 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:53:26 np0005475493 python3.9[141486]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  8 05:53:26 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:53:26.980Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 05:53:27 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v252: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct  8 05:53:27 np0005475493 python3.9[141610]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1759917206.2224674-62-118677937392417/.source.keyring _original_basename=ceph.client.openstack.keyring follow=False checksum=fbda66f5b6d5a9cd8683861e87e5a427d546a56c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:53:27 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:53:27 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ecc004140 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:53:27 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:53:27 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:53:27 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:53:27.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:53:27 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:53:27 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ed4002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:53:27 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:53:27 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 05:53:27 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:53:27.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 05:53:27 np0005475493 systemd[1]: session-50.scope: Deactivated successfully.
Oct  8 05:53:27 np0005475493 systemd[1]: session-50.scope: Consumed 2.827s CPU time.
Oct  8 05:53:27 np0005475493 systemd-logind[798]: Session 50 logged out. Waiting for processes to exit.
Oct  8 05:53:27 np0005475493 systemd-logind[798]: Removed session 50.
Oct  8 05:53:27 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:53:27 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ee0004260 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:53:28 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 05:53:28 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:53:28.855Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 05:53:29 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v253: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Oct  8 05:53:29 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:53:29 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9f0400a6e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:53:29 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:53:29 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:53:29 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:53:29.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:53:29 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:53:29 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ecc004140 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:53:29 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:53:29 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000034s ======
Oct  8 05:53:29 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:53:29.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct  8 05:53:29 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:53:29 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ed4002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:53:31 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v254: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct  8 05:53:31 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:53:31 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ee0004280 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:53:31 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:53:31 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:53:31 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:53:31.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:53:31 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:53:31 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ee0004280 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:53:31 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:53:31 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 05:53:31 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:53:31.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 05:53:31 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:53:31 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ecc004140 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:53:32 np0005475493 systemd-logind[798]: New session 51 of user zuul.
Oct  8 05:53:32 np0005475493 systemd[1]: Started Session 51 of User zuul.
Oct  8 05:53:32 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  8 05:53:32 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  8 05:53:33 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v255: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct  8 05:53:33 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:53:33 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ed4003240 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:53:33 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:53:33 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000034s ======
Oct  8 05:53:33 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:53:33.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct  8 05:53:33 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:53:33 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9f0400a6e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:53:33 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 05:53:33 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:53:33 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 05:53:33 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:53:33.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 05:53:33 np0005475493 python3.9[141794]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  8 05:53:33 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:53:33 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ee0004280 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:53:34 np0005475493 python3.9[141951]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  8 05:53:35 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v256: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Oct  8 05:53:35 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:53:35 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ecc004140 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:53:35 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:53:35 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 05:53:35 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:53:35.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 05:53:35 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:53:35 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ed4003240 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:53:35 np0005475493 python3.9[142104]: ansible-ansible.builtin.file Invoked with group=openvswitch owner=openvswitch path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct  8 05:53:35 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:53:35 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:53:35 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:53:35.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:53:35 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:09:53:35] "GET /metrics HTTP/1.1" 200 48332 "" "Prometheus/2.51.0"
Oct  8 05:53:35 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:09:53:35] "GET /metrics HTTP/1.1" 200 48332 "" "Prometheus/2.51.0"
Oct  8 05:53:35 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:53:35 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ed4003240 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:53:36 np0005475493 python3.9[142255]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  8 05:53:36 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:53:36.981Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 05:53:37 np0005475493 python3.9[142407]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Oct  8 05:53:37 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v257: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct  8 05:53:37 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:53:37 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ed4003240 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:53:37 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:53:37 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:53:37 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:53:37.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:53:37 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:53:37 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ee0004280 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:53:37 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:53:37 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:53:37 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:53:37.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:53:37 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:53:37 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9f0400a6e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:53:38 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 05:53:38 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:53:38.856Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 05:53:39 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v258: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Oct  8 05:53:39 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:53:39 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9f0400a6e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:53:39 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:53:39 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 05:53:39 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:53:39.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 05:53:39 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:53:39 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9f0400a6e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:53:39 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:53:39 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 05:53:39 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:53:39.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 05:53:39 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:53:39 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ee0004280 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:53:40 np0005475493 dbus-broker-launch[775]: avc:  op=load_policy lsm=selinux seqno=11 res=1
Oct  8 05:53:40 np0005475493 python3.9[142567]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  8 05:53:41 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v259: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct  8 05:53:41 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:53:41 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ecc004140 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:53:41 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:53:41 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:53:41 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:53:41.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:53:41 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:53:41 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ed4003240 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:53:41 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:53:41 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:53:41 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:53:41.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:53:41 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:53:41 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9f0400a6e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:53:41 np0005475493 python3.9[142677]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  8 05:53:42 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-crash-compute-0[78863]: ERROR:ceph-crash:Error scraping /var/lib/ceph/crash: [Errno 13] Permission denied: '/var/lib/ceph/crash'
Oct  8 05:53:43 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v260: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct  8 05:53:43 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:53:43 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ee0004280 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:53:43 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:53:43 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:53:43 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:53:43.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:53:43 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:53:43 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ecc004140 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:53:43 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 05:53:43 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:53:43 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:53:43 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:53:43.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:53:43 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:53:43 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ed4003240 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:53:44 np0005475493 python3.9[142833]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct  8 05:53:45 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v261: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Oct  8 05:53:45 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:53:45 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9f0400a6e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:53:45 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:53:45 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:53:45 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:53:45.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:53:45 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:53:45 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ee0004280 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:53:45 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:53:45 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:53:45 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:53:45.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:53:45 np0005475493 python3[142989]: ansible-osp.edpm.edpm_nftables_snippet Invoked with content=- rule_name: 118 neutron vxlan networks#012  rule:#012    proto: udp#012    dport: 4789#012- rule_name: 119 neutron geneve networks#012  rule:#012    proto: udp#012    dport: 6081#012    state: ["UNTRACKED"]#012- rule_name: 120 neutron geneve networks no conntrack#012  rule:#012    proto: udp#012    dport: 6081#012    table: raw#012    chain: OUTPUT#012    jump: NOTRACK#012    action: append#012    state: []#012- rule_name: 121 neutron geneve networks no conntrack#012  rule:#012    proto: udp#012    dport: 6081#012    table: raw#012    chain: PREROUTING#012    jump: NOTRACK#012    action: append#012    state: []#012 dest=/var/lib/edpm-config/firewall/ovn.yaml state=present
Oct  8 05:53:45 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:09:53:45] "GET /metrics HTTP/1.1" 200 48332 "" "Prometheus/2.51.0"
Oct  8 05:53:45 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:09:53:45] "GET /metrics HTTP/1.1" 200 48332 "" "Prometheus/2.51.0"
Oct  8 05:53:45 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:53:45 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ecc004140 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:53:46 np0005475493 python3.9[143142]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:53:46 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:53:46.981Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 05:53:47 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v262: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct  8 05:53:47 np0005475493 python3.9[143295]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  8 05:53:47 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:53:47 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ecc004140 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:53:47 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:53:47 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:53:47 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:53:47.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:53:47 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:53:47 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9f0400a6e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:53:47 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:53:47 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:53:47 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:53:47.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:53:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] Optimize plan auto_2025-10-08_09:53:47
Oct  8 05:53:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  8 05:53:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] do_upmap
Oct  8 05:53:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] pools ['default.rgw.control', '.rgw.root', 'cephfs.cephfs.meta', 'default.rgw.log', 'cephfs.cephfs.data', '.mgr', 'images', 'default.rgw.meta', 'backups', 'vms', 'volumes', '.nfs']
Oct  8 05:53:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] prepared 0/10 upmap changes
Oct  8 05:53:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] _maybe_adjust
Oct  8 05:53:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 05:53:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  8 05:53:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 05:53:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 05:53:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 05:53:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 05:53:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 05:53:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 05:53:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 05:53:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 05:53:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 05:53:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  8 05:53:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 05:53:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 05:53:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 05:53:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct  8 05:53:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 05:53:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  8 05:53:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 05:53:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 05:53:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 05:53:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  8 05:53:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 05:53:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  8 05:53:47 np0005475493 python3.9[143373]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:53:47 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  8 05:53:47 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  8 05:53:47 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:53:47 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ee0004280 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:53:47 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 05:53:47 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 05:53:47 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  8 05:53:47 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  8 05:53:47 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  8 05:53:47 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  8 05:53:47 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  8 05:53:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  8 05:53:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  8 05:53:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  8 05:53:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  8 05:53:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  8 05:53:48 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 05:53:48 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 05:53:48 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 05:53:48 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 05:53:48 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 05:53:48 np0005475493 python3.9[143526]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  8 05:53:48 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:53:48.857Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 05:53:49 np0005475493 python3.9[143604]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.zvs6eopd recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:53:49 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v263: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Oct  8 05:53:49 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:53:49 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ed4003240 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:53:49 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:53:49 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000034s ======
Oct  8 05:53:49 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:53:49.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct  8 05:53:49 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:53:49 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ecc004140 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:53:49 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:53:49 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:53:49 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:53:49.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:53:49 np0005475493 python3.9[143759]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  8 05:53:49 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:53:49 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ed4003240 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:53:50 np0005475493 python3.9[143838]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:53:51 np0005475493 python3.9[143990]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  8 05:53:51 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v264: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct  8 05:53:51 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:53:51 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9eec001ab0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:53:51 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:53:51 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:53:51 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:53:51.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:53:51 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:53:51 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ed8002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:53:51 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:53:51 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:53:51 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:53:51.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:53:51 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:53:51 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ecc004140 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:53:51 np0005475493 python3[144144]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Oct  8 05:53:52 np0005475493 python3.9[144297]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  8 05:53:53 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v265: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct  8 05:53:53 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:53:53 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ed4003240 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:53:53 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:53:53 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:53:53 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:53:53.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:53:53 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:53:53 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9eec001ab0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:53:53 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 05:53:53 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:53:53 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:53:53 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:53:53.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:53:53 np0005475493 python3.9[144423]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759917232.3561835-431-261950654288084/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:53:53 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:53:53 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ed8001bd0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:53:54 np0005475493 python3.9[144576]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  8 05:53:55 np0005475493 python3.9[144701]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759917233.9144201-476-193822884529516/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:53:55 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v266: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Oct  8 05:53:55 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:53:55 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ecc004140 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:53:55 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:53:55 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:53:55 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:53:55.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:53:55 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:53:55 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ecc004140 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:53:55 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:53:55 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:53:55 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:53:55.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:53:55 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:09:53:55] "GET /metrics HTTP/1.1" 200 48332 "" "Prometheus/2.51.0"
Oct  8 05:53:55 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:09:53:55] "GET /metrics HTTP/1.1" 200 48332 "" "Prometheus/2.51.0"
Oct  8 05:53:55 np0005475493 python3.9[144856]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  8 05:53:55 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:53:55 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ecc004140 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:53:56 np0005475493 python3.9[144982]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759917235.270399-521-258401605318005/.source.nft follow=False _original_basename=flush-chain.j2 checksum=4d3ffec49c8eb1a9b80d2f1e8cd64070063a87b4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:53:56 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:53:56.983Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct  8 05:53:56 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:53:56.983Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct  8 05:53:56 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:53:56.983Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 05:53:57 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v267: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct  8 05:53:57 np0005475493 python3.9[145135]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  8 05:53:57 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:53:57 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ec8001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:53:57 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:53:57 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:53:57 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:53:57.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:53:57 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:53:57 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ef8001080 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:53:57 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:53:57 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:53:57 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:53:57.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:53:57 np0005475493 python3.9[145260]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759917236.766317-566-270897633463533/.source.nft follow=False _original_basename=chains.j2 checksum=298ada419730ec15df17ded0cc50c97a4014a591 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:53:57 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:53:57 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ed4004340 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:53:58 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct  8 05:53:58 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:53:58 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct  8 05:53:58 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:53:58 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  8 05:53:58 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 05:53:58 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:53:58 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  8 05:53:58 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:53:58 np0005475493 python3.9[145484]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  8 05:53:58 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:53:58.857Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 05:53:59 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:53:59 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:53:59 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:53:59 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:53:59 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  8 05:53:59 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  8 05:53:59 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct  8 05:53:59 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  8 05:53:59 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct  8 05:53:59 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v268: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 343 B/s rd, 0 op/s
Oct  8 05:53:59 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:53:59 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct  8 05:53:59 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:53:59 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct  8 05:53:59 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  8 05:53:59 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct  8 05:53:59 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  8 05:53:59 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  8 05:53:59 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  8 05:53:59 np0005475493 python3.9[145693]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759917238.1312068-611-150404884701521/.source.nft follow=False _original_basename=ruleset.j2 checksum=bdba38546f86123f1927359d89789bd211aba99d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:53:59 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:53:59 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ecc004140 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:53:59 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:53:59 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:53:59 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:53:59.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:53:59 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:53:59 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ec8001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:53:59 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:53:59 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:53:59 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:53:59.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:53:59 np0005475493 podman[145884]: 2025-10-08 09:53:59.708353284 +0000 UTC m=+0.051124158 container create 1e09e85eca49e7348e5d64ddfe5c7a8c0fe1a83f6e5dd5a616a65cb80a4cefc5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_curran, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  8 05:53:59 np0005475493 systemd[1]: Started libpod-conmon-1e09e85eca49e7348e5d64ddfe5c7a8c0fe1a83f6e5dd5a616a65cb80a4cefc5.scope.
Oct  8 05:53:59 np0005475493 podman[145884]: 2025-10-08 09:53:59.688163578 +0000 UTC m=+0.030934432 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 05:53:59 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:53:59 np0005475493 podman[145884]: 2025-10-08 09:53:59.823440854 +0000 UTC m=+0.166211708 container init 1e09e85eca49e7348e5d64ddfe5c7a8c0fe1a83f6e5dd5a616a65cb80a4cefc5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_curran, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  8 05:53:59 np0005475493 podman[145884]: 2025-10-08 09:53:59.83151865 +0000 UTC m=+0.174289484 container start 1e09e85eca49e7348e5d64ddfe5c7a8c0fe1a83f6e5dd5a616a65cb80a4cefc5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_curran, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Oct  8 05:53:59 np0005475493 podman[145884]: 2025-10-08 09:53:59.834891541 +0000 UTC m=+0.177662375 container attach 1e09e85eca49e7348e5d64ddfe5c7a8c0fe1a83f6e5dd5a616a65cb80a4cefc5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_curran, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid)
Oct  8 05:53:59 np0005475493 systemd[1]: libpod-1e09e85eca49e7348e5d64ddfe5c7a8c0fe1a83f6e5dd5a616a65cb80a4cefc5.scope: Deactivated successfully.
Oct  8 05:53:59 np0005475493 determined_curran[145952]: 167 167
Oct  8 05:53:59 np0005475493 conmon[145952]: conmon 1e09e85eca49e7348e5d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1e09e85eca49e7348e5d64ddfe5c7a8c0fe1a83f6e5dd5a616a65cb80a4cefc5.scope/container/memory.events
Oct  8 05:53:59 np0005475493 podman[145884]: 2025-10-08 09:53:59.838699477 +0000 UTC m=+0.181470311 container died 1e09e85eca49e7348e5d64ddfe5c7a8c0fe1a83f6e5dd5a616a65cb80a4cefc5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_curran, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  8 05:53:59 np0005475493 systemd[1]: var-lib-containers-storage-overlay-743054a477bf8633542c0f3e5bbe82fc0b267f95270242f509e30c63bf0db02c-merged.mount: Deactivated successfully.
Oct  8 05:53:59 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:53:59 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ef8001080 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:53:59 np0005475493 podman[145884]: 2025-10-08 09:53:59.885715549 +0000 UTC m=+0.228486383 container remove 1e09e85eca49e7348e5d64ddfe5c7a8c0fe1a83f6e5dd5a616a65cb80a4cefc5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_curran, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Oct  8 05:53:59 np0005475493 systemd[1]: libpod-conmon-1e09e85eca49e7348e5d64ddfe5c7a8c0fe1a83f6e5dd5a616a65cb80a4cefc5.scope: Deactivated successfully.
Oct  8 05:53:59 np0005475493 python3.9[145954]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:54:00 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  8 05:54:00 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:54:00 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:54:00 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  8 05:54:00 np0005475493 podman[145978]: 2025-10-08 09:54:00.053980883 +0000 UTC m=+0.062370940 container create 40a97aa94b96ef52677f0698987f488703ed2470de048fd975f32b156d01402c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_williamson, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  8 05:54:00 np0005475493 systemd[1]: Started libpod-conmon-40a97aa94b96ef52677f0698987f488703ed2470de048fd975f32b156d01402c.scope.
Oct  8 05:54:00 np0005475493 podman[145978]: 2025-10-08 09:54:00.025681189 +0000 UTC m=+0.034071296 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 05:54:00 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:54:00 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34241b96de75b6e577e1e3d0292de8e05c8090924926b0ed05e720c56052528d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  8 05:54:00 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34241b96de75b6e577e1e3d0292de8e05c8090924926b0ed05e720c56052528d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 05:54:00 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34241b96de75b6e577e1e3d0292de8e05c8090924926b0ed05e720c56052528d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 05:54:00 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34241b96de75b6e577e1e3d0292de8e05c8090924926b0ed05e720c56052528d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  8 05:54:00 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34241b96de75b6e577e1e3d0292de8e05c8090924926b0ed05e720c56052528d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  8 05:54:00 np0005475493 podman[145978]: 2025-10-08 09:54:00.170290063 +0000 UTC m=+0.178680150 container init 40a97aa94b96ef52677f0698987f488703ed2470de048fd975f32b156d01402c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_williamson, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  8 05:54:00 np0005475493 podman[145978]: 2025-10-08 09:54:00.179149995 +0000 UTC m=+0.187540052 container start 40a97aa94b96ef52677f0698987f488703ed2470de048fd975f32b156d01402c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_williamson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  8 05:54:00 np0005475493 podman[145978]: 2025-10-08 09:54:00.194699558 +0000 UTC m=+0.203089645 container attach 40a97aa94b96ef52677f0698987f488703ed2470de048fd975f32b156d01402c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_williamson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  8 05:54:00 np0005475493 funny_williamson[146020]: --> passed data devices: 0 physical, 1 LVM
Oct  8 05:54:00 np0005475493 funny_williamson[146020]: --> All data devices are unavailable
Oct  8 05:54:00 np0005475493 systemd[1]: libpod-40a97aa94b96ef52677f0698987f488703ed2470de048fd975f32b156d01402c.scope: Deactivated successfully.
Oct  8 05:54:00 np0005475493 podman[145978]: 2025-10-08 09:54:00.534494395 +0000 UTC m=+0.542884482 container died 40a97aa94b96ef52677f0698987f488703ed2470de048fd975f32b156d01402c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_williamson, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Oct  8 05:54:00 np0005475493 systemd[1]: var-lib-containers-storage-overlay-34241b96de75b6e577e1e3d0292de8e05c8090924926b0ed05e720c56052528d-merged.mount: Deactivated successfully.
Oct  8 05:54:00 np0005475493 podman[145978]: 2025-10-08 09:54:00.623459372 +0000 UTC m=+0.631849429 container remove 40a97aa94b96ef52677f0698987f488703ed2470de048fd975f32b156d01402c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_williamson, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  8 05:54:00 np0005475493 systemd[1]: libpod-conmon-40a97aa94b96ef52677f0698987f488703ed2470de048fd975f32b156d01402c.scope: Deactivated successfully.
Oct  8 05:54:00 np0005475493 python3.9[146161]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  8 05:54:01 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v269: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 257 B/s rd, 0 op/s
Oct  8 05:54:01 np0005475493 podman[146347]: 2025-10-08 09:54:01.276306693 +0000 UTC m=+0.115160483 container create 9274abf9b107d784a397ac74f32150d02c50b569e164d04663f4da1229bbc6ad (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_curie, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Oct  8 05:54:01 np0005475493 podman[146347]: 2025-10-08 09:54:01.18565325 +0000 UTC m=+0.024507090 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 05:54:01 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:54:01 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ed4004340 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:54:01 np0005475493 systemd[1]: Started libpod-conmon-9274abf9b107d784a397ac74f32150d02c50b569e164d04663f4da1229bbc6ad.scope.
Oct  8 05:54:01 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:54:01 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:54:01 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:54:01 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:54:01.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:54:01 np0005475493 podman[146347]: 2025-10-08 09:54:01.456520581 +0000 UTC m=+0.295374421 container init 9274abf9b107d784a397ac74f32150d02c50b569e164d04663f4da1229bbc6ad (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_curie, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0)
Oct  8 05:54:01 np0005475493 podman[146347]: 2025-10-08 09:54:01.464561246 +0000 UTC m=+0.303414996 container start 9274abf9b107d784a397ac74f32150d02c50b569e164d04663f4da1229bbc6ad (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_curie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  8 05:54:01 np0005475493 quirky_curie[146398]: 167 167
Oct  8 05:54:01 np0005475493 systemd[1]: libpod-9274abf9b107d784a397ac74f32150d02c50b569e164d04663f4da1229bbc6ad.scope: Deactivated successfully.
Oct  8 05:54:01 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:54:01 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ec8001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:54:01 np0005475493 podman[146347]: 2025-10-08 09:54:01.499890063 +0000 UTC m=+0.338743903 container attach 9274abf9b107d784a397ac74f32150d02c50b569e164d04663f4da1229bbc6ad (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_curie, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct  8 05:54:01 np0005475493 podman[146347]: 2025-10-08 09:54:01.501154674 +0000 UTC m=+0.340008444 container died 9274abf9b107d784a397ac74f32150d02c50b569e164d04663f4da1229bbc6ad (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_curie, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  8 05:54:01 np0005475493 systemd[1]: var-lib-containers-storage-overlay-e0774eacbe220f0970995c0f21d551887b66aefb3de69dca0b0d477efc4f2508-merged.mount: Deactivated successfully.
Oct  8 05:54:01 np0005475493 podman[146347]: 2025-10-08 09:54:01.552383575 +0000 UTC m=+0.391237345 container remove 9274abf9b107d784a397ac74f32150d02c50b569e164d04663f4da1229bbc6ad (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_curie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Oct  8 05:54:01 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:54:01 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:54:01 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:54:01.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:54:01 np0005475493 systemd[1]: libpod-conmon-9274abf9b107d784a397ac74f32150d02c50b569e164d04663f4da1229bbc6ad.scope: Deactivated successfully.
Oct  8 05:54:01 np0005475493 python3.9[146476]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:54:01 np0005475493 podman[146488]: 2025-10-08 09:54:01.759859874 +0000 UTC m=+0.068261094 container create 6da0becab4243fdaeb9fde1a8fd8405a0d397952e3af14a6afa481d2a5340abb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_lovelace, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  8 05:54:01 np0005475493 systemd[1]: Started libpod-conmon-6da0becab4243fdaeb9fde1a8fd8405a0d397952e3af14a6afa481d2a5340abb.scope.
Oct  8 05:54:01 np0005475493 podman[146488]: 2025-10-08 09:54:01.733266906 +0000 UTC m=+0.041668166 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 05:54:01 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:54:01 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b128eda9ede4fbd9a5c3a72694b00b1ae4a2be8ee7d996d5dfec8afd3b4159c9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  8 05:54:01 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b128eda9ede4fbd9a5c3a72694b00b1ae4a2be8ee7d996d5dfec8afd3b4159c9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 05:54:01 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b128eda9ede4fbd9a5c3a72694b00b1ae4a2be8ee7d996d5dfec8afd3b4159c9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 05:54:01 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b128eda9ede4fbd9a5c3a72694b00b1ae4a2be8ee7d996d5dfec8afd3b4159c9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  8 05:54:01 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:54:01 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ecc004140 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:54:01 np0005475493 podman[146488]: 2025-10-08 09:54:01.874938573 +0000 UTC m=+0.183339843 container init 6da0becab4243fdaeb9fde1a8fd8405a0d397952e3af14a6afa481d2a5340abb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_lovelace, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct  8 05:54:01 np0005475493 podman[146488]: 2025-10-08 09:54:01.882240944 +0000 UTC m=+0.190642154 container start 6da0becab4243fdaeb9fde1a8fd8405a0d397952e3af14a6afa481d2a5340abb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_lovelace, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  8 05:54:01 np0005475493 podman[146488]: 2025-10-08 09:54:01.885967017 +0000 UTC m=+0.194368287 container attach 6da0becab4243fdaeb9fde1a8fd8405a0d397952e3af14a6afa481d2a5340abb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_lovelace, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  8 05:54:02 np0005475493 relaxed_lovelace[146513]: {
Oct  8 05:54:02 np0005475493 relaxed_lovelace[146513]:    "1": [
Oct  8 05:54:02 np0005475493 relaxed_lovelace[146513]:        {
Oct  8 05:54:02 np0005475493 relaxed_lovelace[146513]:            "devices": [
Oct  8 05:54:02 np0005475493 relaxed_lovelace[146513]:                "/dev/loop3"
Oct  8 05:54:02 np0005475493 relaxed_lovelace[146513]:            ],
Oct  8 05:54:02 np0005475493 relaxed_lovelace[146513]:            "lv_name": "ceph_lv0",
Oct  8 05:54:02 np0005475493 relaxed_lovelace[146513]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  8 05:54:02 np0005475493 relaxed_lovelace[146513]:            "lv_size": "21470642176",
Oct  8 05:54:02 np0005475493 relaxed_lovelace[146513]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=787292cc-8154-50c4-9e00-e9be3e817149,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=85fe3e7b-5e0f-4a19-934c-310215b2e933,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct  8 05:54:02 np0005475493 relaxed_lovelace[146513]:            "lv_uuid": "znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj",
Oct  8 05:54:02 np0005475493 relaxed_lovelace[146513]:            "name": "ceph_lv0",
Oct  8 05:54:02 np0005475493 relaxed_lovelace[146513]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  8 05:54:02 np0005475493 relaxed_lovelace[146513]:            "tags": {
Oct  8 05:54:02 np0005475493 relaxed_lovelace[146513]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  8 05:54:02 np0005475493 relaxed_lovelace[146513]:                "ceph.block_uuid": "znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj",
Oct  8 05:54:02 np0005475493 relaxed_lovelace[146513]:                "ceph.cephx_lockbox_secret": "",
Oct  8 05:54:02 np0005475493 relaxed_lovelace[146513]:                "ceph.cluster_fsid": "787292cc-8154-50c4-9e00-e9be3e817149",
Oct  8 05:54:02 np0005475493 relaxed_lovelace[146513]:                "ceph.cluster_name": "ceph",
Oct  8 05:54:02 np0005475493 relaxed_lovelace[146513]:                "ceph.crush_device_class": "",
Oct  8 05:54:02 np0005475493 relaxed_lovelace[146513]:                "ceph.encrypted": "0",
Oct  8 05:54:02 np0005475493 relaxed_lovelace[146513]:                "ceph.osd_fsid": "85fe3e7b-5e0f-4a19-934c-310215b2e933",
Oct  8 05:54:02 np0005475493 relaxed_lovelace[146513]:                "ceph.osd_id": "1",
Oct  8 05:54:02 np0005475493 relaxed_lovelace[146513]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  8 05:54:02 np0005475493 relaxed_lovelace[146513]:                "ceph.type": "block",
Oct  8 05:54:02 np0005475493 relaxed_lovelace[146513]:                "ceph.vdo": "0",
Oct  8 05:54:02 np0005475493 relaxed_lovelace[146513]:                "ceph.with_tpm": "0"
Oct  8 05:54:02 np0005475493 relaxed_lovelace[146513]:            },
Oct  8 05:54:02 np0005475493 relaxed_lovelace[146513]:            "type": "block",
Oct  8 05:54:02 np0005475493 relaxed_lovelace[146513]:            "vg_name": "ceph_vg0"
Oct  8 05:54:02 np0005475493 relaxed_lovelace[146513]:        }
Oct  8 05:54:02 np0005475493 relaxed_lovelace[146513]:    ]
Oct  8 05:54:02 np0005475493 relaxed_lovelace[146513]: }
Oct  8 05:54:02 np0005475493 systemd[1]: libpod-6da0becab4243fdaeb9fde1a8fd8405a0d397952e3af14a6afa481d2a5340abb.scope: Deactivated successfully.
Oct  8 05:54:02 np0005475493 podman[146488]: 2025-10-08 09:54:02.206521508 +0000 UTC m=+0.514922728 container died 6da0becab4243fdaeb9fde1a8fd8405a0d397952e3af14a6afa481d2a5340abb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_lovelace, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS)
Oct  8 05:54:02 np0005475493 systemd[1]: var-lib-containers-storage-overlay-b128eda9ede4fbd9a5c3a72694b00b1ae4a2be8ee7d996d5dfec8afd3b4159c9-merged.mount: Deactivated successfully.
Oct  8 05:54:02 np0005475493 podman[146488]: 2025-10-08 09:54:02.264282566 +0000 UTC m=+0.572683796 container remove 6da0becab4243fdaeb9fde1a8fd8405a0d397952e3af14a6afa481d2a5340abb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_lovelace, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  8 05:54:02 np0005475493 systemd[1]: libpod-conmon-6da0becab4243fdaeb9fde1a8fd8405a0d397952e3af14a6afa481d2a5340abb.scope: Deactivated successfully.
Oct  8 05:54:02 np0005475493 python3.9[146680]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  8 05:54:02 np0005475493 podman[146795]: 2025-10-08 09:54:02.758051344 +0000 UTC m=+0.038928826 container create 1c5d1953349eb02ab1d29e03ff1de465b69a9c4fc1431b8dd046636b60409830 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_goodall, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct  8 05:54:02 np0005475493 systemd[1]: Started libpod-conmon-1c5d1953349eb02ab1d29e03ff1de465b69a9c4fc1431b8dd046636b60409830.scope.
Oct  8 05:54:02 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  8 05:54:02 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  8 05:54:02 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:54:02 np0005475493 podman[146795]: 2025-10-08 09:54:02.830699382 +0000 UTC m=+0.111576874 container init 1c5d1953349eb02ab1d29e03ff1de465b69a9c4fc1431b8dd046636b60409830 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_goodall, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  8 05:54:02 np0005475493 podman[146795]: 2025-10-08 09:54:02.740176445 +0000 UTC m=+0.021053957 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 05:54:02 np0005475493 podman[146795]: 2025-10-08 09:54:02.836961739 +0000 UTC m=+0.117839221 container start 1c5d1953349eb02ab1d29e03ff1de465b69a9c4fc1431b8dd046636b60409830 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_goodall, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1)
Oct  8 05:54:02 np0005475493 brave_goodall[146846]: 167 167
Oct  8 05:54:02 np0005475493 systemd[1]: libpod-1c5d1953349eb02ab1d29e03ff1de465b69a9c4fc1431b8dd046636b60409830.scope: Deactivated successfully.
Oct  8 05:54:02 np0005475493 podman[146795]: 2025-10-08 09:54:02.841961444 +0000 UTC m=+0.122838926 container attach 1c5d1953349eb02ab1d29e03ff1de465b69a9c4fc1431b8dd046636b60409830 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_goodall, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Oct  8 05:54:02 np0005475493 podman[146795]: 2025-10-08 09:54:02.842263474 +0000 UTC m=+0.123140956 container died 1c5d1953349eb02ab1d29e03ff1de465b69a9c4fc1431b8dd046636b60409830 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_goodall, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  8 05:54:02 np0005475493 systemd[1]: var-lib-containers-storage-overlay-0b090f1cc8af709c95def9ecf2c7ea915ac9df236554c08d3d2e0cfd99320b23-merged.mount: Deactivated successfully.
Oct  8 05:54:02 np0005475493 podman[146795]: 2025-10-08 09:54:02.877715055 +0000 UTC m=+0.158592537 container remove 1c5d1953349eb02ab1d29e03ff1de465b69a9c4fc1431b8dd046636b60409830 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_goodall, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  8 05:54:02 np0005475493 systemd[1]: libpod-conmon-1c5d1953349eb02ab1d29e03ff1de465b69a9c4fc1431b8dd046636b60409830.scope: Deactivated successfully.
Oct  8 05:54:03 np0005475493 podman[146936]: 2025-10-08 09:54:03.026087263 +0000 UTC m=+0.043464827 container create a8656a9b92451e8f7491a31fe6975423bed486c020acb65f89cc2ddb02873e92 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_clarke, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  8 05:54:03 np0005475493 systemd[1]: Started libpod-conmon-a8656a9b92451e8f7491a31fe6975423bed486c020acb65f89cc2ddb02873e92.scope.
Oct  8 05:54:03 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:54:03 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1d670a9dcd8067784a010ab8f4ea1ab886c4a42954512b30dd6500b9f0a2b85/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  8 05:54:03 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1d670a9dcd8067784a010ab8f4ea1ab886c4a42954512b30dd6500b9f0a2b85/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 05:54:03 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1d670a9dcd8067784a010ab8f4ea1ab886c4a42954512b30dd6500b9f0a2b85/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 05:54:03 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1d670a9dcd8067784a010ab8f4ea1ab886c4a42954512b30dd6500b9f0a2b85/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  8 05:54:03 np0005475493 podman[146936]: 2025-10-08 09:54:03.09478839 +0000 UTC m=+0.112165964 container init a8656a9b92451e8f7491a31fe6975423bed486c020acb65f89cc2ddb02873e92 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_clarke, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  8 05:54:03 np0005475493 podman[146936]: 2025-10-08 09:54:03.00692968 +0000 UTC m=+0.024307284 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 05:54:03 np0005475493 podman[146936]: 2025-10-08 09:54:03.103615692 +0000 UTC m=+0.120993256 container start a8656a9b92451e8f7491a31fe6975423bed486c020acb65f89cc2ddb02873e92 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_clarke, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  8 05:54:03 np0005475493 podman[146936]: 2025-10-08 09:54:03.107745277 +0000 UTC m=+0.125122871 container attach a8656a9b92451e8f7491a31fe6975423bed486c020acb65f89cc2ddb02873e92 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_clarke, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  8 05:54:03 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v270: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 257 B/s rd, 0 op/s
Oct  8 05:54:03 np0005475493 python3.9[146975]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  8 05:54:03 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:54:03 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ecc004140 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:54:03 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:54:03 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:54:03 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:54:03.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:54:03 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:54:03 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ef8002910 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:54:03 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 05:54:03 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:54:03 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:54:03 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:54:03.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:54:03 np0005475493 lvm[147184]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct  8 05:54:03 np0005475493 lvm[147184]: VG ceph_vg0 finished
Oct  8 05:54:03 np0005475493 trusting_clarke[146979]: {}
Oct  8 05:54:03 np0005475493 podman[146936]: 2025-10-08 09:54:03.807892539 +0000 UTC m=+0.825270103 container died a8656a9b92451e8f7491a31fe6975423bed486c020acb65f89cc2ddb02873e92 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_clarke, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct  8 05:54:03 np0005475493 systemd[1]: libpod-a8656a9b92451e8f7491a31fe6975423bed486c020acb65f89cc2ddb02873e92.scope: Deactivated successfully.
Oct  8 05:54:03 np0005475493 systemd[1]: libpod-a8656a9b92451e8f7491a31fe6975423bed486c020acb65f89cc2ddb02873e92.scope: Consumed 1.015s CPU time.
Oct  8 05:54:03 np0005475493 systemd[1]: var-lib-containers-storage-overlay-a1d670a9dcd8067784a010ab8f4ea1ab886c4a42954512b30dd6500b9f0a2b85-merged.mount: Deactivated successfully.
Oct  8 05:54:03 np0005475493 podman[146936]: 2025-10-08 09:54:03.864874221 +0000 UTC m=+0.882251795 container remove a8656a9b92451e8f7491a31fe6975423bed486c020acb65f89cc2ddb02873e92 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_clarke, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  8 05:54:03 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:54:03 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ed4004340 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:54:03 np0005475493 systemd[1]: libpod-conmon-a8656a9b92451e8f7491a31fe6975423bed486c020acb65f89cc2ddb02873e92.scope: Deactivated successfully.
Oct  8 05:54:03 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  8 05:54:03 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:54:03 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  8 05:54:03 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:54:03 np0005475493 python3.9[147210]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  8 05:54:04 np0005475493 python3.9[147403]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:54:04 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:54:04 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:54:05 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v271: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 429 B/s rd, 0 op/s
Oct  8 05:54:05 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:54:05 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ed4004340 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:54:05 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:54:05 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:54:05 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:54:05.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:54:05 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:54:05 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ecc004140 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:54:05 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:54:05 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 05:54:05 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:54:05.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 05:54:05 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:09:54:05] "GET /metrics HTTP/1.1" 200 48333 "" "Prometheus/2.51.0"
Oct  8 05:54:05 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:09:54:05] "GET /metrics HTTP/1.1" 200 48333 "" "Prometheus/2.51.0"
Oct  8 05:54:05 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:54:05 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ef8002910 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:54:05 np0005475493 python3.9[147554]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'machine'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  8 05:54:06 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:54:06.984Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct  8 05:54:06 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:54:06.985Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 05:54:07 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v272: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 257 B/s rd, 0 op/s
Oct  8 05:54:07 np0005475493 python3.9[147709]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings="datacentre:3e:0a:d8:76:c8:90" external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch #012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  8 05:54:07 np0005475493 ovs-vsctl[147710]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings=datacentre:3e:0a:d8:76:c8:90 external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch
Oct  8 05:54:07 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:54:07 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ed4004340 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:54:07 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:54:07 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000034s ======
Oct  8 05:54:07 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:54:07.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct  8 05:54:07 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:54:07 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ef8002910 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:54:07 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:54:07 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 05:54:07 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:54:07.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 05:54:07 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:54:07 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ec8001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:54:07 np0005475493 python3.9[147862]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail#012ovs-vsctl show | grep -q "Manager"#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  8 05:54:08 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 05:54:08 np0005475493 python3.9[148018]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl --timeout=5 --id=@manager -- create Manager target=\"ptcp:********@manager#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  8 05:54:08 np0005475493 ovs-vsctl[148019]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=5 --id=@manager -- create Manager "target=\"ptcp:6640:127.0.0.1\"" -- add Open_vSwitch . manager_options @manager
Oct  8 05:54:08 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:54:08.858Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 05:54:09 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v273: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 343 B/s rd, 0 op/s
Oct  8 05:54:09 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:54:09 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ecc004140 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:54:09 np0005475493 python3.9[148170]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  8 05:54:09 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:54:09 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000034s ======
Oct  8 05:54:09 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:54:09.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct  8 05:54:09 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:54:09 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ed4004340 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:54:09 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:54:09 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:54:09 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:54:09.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:54:09 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:54:09 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ef8003a10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:54:10 np0005475493 python3.9[148325]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  8 05:54:11 np0005475493 python3.9[148477]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  8 05:54:11 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v274: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct  8 05:54:11 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:54:11 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ec8003240 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:54:11 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:54:11 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 05:54:11 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:54:11.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 05:54:11 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:54:11 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ecc004140 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:54:11 np0005475493 python3.9[148556]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  8 05:54:11 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:54:11 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000034s ======
Oct  8 05:54:11 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:54:11.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct  8 05:54:11 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:54:11 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ed4004340 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:54:12 np0005475493 python3.9[148709]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  8 05:54:12 np0005475493 python3.9[148787]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  8 05:54:13 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v275: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct  8 05:54:13 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:54:13 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ef8003a10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:54:13 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:54:13 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000034s ======
Oct  8 05:54:13 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:54:13.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct  8 05:54:13 np0005475493 python3.9[148940]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:54:13 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:54:13 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ec8003240 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:54:13 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 05:54:13 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:54:13 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:54:13 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:54:13.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:54:13 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:54:13 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ecc004140 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:54:14 np0005475493 python3.9[149093]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  8 05:54:14 np0005475493 python3.9[149171]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:54:15 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v276: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Oct  8 05:54:15 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:54:15 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ed4004340 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:54:15 np0005475493 python3.9[149324]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  8 05:54:15 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:54:15 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:54:15 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:54:15.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:54:15 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:54:15 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ef8003a10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:54:15 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:54:15 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 05:54:15 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:54:15.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 05:54:15 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:09:54:15] "GET /metrics HTTP/1.1" 200 48333 "" "Prometheus/2.51.0"
Oct  8 05:54:15 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:09:54:15] "GET /metrics HTTP/1.1" 200 48333 "" "Prometheus/2.51.0"
Oct  8 05:54:15 np0005475493 python3.9[149402]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:54:15 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:54:15 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ec8003240 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:54:16 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-haproxy-nfs-cephfs-compute-0-cwhopp[96588]: [WARNING] 280/095416 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct  8 05:54:16 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:54:16.986Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 05:54:17 np0005475493 python3.9[149555]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  8 05:54:17 np0005475493 systemd[1]: Reloading.
Oct  8 05:54:17 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v277: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct  8 05:54:17 np0005475493 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  8 05:54:17 np0005475493 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  8 05:54:17 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:54:17 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ecc004140 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:54:17 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:54:17 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:54:17 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:54:17.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:54:17 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:54:17 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ed4004340 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:54:17 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:54:17 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:54:17 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:54:17.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:54:17 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  8 05:54:17 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  8 05:54:17 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:54:17 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ef8004720 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:54:17 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 05:54:17 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 05:54:18 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 05:54:18 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 05:54:18 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 05:54:18 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 05:54:18 np0005475493 python3.9[149747]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  8 05:54:18 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 05:54:18 np0005475493 python3.9[149825]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:54:18 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:54:18.859Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 05:54:19 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v278: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct  8 05:54:19 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:54:19 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ec8003240 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:54:19 np0005475493 python3.9[149978]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  8 05:54:19 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:54:19 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:54:19 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:54:19.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:54:19 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:54:19 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ecc004140 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:54:19 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:54:19 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:54:19 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:54:19.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:54:19 np0005475493 python3.9[150056]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:54:19 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:54:19 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ed4004340 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:54:20 np0005475493 python3.9[150209]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  8 05:54:20 np0005475493 systemd[1]: Reloading.
Oct  8 05:54:20 np0005475493 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  8 05:54:20 np0005475493 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  8 05:54:21 np0005475493 systemd[1]: Starting Create netns directory...
Oct  8 05:54:21 np0005475493 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct  8 05:54:21 np0005475493 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct  8 05:54:21 np0005475493 systemd[1]: Finished Create netns directory.
Oct  8 05:54:21 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v279: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct  8 05:54:21 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:54:21 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ef8004720 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:54:21 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:54:21 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:54:21 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:54:21.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:54:21 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:54:21 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ec8003240 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:54:21 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:54:21 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:54:21 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:54:21.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:54:21 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:54:21 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ecc004140 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:54:21 np0005475493 python3.9[150429]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  8 05:54:22 np0005475493 python3.9[150582]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_controller/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  8 05:54:23 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v280: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct  8 05:54:23 np0005475493 python3.9[150706]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_controller/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759917262.210522-1364-91997752062573/.source _original_basename=healthcheck follow=False checksum=4098dd010265fabdf5c26b97d169fc4e575ff457 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct  8 05:54:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:54:23 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ed4004340 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:54:23 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:54:23 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000034s ======
Oct  8 05:54:23 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:54:23.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct  8 05:54:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:54:23 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ef8004720 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:54:23 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 05:54:23 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:54:23 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:54:23 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:54:23.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:54:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:54:23 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ec8003240 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:54:24 np0005475493 python3.9[150859]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  8 05:54:24 np0005475493 python3.9[151011]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_controller.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  8 05:54:25 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v281: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 85 B/s wr, 0 op/s
Oct  8 05:54:25 np0005475493 python3.9[151135]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_controller.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1759917264.3798847-1439-215924471615963/.source.json _original_basename=.ast15ltk follow=False checksum=2328fc98619beeb08ee32b01f15bb43094c10b61 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:54:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:54:25 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ecc004160 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:54:25 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:54:25 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:54:25 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:54:25.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:54:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:54:25 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ed4004340 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:54:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:54:25 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  8 05:54:25 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:54:25 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:54:25 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:54:25.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:54:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:09:54:25] "GET /metrics HTTP/1.1" 200 48336 "" "Prometheus/2.51.0"
Oct  8 05:54:25 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:09:54:25] "GET /metrics HTTP/1.1" 200 48336 "" "Prometheus/2.51.0"
Oct  8 05:54:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:54:25 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ef8004720 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:54:26 np0005475493 python3.9[151288]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_controller state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:54:26 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:54:26.986Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct  8 05:54:26 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:54:26.987Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 05:54:27 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v282: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 596 B/s rd, 85 B/s wr, 0 op/s
Oct  8 05:54:27 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:54:27 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ec8003240 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:54:27 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:54:27 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:54:27 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:54:27.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:54:27 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:54:27 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ecc004180 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:54:27 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:54:27 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 05:54:27 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:54:27.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 05:54:27 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:54:27 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9eec001ab0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:54:28 np0005475493 python3.9[151719]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_controller config_pattern=*.json debug=False
Oct  8 05:54:28 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 05:54:28 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:54:28 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  8 05:54:28 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:54:28 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 05:54:28 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:54:28.860Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 05:54:29 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v283: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Oct  8 05:54:29 np0005475493 python3.9[151871]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct  8 05:54:29 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:54:29 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ed4004340 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:54:29 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:54:29 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:54:29 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:54:29.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:54:29 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:54:29 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ef8004720 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:54:29 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:54:29 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:54:29 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:54:29.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:54:29 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:54:29 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ecc0041a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:54:30 np0005475493 python3.9[152025]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Oct  8 05:54:31 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v284: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 938 B/s wr, 3 op/s
Oct  8 05:54:31 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:54:31 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9eec001ab0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:54:31 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:54:31 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 05:54:31 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:54:31.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 05:54:31 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:54:31 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ed4004340 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:54:31 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:54:31 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct  8 05:54:31 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:54:31 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:54:31 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:54:31.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:54:31 np0005475493 python3[152204]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_controller config_id=ovn_controller config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Oct  8 05:54:31 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:54:31 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ef8004720 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:54:31 np0005475493 ceph-osd[81751]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  8 05:54:31 np0005475493 ceph-osd[81751]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.1 total, 600.0 interval#012Cumulative writes: 8245 writes, 33K keys, 8245 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.04 MB/s#012Cumulative WAL: 8245 writes, 1525 syncs, 5.41 writes per sync, written: 0.02 GB, 0.04 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 8245 writes, 33K keys, 8245 commit groups, 1.0 writes per commit group, ingest: 21.32 MB, 0.04 MB/s#012Interval WAL: 8245 writes, 1525 syncs, 5.41 writes per sync, written: 0.02 GB, 0.04 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x559f28fb7350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.3e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x559f28fb7350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 
collections: 2 last_copies: 8 last_secs: 4.3e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 
seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable
Oct  8 05:54:32 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  8 05:54:32 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  8 05:54:33 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v285: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 938 B/s wr, 3 op/s
Oct  8 05:54:33 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:54:33 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ecc0041c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:54:33 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:54:33 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:54:33 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:54:33.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:54:33 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:54:33 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9eec001ab0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:54:33 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 05:54:33 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:54:33 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:54:33 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:54:33.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:54:33 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:54:33 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ed4004340 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:54:35 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v286: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Oct  8 05:54:35 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:54:35 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ef8004720 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:54:35 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:54:35 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:54:35 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:54:35.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:54:35 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:54:35 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ecc0041c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:54:35 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:54:35 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:54:35 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:54:35.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:54:35 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:09:54:35] "GET /metrics HTTP/1.1" 200 48336 "" "Prometheus/2.51.0"
Oct  8 05:54:35 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:09:54:35] "GET /metrics HTTP/1.1" 200 48336 "" "Prometheus/2.51.0"
Oct  8 05:54:35 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:54:35 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9eec001ab0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:54:36 np0005475493 podman[152219]: 2025-10-08 09:54:36.952389881 +0000 UTC m=+5.102858281 image pull 70c92fb64e1eda6ef063d34e60e9a541e44edbaa51e757e8304331202c76a3a7 quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857
Oct  8 05:54:36 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:54:36.987Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 05:54:37 np0005475493 podman[152341]: 2025-10-08 09:54:37.093335408 +0000 UTC m=+0.047892519 container create 750e81e9592502f2fa35d047c8ae9bd75eb38ed415148db558454f5281397e7f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible)
Oct  8 05:54:37 np0005475493 podman[152341]: 2025-10-08 09:54:37.066160697 +0000 UTC m=+0.020717828 image pull 70c92fb64e1eda6ef063d34e60e9a541e44edbaa51e757e8304331202c76a3a7 quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857
Oct  8 05:54:37 np0005475493 python3[152204]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_controller --conmon-pidfile /run/ovn_controller.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=ovn_controller --label container_name=ovn_controller --label managed_by=edpm_ansible --label config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --user root --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume 
/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857
Oct  8 05:54:37 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v287: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 938 B/s wr, 2 op/s
Oct  8 05:54:37 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:54:37 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ed4004340 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:54:37 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:54:37 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:54:37 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:54:37.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:54:37 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:54:37 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ef8004720 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:54:37 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:54:37 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:54:37 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:54:37.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:54:37 np0005475493 python3.9[152532]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  8 05:54:37 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:54:37 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ecc0041c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:54:38 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-haproxy-nfs-cephfs-compute-0-cwhopp[96588]: [WARNING] 280/095438 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct  8 05:54:38 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 05:54:38 np0005475493 python3.9[152687]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:54:38 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:54:38.861Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 05:54:39 np0005475493 python3.9[152763]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_controller_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  8 05:54:39 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v288: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 938 B/s wr, 2 op/s
Oct  8 05:54:39 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:54:39 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9eec001ab0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:54:39 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:54:39 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 05:54:39 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:54:39.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 05:54:39 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:54:39 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ed4004340 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:54:39 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:54:39 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:54:39 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:54:39.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:54:39 np0005475493 python3.9[152915]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759917279.1625543-1703-115701383212134/source dest=/etc/systemd/system/edpm_ovn_controller.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:54:39 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:54:39 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ef800c300 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:54:40 np0005475493 python3.9[152992]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct  8 05:54:40 np0005475493 systemd[1]: Reloading.
Oct  8 05:54:40 np0005475493 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  8 05:54:40 np0005475493 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  8 05:54:41 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v289: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Oct  8 05:54:41 np0005475493 python3.9[153105]: ansible-systemd Invoked with state=restarted name=edpm_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  8 05:54:41 np0005475493 systemd[1]: Reloading.
Oct  8 05:54:41 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:54:41 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ecc0041c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:54:41 np0005475493 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  8 05:54:41 np0005475493 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  8 05:54:41 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:54:41 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:54:41 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:54:41.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:54:41 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:54:41 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9eec001ab0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:54:41 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:54:41 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:54:41 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:54:41.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:54:41 np0005475493 systemd[1]: Starting ovn_controller container...
Oct  8 05:54:41 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:54:41 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56be1da2d7b5a9f201fba1da953ea696763ec191ff50f2e7e39fa2399a7ba07a/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Oct  8 05:54:41 np0005475493 systemd[1]: Started /usr/bin/podman healthcheck run 750e81e9592502f2fa35d047c8ae9bd75eb38ed415148db558454f5281397e7f.
Oct  8 05:54:41 np0005475493 podman[153153]: 2025-10-08 09:54:41.815130131 +0000 UTC m=+0.142160076 container init 750e81e9592502f2fa35d047c8ae9bd75eb38ed415148db558454f5281397e7f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, config_id=ovn_controller, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  8 05:54:41 np0005475493 ovn_controller[153187]: + sudo -E kolla_set_configs
Oct  8 05:54:41 np0005475493 podman[153153]: 2025-10-08 09:54:41.845789413 +0000 UTC m=+0.172819318 container start 750e81e9592502f2fa35d047c8ae9bd75eb38ed415148db558454f5281397e7f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct  8 05:54:41 np0005475493 edpm-start-podman-container[153153]: ovn_controller
Oct  8 05:54:41 np0005475493 systemd[1]: Created slice User Slice of UID 0.
Oct  8 05:54:41 np0005475493 systemd[1]: Starting User Runtime Directory /run/user/0...
Oct  8 05:54:41 np0005475493 systemd[1]: Finished User Runtime Directory /run/user/0.
Oct  8 05:54:41 np0005475493 systemd[1]: Starting User Manager for UID 0...
Oct  8 05:54:41 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:54:41 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ed4004340 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:54:41 np0005475493 edpm-start-podman-container[153146]: Creating additional drop-in dependency for "ovn_controller" (750e81e9592502f2fa35d047c8ae9bd75eb38ed415148db558454f5281397e7f)
Oct  8 05:54:41 np0005475493 podman[153194]: 2025-10-08 09:54:41.93087113 +0000 UTC m=+0.075328349 container health_status 750e81e9592502f2fa35d047c8ae9bd75eb38ed415148db558454f5281397e7f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=starting, health_failing_streak=1, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  8 05:54:41 np0005475493 systemd[1]: 750e81e9592502f2fa35d047c8ae9bd75eb38ed415148db558454f5281397e7f-61a69f9453ed5888.service: Main process exited, code=exited, status=1/FAILURE
Oct  8 05:54:41 np0005475493 systemd[1]: 750e81e9592502f2fa35d047c8ae9bd75eb38ed415148db558454f5281397e7f-61a69f9453ed5888.service: Failed with result 'exit-code'.
Oct  8 05:54:41 np0005475493 systemd[1]: Reloading.
Oct  8 05:54:42 np0005475493 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  8 05:54:42 np0005475493 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  8 05:54:42 np0005475493 systemd[153219]: Queued start job for default target Main User Target.
Oct  8 05:54:42 np0005475493 systemd[153219]: Created slice User Application Slice.
Oct  8 05:54:42 np0005475493 systemd[153219]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Oct  8 05:54:42 np0005475493 systemd[153219]: Started Daily Cleanup of User's Temporary Directories.
Oct  8 05:54:42 np0005475493 systemd[153219]: Reached target Paths.
Oct  8 05:54:42 np0005475493 systemd[153219]: Reached target Timers.
Oct  8 05:54:42 np0005475493 systemd[153219]: Starting D-Bus User Message Bus Socket...
Oct  8 05:54:42 np0005475493 systemd[153219]: Starting Create User's Volatile Files and Directories...
Oct  8 05:54:42 np0005475493 systemd[153219]: Finished Create User's Volatile Files and Directories.
Oct  8 05:54:42 np0005475493 systemd[153219]: Listening on D-Bus User Message Bus Socket.
Oct  8 05:54:42 np0005475493 systemd[153219]: Reached target Sockets.
Oct  8 05:54:42 np0005475493 systemd[153219]: Reached target Basic System.
Oct  8 05:54:42 np0005475493 systemd[153219]: Reached target Main User Target.
Oct  8 05:54:42 np0005475493 systemd[153219]: Startup finished in 156ms.
Oct  8 05:54:42 np0005475493 systemd[1]: Started User Manager for UID 0.
Oct  8 05:54:42 np0005475493 systemd[1]: Started ovn_controller container.
Oct  8 05:54:42 np0005475493 systemd[1]: Started Session c1 of User root.
Oct  8 05:54:42 np0005475493 ovn_controller[153187]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct  8 05:54:42 np0005475493 ovn_controller[153187]: INFO:__main__:Validating config file
Oct  8 05:54:42 np0005475493 ovn_controller[153187]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct  8 05:54:42 np0005475493 ovn_controller[153187]: INFO:__main__:Writing out command to execute
Oct  8 05:54:42 np0005475493 systemd[1]: session-c1.scope: Deactivated successfully.
Oct  8 05:54:42 np0005475493 ovn_controller[153187]: ++ cat /run_command
Oct  8 05:54:42 np0005475493 ovn_controller[153187]: + CMD='/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Oct  8 05:54:42 np0005475493 ovn_controller[153187]: + ARGS=
Oct  8 05:54:42 np0005475493 ovn_controller[153187]: + sudo kolla_copy_cacerts
Oct  8 05:54:42 np0005475493 systemd[1]: Started Session c2 of User root.
Oct  8 05:54:42 np0005475493 ovn_controller[153187]: + [[ ! -n '' ]]
Oct  8 05:54:42 np0005475493 ovn_controller[153187]: + . kolla_extend_start
Oct  8 05:54:42 np0005475493 ovn_controller[153187]: Running command: '/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Oct  8 05:54:42 np0005475493 ovn_controller[153187]: + echo 'Running command: '\''/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '\'''
Oct  8 05:54:42 np0005475493 ovn_controller[153187]: + umask 0022
Oct  8 05:54:42 np0005475493 ovn_controller[153187]: + exec /usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt
Oct  8 05:54:42 np0005475493 systemd[1]: session-c2.scope: Deactivated successfully.
Oct  8 05:54:42 np0005475493 ovn_controller[153187]: 2025-10-08T09:54:42Z|00001|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Oct  8 05:54:42 np0005475493 ovn_controller[153187]: 2025-10-08T09:54:42Z|00002|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Oct  8 05:54:42 np0005475493 ovn_controller[153187]: 2025-10-08T09:54:42Z|00003|main|INFO|OVN internal version is : [24.03.7-20.33.0-76.8]
Oct  8 05:54:42 np0005475493 ovn_controller[153187]: 2025-10-08T09:54:42Z|00004|main|INFO|OVS IDL reconnected, force recompute.
Oct  8 05:54:42 np0005475493 ovn_controller[153187]: 2025-10-08T09:54:42Z|00005|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Oct  8 05:54:42 np0005475493 ovn_controller[153187]: 2025-10-08T09:54:42Z|00006|main|INFO|OVNSB IDL reconnected, force recompute.
Oct  8 05:54:42 np0005475493 NetworkManager[44872]: <info>  [1759917282.4137] manager: (br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/16)
Oct  8 05:54:42 np0005475493 NetworkManager[44872]: <info>  [1759917282.4145] device (br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  8 05:54:42 np0005475493 NetworkManager[44872]: <info>  [1759917282.4156] manager: (br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/17)
Oct  8 05:54:42 np0005475493 NetworkManager[44872]: <info>  [1759917282.4162] manager: (br-int): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/18)
Oct  8 05:54:42 np0005475493 NetworkManager[44872]: <info>  [1759917282.4166] device (br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Oct  8 05:54:42 np0005475493 kernel: br-int: entered promiscuous mode
Oct  8 05:54:42 np0005475493 ovn_controller[153187]: 2025-10-08T09:54:42Z|00007|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connected
Oct  8 05:54:42 np0005475493 ovn_controller[153187]: 2025-10-08T09:54:42Z|00008|features|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Oct  8 05:54:42 np0005475493 ovn_controller[153187]: 2025-10-08T09:54:42Z|00009|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Oct  8 05:54:42 np0005475493 ovn_controller[153187]: 2025-10-08T09:54:42Z|00010|features|INFO|OVS Feature: ct_zero_snat, state: supported
Oct  8 05:54:42 np0005475493 ovn_controller[153187]: 2025-10-08T09:54:42Z|00011|features|INFO|OVS Feature: ct_flush, state: supported
Oct  8 05:54:42 np0005475493 ovn_controller[153187]: 2025-10-08T09:54:42Z|00012|features|INFO|OVS Feature: dp_hash_l4_sym_support, state: supported
Oct  8 05:54:42 np0005475493 ovn_controller[153187]: 2025-10-08T09:54:42Z|00013|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Oct  8 05:54:42 np0005475493 ovn_controller[153187]: 2025-10-08T09:54:42Z|00014|main|INFO|OVS feature set changed, force recompute.
Oct  8 05:54:42 np0005475493 ovn_controller[153187]: 2025-10-08T09:54:42Z|00015|ofctrl|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Oct  8 05:54:42 np0005475493 ovn_controller[153187]: 2025-10-08T09:54:42Z|00016|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Oct  8 05:54:42 np0005475493 ovn_controller[153187]: 2025-10-08T09:54:42Z|00017|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Oct  8 05:54:42 np0005475493 ovn_controller[153187]: 2025-10-08T09:54:42Z|00018|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Oct  8 05:54:42 np0005475493 ovn_controller[153187]: 2025-10-08T09:54:42Z|00019|main|INFO|OVS feature set changed, force recompute.
Oct  8 05:54:42 np0005475493 ovn_controller[153187]: 2025-10-08T09:54:42Z|00020|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Oct  8 05:54:42 np0005475493 ovn_controller[153187]: 2025-10-08T09:54:42Z|00021|ofctrl|INFO|ofctrl-wait-before-clear is now 8000 ms (was 0 ms)
Oct  8 05:54:42 np0005475493 ovn_controller[153187]: 2025-10-08T09:54:42Z|00022|main|INFO|OVS OpenFlow connection reconnected,force recompute.
Oct  8 05:54:42 np0005475493 ovn_controller[153187]: 2025-10-08T09:54:42Z|00023|features|INFO|OVS DB schema supports 4 flow table prefixes, our IDL supports: 4
Oct  8 05:54:42 np0005475493 ovn_controller[153187]: 2025-10-08T09:54:42Z|00024|main|INFO|Setting flow table prefixes: ip_src, ip_dst, ipv6_src, ipv6_dst.
Oct  8 05:54:42 np0005475493 ovn_controller[153187]: 2025-10-08T09:54:42Z|00001|statctrl(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Oct  8 05:54:42 np0005475493 ovn_controller[153187]: 2025-10-08T09:54:42Z|00001|pinctrl(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Oct  8 05:54:42 np0005475493 ovn_controller[153187]: 2025-10-08T09:54:42Z|00002|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Oct  8 05:54:42 np0005475493 ovn_controller[153187]: 2025-10-08T09:54:42Z|00002|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Oct  8 05:54:42 np0005475493 ovn_controller[153187]: 2025-10-08T09:54:42Z|00003|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Oct  8 05:54:42 np0005475493 NetworkManager[44872]: <info>  [1759917282.4328] manager: (ovn-9a0c8b-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/19)
Oct  8 05:54:42 np0005475493 ovn_controller[153187]: 2025-10-08T09:54:42Z|00003|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Oct  8 05:54:42 np0005475493 NetworkManager[44872]: <info>  [1759917282.4337] manager: (ovn-6f73e5-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/20)
Oct  8 05:54:42 np0005475493 systemd-udevd[153322]: Network interface NamePolicy= disabled on kernel command line.
Oct  8 05:54:42 np0005475493 kernel: genev_sys_6081: entered promiscuous mode
Oct  8 05:54:42 np0005475493 systemd-udevd[153324]: Network interface NamePolicy= disabled on kernel command line.
Oct  8 05:54:42 np0005475493 NetworkManager[44872]: <info>  [1759917282.4539] device (genev_sys_6081): carrier: link connected
Oct  8 05:54:42 np0005475493 NetworkManager[44872]: <info>  [1759917282.4542] manager: (genev_sys_6081): new Generic device (/org/freedesktop/NetworkManager/Devices/21)
Oct  8 05:54:42 np0005475493 NetworkManager[44872]: <info>  [1759917282.8823] manager: (ovn-b58ac6-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/22)
Oct  8 05:54:43 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v290: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Oct  8 05:54:43 np0005475493 python3.9[153455]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove open . other_config hw-offload#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  8 05:54:43 np0005475493 ovs-vsctl[153456]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . other_config hw-offload
Oct  8 05:54:43 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:54:43 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ef800c300 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:54:43 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:54:43 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:54:43 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:54:43.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:54:43 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:54:43 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ecc0041c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:54:43 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 05:54:43 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:54:43 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:54:43 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:54:43.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:54:43 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:54:43 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ecc0041c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:54:44 np0005475493 python3.9[153609]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl get Open_vSwitch . external_ids:ovn-cms-options | sed 's/\"//g'#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  8 05:54:44 np0005475493 ovs-vsctl[153611]: ovs|00001|db_ctl_base|ERR|no key "ovn-cms-options" in Open_vSwitch record "." column external_ids
Oct  8 05:54:44 np0005475493 python3.9[153764]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  8 05:54:44 np0005475493 ovs-vsctl[153765]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options
Oct  8 05:54:45 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v291: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Oct  8 05:54:45 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:54:45 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ed4004340 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:54:45 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:54:45 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:54:45 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:54:45.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:54:45 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:54:45 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ed4004340 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:54:45 np0005475493 systemd[1]: session-51.scope: Deactivated successfully.
Oct  8 05:54:45 np0005475493 systemd[1]: session-51.scope: Consumed 55.349s CPU time.
Oct  8 05:54:45 np0005475493 systemd-logind[798]: Session 51 logged out. Waiting for processes to exit.
Oct  8 05:54:45 np0005475493 systemd-logind[798]: Removed session 51.
Oct  8 05:54:45 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:54:45 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:54:45 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:54:45.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:54:45 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:09:54:45] "GET /metrics HTTP/1.1" 200 48336 "" "Prometheus/2.51.0"
Oct  8 05:54:45 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:09:54:45] "GET /metrics HTTP/1.1" 200 48336 "" "Prometheus/2.51.0"
Oct  8 05:54:45 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:54:45 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9eec001ab0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:54:46 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:54:46.988Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 05:54:47 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v292: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Oct  8 05:54:47 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:54:47 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ecc0041c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:54:47 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:54:47 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:54:47 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:54:47.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:54:47 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:54:47 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ecc0041c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:54:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] Optimize plan auto_2025-10-08_09:54:47
Oct  8 05:54:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  8 05:54:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] do_upmap
Oct  8 05:54:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] pools ['backups', '.mgr', 'default.rgw.control', 'vms', 'cephfs.cephfs.meta', 'volumes', '.rgw.root', 'images', 'default.rgw.meta', '.nfs', 'cephfs.cephfs.data', 'default.rgw.log']
Oct  8 05:54:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] prepared 0/10 upmap changes
Oct  8 05:54:47 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:54:47 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:54:47 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:54:47.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:54:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] _maybe_adjust
Oct  8 05:54:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 05:54:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  8 05:54:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 05:54:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 05:54:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 05:54:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 05:54:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 05:54:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 05:54:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 05:54:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 05:54:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 05:54:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  8 05:54:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 05:54:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 05:54:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 05:54:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct  8 05:54:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 05:54:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  8 05:54:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 05:54:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 05:54:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 05:54:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  8 05:54:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 05:54:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  8 05:54:47 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  8 05:54:47 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  8 05:54:47 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 05:54:47 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 05:54:47 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:54:47 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ecc0041c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:54:47 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  8 05:54:47 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  8 05:54:47 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  8 05:54:47 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  8 05:54:47 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  8 05:54:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  8 05:54:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  8 05:54:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  8 05:54:48 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 05:54:48 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 05:54:48 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 05:54:48 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 05:54:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  8 05:54:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  8 05:54:48 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 05:54:48 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:54:48.862Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 05:54:49 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v293: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Oct  8 05:54:49 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:54:49 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ecc0041c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:54:49 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:54:49 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 05:54:49 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:54:49.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 05:54:49 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:54:49 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ec8003240 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:54:49 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:54:49 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:54:49 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:54:49.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:54:49 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:54:49 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ed8001bd0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:54:51 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v294: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct  8 05:54:51 np0005475493 systemd-logind[798]: New session 53 of user zuul.
Oct  8 05:54:51 np0005475493 systemd[1]: Started Session 53 of User zuul.
Oct  8 05:54:51 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:54:51 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ef800c300 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:54:51 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:54:51 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:54:51 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:54:51.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:54:51 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:54:51 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ecc0041c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:54:51 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:54:51 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:54:51 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:54:51.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:54:51 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:54:51 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ec8003240 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:54:52 np0005475493 python3.9[153953]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  8 05:54:52 np0005475493 systemd[1]: Stopping User Manager for UID 0...
Oct  8 05:54:52 np0005475493 systemd[153219]: Activating special unit Exit the Session...
Oct  8 05:54:52 np0005475493 systemd[153219]: Stopped target Main User Target.
Oct  8 05:54:52 np0005475493 systemd[153219]: Stopped target Basic System.
Oct  8 05:54:52 np0005475493 systemd[153219]: Stopped target Paths.
Oct  8 05:54:52 np0005475493 systemd[153219]: Stopped target Sockets.
Oct  8 05:54:52 np0005475493 systemd[153219]: Stopped target Timers.
Oct  8 05:54:52 np0005475493 systemd[153219]: Stopped Daily Cleanup of User's Temporary Directories.
Oct  8 05:54:52 np0005475493 systemd[153219]: Closed D-Bus User Message Bus Socket.
Oct  8 05:54:52 np0005475493 systemd[153219]: Stopped Create User's Volatile Files and Directories.
Oct  8 05:54:52 np0005475493 systemd[153219]: Removed slice User Application Slice.
Oct  8 05:54:52 np0005475493 systemd[153219]: Reached target Shutdown.
Oct  8 05:54:52 np0005475493 systemd[153219]: Finished Exit the Session.
Oct  8 05:54:52 np0005475493 systemd[153219]: Reached target Exit the Session.
Oct  8 05:54:52 np0005475493 systemd[1]: user@0.service: Deactivated successfully.
Oct  8 05:54:52 np0005475493 systemd[1]: Stopped User Manager for UID 0.
Oct  8 05:54:52 np0005475493 systemd[1]: Stopping User Runtime Directory /run/user/0...
Oct  8 05:54:52 np0005475493 systemd[1]: run-user-0.mount: Deactivated successfully.
Oct  8 05:54:52 np0005475493 systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Oct  8 05:54:52 np0005475493 systemd[1]: Stopped User Runtime Directory /run/user/0.
Oct  8 05:54:52 np0005475493 systemd[1]: Removed slice User Slice of UID 0.
Oct  8 05:54:53 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v295: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct  8 05:54:53 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:54:53 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ed8001bd0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:54:53 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:54:53 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:54:53 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:54:53.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:54:53 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:54:53 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ef800c300 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:54:53 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 05:54:53 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:54:53 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:54:53 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:54:53.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:54:53 np0005475493 python3.9[154112]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct  8 05:54:53 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:54:53 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ecc0041c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:54:54 np0005475493 python3.9[154265]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  8 05:54:55 np0005475493 python3.9[154417]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/kill_scripts setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  8 05:54:55 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v296: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Oct  8 05:54:55 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:54:55 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ecc0041c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:54:55 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:54:55 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:54:55 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:54:55.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:54:55 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:54:55 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ed8001bd0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:54:55 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:54:55 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:54:55 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:54:55.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:54:55 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:09:54:55] "GET /metrics HTTP/1.1" 200 48333 "" "Prometheus/2.51.0"
Oct  8 05:54:55 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:09:54:55] "GET /metrics HTTP/1.1" 200 48333 "" "Prometheus/2.51.0"
Oct  8 05:54:55 np0005475493 python3.9[154570]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/ovn-metadata-proxy setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  8 05:54:55 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:54:55 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ef800c300 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:54:56 np0005475493 python3.9[154723]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/external/pids setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  8 05:54:56 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:54:56.989Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct  8 05:54:56 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:54:56.990Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct  8 05:54:56 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:54:56.991Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 05:54:57 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v297: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct  8 05:54:57 np0005475493 python3.9[154874]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  8 05:54:57 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[119746]: 08/10/2025 09:54:57 : epoch 68e633f5 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ec8003240 fd 39 proxy ignored for local
Oct  8 05:54:57 np0005475493 kernel: ganesha.nfsd[153795]: segfault at 50 ip 00007f9fb1e1132e sp 00007f9f6f7fd210 error 4 in libntirpc.so.5.8[7f9fb1df6000+2c000] likely on CPU 4 (core 0, socket 4)
Oct  8 05:54:57 np0005475493 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Oct  8 05:54:57 np0005475493 systemd[1]: Started Process Core Dump (PID 154875/UID 0).
Oct  8 05:54:57 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:54:57 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:54:57 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:54:57.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:54:57 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:54:57 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:54:57 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:54:57.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:54:58 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 05:54:58 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:54:58.863Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 05:54:59 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v298: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Oct  8 05:54:59 np0005475493 python3.9[155030]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Oct  8 05:54:59 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:54:59 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 05:54:59 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:54:59.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 05:54:59 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:54:59 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 05:54:59 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:54:59.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 05:54:59 np0005475493 systemd-coredump[154876]: Process 119750 (ganesha.nfsd) of user 0 dumped core.#012#012Stack trace of thread 72:#012#0  0x00007f9fb1e1132e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)#012ELF object binary architecture: AMD x86-64
Oct  8 05:54:59 np0005475493 systemd[1]: systemd-coredump@2-154875-0.service: Deactivated successfully.
Oct  8 05:54:59 np0005475493 systemd[1]: systemd-coredump@2-154875-0.service: Consumed 1.116s CPU time.
Oct  8 05:54:59 np0005475493 podman[155036]: 2025-10-08 09:54:59.913091322 +0000 UTC m=+0.023738184 container died 5648b6991b3670625e89da113426ec69b90cf4710ec8879fe91ecbad4e23ac94 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Oct  8 05:54:59 np0005475493 systemd[1]: var-lib-containers-storage-overlay-db3a225b971325494d9fd29d607fb50df99f9768861ae0ade871ec413a763e24-merged.mount: Deactivated successfully.
Oct  8 05:54:59 np0005475493 podman[155036]: 2025-10-08 09:54:59.963142828 +0000 UTC m=+0.073789670 container remove 5648b6991b3670625e89da113426ec69b90cf4710ec8879fe91ecbad4e23ac94 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  8 05:54:59 np0005475493 systemd[1]: ceph-787292cc-8154-50c4-9e00-e9be3e817149@nfs.cephfs.2.0.compute-0.uynkmx.service: Main process exited, code=exited, status=139/n/a
Oct  8 05:55:00 np0005475493 systemd[1]: ceph-787292cc-8154-50c4-9e00-e9be3e817149@nfs.cephfs.2.0.compute-0.uynkmx.service: Failed with result 'exit-code'.
Oct  8 05:55:00 np0005475493 systemd[1]: ceph-787292cc-8154-50c4-9e00-e9be3e817149@nfs.cephfs.2.0.compute-0.uynkmx.service: Consumed 1.737s CPU time.
Oct  8 05:55:01 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v299: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct  8 05:55:01 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:55:01 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:55:01 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:55:01.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:55:01 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:55:01 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:55:01 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:55:01.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:55:02 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  8 05:55:02 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  8 05:55:03 np0005475493 python3.9[155256]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/ovn_metadata_haproxy_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  8 05:55:03 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v300: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct  8 05:55:03 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:55:03 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:55:03 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:55:03.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:55:03 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 05:55:03 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:55:03 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 05:55:03 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:55:03.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 05:55:03 np0005475493 python3.9[155378]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/ovn_metadata_haproxy_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759917302.4005055-218-163373868727011/.source follow=False _original_basename=haproxy.j2 checksum=4bca74f6ee0b6450624d22997e2f90c414d58b44 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  8 05:55:04 np0005475493 python3.9[155529]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  8 05:55:04 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct  8 05:55:04 np0005475493 python3.9[155716]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/haproxy-kill mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759917303.8786292-263-113786529512115/.source follow=False _original_basename=kill-script.j2 checksum=2dfb5489f491f61b95691c3bf95fa1fe48ff3700 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  8 05:55:04 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:55:04 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct  8 05:55:05 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:55:05 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v301: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Oct  8 05:55:05 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:55:05 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:55:05 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:55:05.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:55:05 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-haproxy-nfs-cephfs-compute-0-cwhopp[96588]: [WARNING] 280/095505 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct  8 05:55:05 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:55:05 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:55:05 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:55:05.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:55:05 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:09:55:05] "GET /metrics HTTP/1.1" 200 48334 "" "Prometheus/2.51.0"
Oct  8 05:55:05 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:09:55:05] "GET /metrics HTTP/1.1" 200 48334 "" "Prometheus/2.51.0"
Oct  8 05:55:05 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  8 05:55:05 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  8 05:55:05 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct  8 05:55:05 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  8 05:55:05 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v302: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 192 B/s rd, 0 op/s
Oct  8 05:55:05 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct  8 05:55:06 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:55:06 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct  8 05:55:06 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:55:06 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:55:06 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  8 05:55:06 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:55:06 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct  8 05:55:06 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  8 05:55:06 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct  8 05:55:06 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  8 05:55:06 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  8 05:55:06 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  8 05:55:06 np0005475493 podman[155926]: 2025-10-08 09:55:06.73686239 +0000 UTC m=+0.022580796 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 05:55:06 np0005475493 podman[155926]: 2025-10-08 09:55:06.977695623 +0000 UTC m=+0.263414039 container create 772e192ee02c40b4c7aa4a6d442633b4e0e732a8688e22babff9017a5efece23 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_moore, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Oct  8 05:55:06 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:55:06.992Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 05:55:07 np0005475493 systemd[1]: Started libpod-conmon-772e192ee02c40b4c7aa4a6d442633b4e0e732a8688e22babff9017a5efece23.scope.
Oct  8 05:55:07 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:55:07 np0005475493 podman[155926]: 2025-10-08 09:55:07.086252283 +0000 UTC m=+0.371970709 container init 772e192ee02c40b4c7aa4a6d442633b4e0e732a8688e22babff9017a5efece23 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_moore, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Oct  8 05:55:07 np0005475493 podman[155926]: 2025-10-08 09:55:07.097783678 +0000 UTC m=+0.383502064 container start 772e192ee02c40b4c7aa4a6d442633b4e0e732a8688e22babff9017a5efece23 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_moore, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Oct  8 05:55:07 np0005475493 podman[155926]: 2025-10-08 09:55:07.101894339 +0000 UTC m=+0.387612805 container attach 772e192ee02c40b4c7aa4a6d442633b4e0e732a8688e22babff9017a5efece23 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_moore, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct  8 05:55:07 np0005475493 crazy_moore[155995]: 167 167
Oct  8 05:55:07 np0005475493 systemd[1]: libpod-772e192ee02c40b4c7aa4a6d442633b4e0e732a8688e22babff9017a5efece23.scope: Deactivated successfully.
Oct  8 05:55:07 np0005475493 podman[155926]: 2025-10-08 09:55:07.106405562 +0000 UTC m=+0.392123978 container died 772e192ee02c40b4c7aa4a6d442633b4e0e732a8688e22babff9017a5efece23 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_moore, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Oct  8 05:55:07 np0005475493 systemd[1]: var-lib-containers-storage-overlay-c5275d6fc693e3a690756e628229236925fe6c42fc89616b9ee0c943c03f096d-merged.mount: Deactivated successfully.
Oct  8 05:55:07 np0005475493 podman[155926]: 2025-10-08 09:55:07.150807069 +0000 UTC m=+0.436525445 container remove 772e192ee02c40b4c7aa4a6d442633b4e0e732a8688e22babff9017a5efece23 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_moore, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid)
Oct  8 05:55:07 np0005475493 systemd[1]: libpod-conmon-772e192ee02c40b4c7aa4a6d442633b4e0e732a8688e22babff9017a5efece23.scope: Deactivated successfully.
Oct  8 05:55:07 np0005475493 python3.9[155991]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  8 05:55:07 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:55:07 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:55:07 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  8 05:55:07 np0005475493 podman[156029]: 2025-10-08 09:55:07.312256456 +0000 UTC m=+0.047639821 container create 936d62689babdadab649675d70bc95f029d7bbdcd9df0e3e6b6916d947c06c32 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_gould, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct  8 05:55:07 np0005475493 systemd[1]: Started libpod-conmon-936d62689babdadab649675d70bc95f029d7bbdcd9df0e3e6b6916d947c06c32.scope.
Oct  8 05:55:07 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:55:07 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e4da14cfb55722ee00547fa1731caaa72c2abde0f63715cc2d8231fb41bf2c6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  8 05:55:07 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e4da14cfb55722ee00547fa1731caaa72c2abde0f63715cc2d8231fb41bf2c6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 05:55:07 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e4da14cfb55722ee00547fa1731caaa72c2abde0f63715cc2d8231fb41bf2c6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 05:55:07 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e4da14cfb55722ee00547fa1731caaa72c2abde0f63715cc2d8231fb41bf2c6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  8 05:55:07 np0005475493 podman[156029]: 2025-10-08 09:55:07.292375466 +0000 UTC m=+0.027758841 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 05:55:07 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e4da14cfb55722ee00547fa1731caaa72c2abde0f63715cc2d8231fb41bf2c6/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  8 05:55:07 np0005475493 podman[156029]: 2025-10-08 09:55:07.400500332 +0000 UTC m=+0.135883687 container init 936d62689babdadab649675d70bc95f029d7bbdcd9df0e3e6b6916d947c06c32 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_gould, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  8 05:55:07 np0005475493 podman[156029]: 2025-10-08 09:55:07.408348531 +0000 UTC m=+0.143731886 container start 936d62689babdadab649675d70bc95f029d7bbdcd9df0e3e6b6916d947c06c32 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_gould, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct  8 05:55:07 np0005475493 podman[156029]: 2025-10-08 09:55:07.412190963 +0000 UTC m=+0.147574308 container attach 936d62689babdadab649675d70bc95f029d7bbdcd9df0e3e6b6916d947c06c32 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_gould, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325)
Oct  8 05:55:07 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:55:07 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:55:07 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:55:07.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:55:07 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:55:07 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:55:07 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:55:07.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:55:07 np0005475493 brave_gould[156045]: --> passed data devices: 0 physical, 1 LVM
Oct  8 05:55:07 np0005475493 brave_gould[156045]: --> All data devices are unavailable
Oct  8 05:55:07 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v303: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 192 B/s rd, 0 op/s
Oct  8 05:55:07 np0005475493 systemd[1]: libpod-936d62689babdadab649675d70bc95f029d7bbdcd9df0e3e6b6916d947c06c32.scope: Deactivated successfully.
Oct  8 05:55:07 np0005475493 podman[156029]: 2025-10-08 09:55:07.792219687 +0000 UTC m=+0.527603032 container died 936d62689babdadab649675d70bc95f029d7bbdcd9df0e3e6b6916d947c06c32 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_gould, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Oct  8 05:55:07 np0005475493 systemd[1]: var-lib-containers-storage-overlay-3e4da14cfb55722ee00547fa1731caaa72c2abde0f63715cc2d8231fb41bf2c6-merged.mount: Deactivated successfully.
Oct  8 05:55:07 np0005475493 podman[156029]: 2025-10-08 09:55:07.837884724 +0000 UTC m=+0.573268059 container remove 936d62689babdadab649675d70bc95f029d7bbdcd9df0e3e6b6916d947c06c32 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_gould, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Oct  8 05:55:07 np0005475493 systemd[1]: libpod-conmon-936d62689babdadab649675d70bc95f029d7bbdcd9df0e3e6b6916d947c06c32.scope: Deactivated successfully.
Oct  8 05:55:07 np0005475493 python3.9[156134]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  8 05:55:08 np0005475493 podman[156238]: 2025-10-08 09:55:08.312140755 +0000 UTC m=+0.035608530 container create ae9c871f858890ffaf3ec6f5606c6522a099cc613e58b11a9a8c981c065cef05 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_sammet, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  8 05:55:08 np0005475493 systemd[1]: Started libpod-conmon-ae9c871f858890ffaf3ec6f5606c6522a099cc613e58b11a9a8c981c065cef05.scope.
Oct  8 05:55:08 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:55:08 np0005475493 podman[156238]: 2025-10-08 09:55:08.38110557 +0000 UTC m=+0.104573365 container init ae9c871f858890ffaf3ec6f5606c6522a099cc613e58b11a9a8c981c065cef05 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_sammet, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  8 05:55:08 np0005475493 podman[156238]: 2025-10-08 09:55:08.387200813 +0000 UTC m=+0.110668588 container start ae9c871f858890ffaf3ec6f5606c6522a099cc613e58b11a9a8c981c065cef05 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_sammet, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Oct  8 05:55:08 np0005475493 musing_sammet[156254]: 167 167
Oct  8 05:55:08 np0005475493 systemd[1]: libpod-ae9c871f858890ffaf3ec6f5606c6522a099cc613e58b11a9a8c981c065cef05.scope: Deactivated successfully.
Oct  8 05:55:08 np0005475493 podman[156238]: 2025-10-08 09:55:08.297601344 +0000 UTC m=+0.021069139 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 05:55:08 np0005475493 podman[156238]: 2025-10-08 09:55:08.443002422 +0000 UTC m=+0.166470197 container attach ae9c871f858890ffaf3ec6f5606c6522a099cc613e58b11a9a8c981c065cef05 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_sammet, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct  8 05:55:08 np0005475493 podman[156238]: 2025-10-08 09:55:08.443639382 +0000 UTC m=+0.167107167 container died ae9c871f858890ffaf3ec6f5606c6522a099cc613e58b11a9a8c981c065cef05 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_sammet, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct  8 05:55:08 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 05:55:08 np0005475493 systemd[1]: var-lib-containers-storage-overlay-dc6939af4df8ad3e2b0428a6a0984091b7111e45887a22ec4ae8472293ad9332-merged.mount: Deactivated successfully.
Oct  8 05:55:08 np0005475493 podman[156238]: 2025-10-08 09:55:08.556461557 +0000 UTC m=+0.279929332 container remove ae9c871f858890ffaf3ec6f5606c6522a099cc613e58b11a9a8c981c065cef05 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_sammet, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  8 05:55:08 np0005475493 systemd[1]: libpod-conmon-ae9c871f858890ffaf3ec6f5606c6522a099cc613e58b11a9a8c981c065cef05.scope: Deactivated successfully.
Oct  8 05:55:08 np0005475493 podman[156278]: 2025-10-08 09:55:08.723696908 +0000 UTC m=+0.039289156 container create 717ed0e0e820daac3c13327327d8e087749b78a4e2ef2ac033dacf972685b386 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_lewin, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid)
Oct  8 05:55:08 np0005475493 systemd[1]: Started libpod-conmon-717ed0e0e820daac3c13327327d8e087749b78a4e2ef2ac033dacf972685b386.scope.
Oct  8 05:55:08 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:55:08 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c38fa597c088b022fa10193c9ecd515fb2c03c6a28e9e65f08c7adff8ac9aebf/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  8 05:55:08 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c38fa597c088b022fa10193c9ecd515fb2c03c6a28e9e65f08c7adff8ac9aebf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 05:55:08 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c38fa597c088b022fa10193c9ecd515fb2c03c6a28e9e65f08c7adff8ac9aebf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 05:55:08 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c38fa597c088b022fa10193c9ecd515fb2c03c6a28e9e65f08c7adff8ac9aebf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  8 05:55:08 np0005475493 podman[156278]: 2025-10-08 09:55:08.804372514 +0000 UTC m=+0.119964792 container init 717ed0e0e820daac3c13327327d8e087749b78a4e2ef2ac033dacf972685b386 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_lewin, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  8 05:55:08 np0005475493 podman[156278]: 2025-10-08 09:55:08.707928657 +0000 UTC m=+0.023520925 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 05:55:08 np0005475493 podman[156278]: 2025-10-08 09:55:08.814106692 +0000 UTC m=+0.129698940 container start 717ed0e0e820daac3c13327327d8e087749b78a4e2ef2ac033dacf972685b386 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_lewin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  8 05:55:08 np0005475493 podman[156278]: 2025-10-08 09:55:08.817463399 +0000 UTC m=+0.133055647 container attach 717ed0e0e820daac3c13327327d8e087749b78a4e2ef2ac033dacf972685b386 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_lewin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid)
Oct  8 05:55:08 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:55:08.866Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct  8 05:55:08 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:55:08.868Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct  8 05:55:08 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:55:08.868Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct  8 05:55:09 np0005475493 jolly_lewin[156294]: {
Oct  8 05:55:09 np0005475493 jolly_lewin[156294]:    "1": [
Oct  8 05:55:09 np0005475493 jolly_lewin[156294]:        {
Oct  8 05:55:09 np0005475493 jolly_lewin[156294]:            "devices": [
Oct  8 05:55:09 np0005475493 jolly_lewin[156294]:                "/dev/loop3"
Oct  8 05:55:09 np0005475493 jolly_lewin[156294]:            ],
Oct  8 05:55:09 np0005475493 jolly_lewin[156294]:            "lv_name": "ceph_lv0",
Oct  8 05:55:09 np0005475493 jolly_lewin[156294]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  8 05:55:09 np0005475493 jolly_lewin[156294]:            "lv_size": "21470642176",
Oct  8 05:55:09 np0005475493 jolly_lewin[156294]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=787292cc-8154-50c4-9e00-e9be3e817149,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=85fe3e7b-5e0f-4a19-934c-310215b2e933,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct  8 05:55:09 np0005475493 jolly_lewin[156294]:            "lv_uuid": "znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj",
Oct  8 05:55:09 np0005475493 jolly_lewin[156294]:            "name": "ceph_lv0",
Oct  8 05:55:09 np0005475493 jolly_lewin[156294]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  8 05:55:09 np0005475493 jolly_lewin[156294]:            "tags": {
Oct  8 05:55:09 np0005475493 jolly_lewin[156294]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  8 05:55:09 np0005475493 jolly_lewin[156294]:                "ceph.block_uuid": "znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj",
Oct  8 05:55:09 np0005475493 jolly_lewin[156294]:                "ceph.cephx_lockbox_secret": "",
Oct  8 05:55:09 np0005475493 jolly_lewin[156294]:                "ceph.cluster_fsid": "787292cc-8154-50c4-9e00-e9be3e817149",
Oct  8 05:55:09 np0005475493 jolly_lewin[156294]:                "ceph.cluster_name": "ceph",
Oct  8 05:55:09 np0005475493 jolly_lewin[156294]:                "ceph.crush_device_class": "",
Oct  8 05:55:09 np0005475493 jolly_lewin[156294]:                "ceph.encrypted": "0",
Oct  8 05:55:09 np0005475493 jolly_lewin[156294]:                "ceph.osd_fsid": "85fe3e7b-5e0f-4a19-934c-310215b2e933",
Oct  8 05:55:09 np0005475493 jolly_lewin[156294]:                "ceph.osd_id": "1",
Oct  8 05:55:09 np0005475493 jolly_lewin[156294]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  8 05:55:09 np0005475493 jolly_lewin[156294]:                "ceph.type": "block",
Oct  8 05:55:09 np0005475493 jolly_lewin[156294]:                "ceph.vdo": "0",
Oct  8 05:55:09 np0005475493 jolly_lewin[156294]:                "ceph.with_tpm": "0"
Oct  8 05:55:09 np0005475493 jolly_lewin[156294]:            },
Oct  8 05:55:09 np0005475493 jolly_lewin[156294]:            "type": "block",
Oct  8 05:55:09 np0005475493 jolly_lewin[156294]:            "vg_name": "ceph_vg0"
Oct  8 05:55:09 np0005475493 jolly_lewin[156294]:        }
Oct  8 05:55:09 np0005475493 jolly_lewin[156294]:    ]
Oct  8 05:55:09 np0005475493 jolly_lewin[156294]: }
Oct  8 05:55:09 np0005475493 systemd[1]: libpod-717ed0e0e820daac3c13327327d8e087749b78a4e2ef2ac033dacf972685b386.scope: Deactivated successfully.
Oct  8 05:55:09 np0005475493 podman[156278]: 2025-10-08 09:55:09.080164104 +0000 UTC m=+0.395756352 container died 717ed0e0e820daac3c13327327d8e087749b78a4e2ef2ac033dacf972685b386 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_lewin, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct  8 05:55:09 np0005475493 systemd[1]: var-lib-containers-storage-overlay-c38fa597c088b022fa10193c9ecd515fb2c03c6a28e9e65f08c7adff8ac9aebf-merged.mount: Deactivated successfully.
Oct  8 05:55:09 np0005475493 podman[156278]: 2025-10-08 09:55:09.126263256 +0000 UTC m=+0.441855504 container remove 717ed0e0e820daac3c13327327d8e087749b78a4e2ef2ac033dacf972685b386 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_lewin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct  8 05:55:09 np0005475493 systemd[1]: libpod-conmon-717ed0e0e820daac3c13327327d8e087749b78a4e2ef2ac033dacf972685b386.scope: Deactivated successfully.
Oct  8 05:55:09 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:55:09 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:55:09 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:55:09.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:55:09 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:55:09 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 05:55:09 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:55:09.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 05:55:09 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v304: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 96 B/s rd, 0 op/s
Oct  8 05:55:09 np0005475493 podman[156483]: 2025-10-08 09:55:09.774113937 +0000 UTC m=+0.043719956 container create dba9e23d49984b7dbdb63ae7730272e7bf0f7e27dbe3672aeeaf7650d104232b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_meninsky, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  8 05:55:09 np0005475493 systemd[1]: Started libpod-conmon-dba9e23d49984b7dbdb63ae7730272e7bf0f7e27dbe3672aeeaf7650d104232b.scope.
Oct  8 05:55:09 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:55:09 np0005475493 podman[156483]: 2025-10-08 09:55:09.757717798 +0000 UTC m=+0.027323837 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 05:55:09 np0005475493 podman[156483]: 2025-10-08 09:55:09.901715592 +0000 UTC m=+0.171321631 container init dba9e23d49984b7dbdb63ae7730272e7bf0f7e27dbe3672aeeaf7650d104232b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_meninsky, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct  8 05:55:09 np0005475493 podman[156483]: 2025-10-08 09:55:09.910604343 +0000 UTC m=+0.180210362 container start dba9e23d49984b7dbdb63ae7730272e7bf0f7e27dbe3672aeeaf7650d104232b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_meninsky, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Oct  8 05:55:09 np0005475493 wizardly_meninsky[156500]: 167 167
Oct  8 05:55:09 np0005475493 systemd[1]: libpod-dba9e23d49984b7dbdb63ae7730272e7bf0f7e27dbe3672aeeaf7650d104232b.scope: Deactivated successfully.
Oct  8 05:55:09 np0005475493 podman[156483]: 2025-10-08 09:55:09.929563534 +0000 UTC m=+0.199169573 container attach dba9e23d49984b7dbdb63ae7730272e7bf0f7e27dbe3672aeeaf7650d104232b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_meninsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Oct  8 05:55:09 np0005475493 podman[156483]: 2025-10-08 09:55:09.929985098 +0000 UTC m=+0.199591127 container died dba9e23d49984b7dbdb63ae7730272e7bf0f7e27dbe3672aeeaf7650d104232b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_meninsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  8 05:55:10 np0005475493 systemd[1]: var-lib-containers-storage-overlay-7b253cdaeff5c92458f6635e451cea3b8d934f9e6f76d36e177933cb5b29e1d3-merged.mount: Deactivated successfully.
Oct  8 05:55:10 np0005475493 podman[156483]: 2025-10-08 09:55:10.051697154 +0000 UTC m=+0.321303163 container remove dba9e23d49984b7dbdb63ae7730272e7bf0f7e27dbe3672aeeaf7650d104232b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_meninsky, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325)
Oct  8 05:55:10 np0005475493 systemd[1]: libpod-conmon-dba9e23d49984b7dbdb63ae7730272e7bf0f7e27dbe3672aeeaf7650d104232b.scope: Deactivated successfully.
Oct  8 05:55:10 np0005475493 podman[156600]: 2025-10-08 09:55:10.233965281 +0000 UTC m=+0.041699873 container create f5c787b3121de947449f36ea83f210de2f6f321e9cb4d10eac3704f1d41ec2c8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_franklin, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  8 05:55:10 np0005475493 systemd[1]: ceph-787292cc-8154-50c4-9e00-e9be3e817149@nfs.cephfs.2.0.compute-0.uynkmx.service: Scheduled restart job, restart counter is at 3.
Oct  8 05:55:10 np0005475493 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.uynkmx for 787292cc-8154-50c4-9e00-e9be3e817149.
Oct  8 05:55:10 np0005475493 systemd[1]: ceph-787292cc-8154-50c4-9e00-e9be3e817149@nfs.cephfs.2.0.compute-0.uynkmx.service: Consumed 1.737s CPU time.
Oct  8 05:55:10 np0005475493 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.uynkmx for 787292cc-8154-50c4-9e00-e9be3e817149...
Oct  8 05:55:10 np0005475493 systemd[1]: Started libpod-conmon-f5c787b3121de947449f36ea83f210de2f6f321e9cb4d10eac3704f1d41ec2c8.scope.
Oct  8 05:55:10 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:55:10 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f8308d47fdc2cd9c11e3f9ba9404fef36db2f6b2b154d7cf163c84c343535cc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  8 05:55:10 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f8308d47fdc2cd9c11e3f9ba9404fef36db2f6b2b154d7cf163c84c343535cc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 05:55:10 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f8308d47fdc2cd9c11e3f9ba9404fef36db2f6b2b154d7cf163c84c343535cc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 05:55:10 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f8308d47fdc2cd9c11e3f9ba9404fef36db2f6b2b154d7cf163c84c343535cc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  8 05:55:10 np0005475493 podman[156600]: 2025-10-08 09:55:10.294547241 +0000 UTC m=+0.102281883 container init f5c787b3121de947449f36ea83f210de2f6f321e9cb4d10eac3704f1d41ec2c8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_franklin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  8 05:55:10 np0005475493 podman[156600]: 2025-10-08 09:55:10.305780807 +0000 UTC m=+0.113515399 container start f5c787b3121de947449f36ea83f210de2f6f321e9cb4d10eac3704f1d41ec2c8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_franklin, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  8 05:55:10 np0005475493 podman[156600]: 2025-10-08 09:55:10.215244848 +0000 UTC m=+0.022979470 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 05:55:10 np0005475493 podman[156600]: 2025-10-08 09:55:10.309076471 +0000 UTC m=+0.116811113 container attach f5c787b3121de947449f36ea83f210de2f6f321e9cb4d10eac3704f1d41ec2c8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_franklin, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  8 05:55:10 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-haproxy-nfs-cephfs-compute-0-cwhopp[96588]: [WARNING] 280/095510 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct  8 05:55:10 np0005475493 python3.9[156592]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct  8 05:55:10 np0005475493 podman[156664]: 2025-10-08 09:55:10.465668544 +0000 UTC m=+0.040326618 container create c427e6c11e062f9636a45bc767e0a3cb951225f9153c622ac9e9b72d859be25e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Oct  8 05:55:10 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6201882d2556a974402ebedf55dc29af345432908f34a2728ce3c7ef9e499676/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Oct  8 05:55:10 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6201882d2556a974402ebedf55dc29af345432908f34a2728ce3c7ef9e499676/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 05:55:10 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6201882d2556a974402ebedf55dc29af345432908f34a2728ce3c7ef9e499676/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Oct  8 05:55:10 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6201882d2556a974402ebedf55dc29af345432908f34a2728ce3c7ef9e499676/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.uynkmx-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Oct  8 05:55:10 np0005475493 podman[156664]: 2025-10-08 09:55:10.539650179 +0000 UTC m=+0.114308303 container init c427e6c11e062f9636a45bc767e0a3cb951225f9153c622ac9e9b72d859be25e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  8 05:55:10 np0005475493 podman[156664]: 2025-10-08 09:55:10.447694145 +0000 UTC m=+0.022352249 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 05:55:10 np0005475493 podman[156664]: 2025-10-08 09:55:10.547696404 +0000 UTC m=+0.122354498 container start c427e6c11e062f9636a45bc767e0a3cb951225f9153c622ac9e9b72d859be25e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  8 05:55:10 np0005475493 bash[156664]: c427e6c11e062f9636a45bc767e0a3cb951225f9153c622ac9e9b72d859be25e
Oct  8 05:55:10 np0005475493 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.uynkmx for 787292cc-8154-50c4-9e00-e9be3e817149.
Oct  8 05:55:10 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:55:10 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Oct  8 05:55:10 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:55:10 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Oct  8 05:55:10 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:55:10 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Oct  8 05:55:10 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:55:10 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Oct  8 05:55:10 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:55:10 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Oct  8 05:55:10 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:55:10 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Oct  8 05:55:10 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:55:10 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Oct  8 05:55:10 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:55:10 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  8 05:55:10 np0005475493 lvm[156942]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct  8 05:55:10 np0005475493 lvm[156942]: VG ceph_vg0 finished
Oct  8 05:55:11 np0005475493 frosty_franklin[156617]: {}
Oct  8 05:55:11 np0005475493 systemd[1]: libpod-f5c787b3121de947449f36ea83f210de2f6f321e9cb4d10eac3704f1d41ec2c8.scope: Deactivated successfully.
Oct  8 05:55:11 np0005475493 systemd[1]: libpod-f5c787b3121de947449f36ea83f210de2f6f321e9cb4d10eac3704f1d41ec2c8.scope: Consumed 1.095s CPU time.
Oct  8 05:55:11 np0005475493 podman[156600]: 2025-10-08 09:55:11.04389285 +0000 UTC m=+0.851627452 container died f5c787b3121de947449f36ea83f210de2f6f321e9cb4d10eac3704f1d41ec2c8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_franklin, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct  8 05:55:11 np0005475493 systemd[1]: var-lib-containers-storage-overlay-8f8308d47fdc2cd9c11e3f9ba9404fef36db2f6b2b154d7cf163c84c343535cc-merged.mount: Deactivated successfully.
Oct  8 05:55:11 np0005475493 podman[156600]: 2025-10-08 09:55:11.118537245 +0000 UTC m=+0.926271867 container remove f5c787b3121de947449f36ea83f210de2f6f321e9cb4d10eac3704f1d41ec2c8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_franklin, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Oct  8 05:55:11 np0005475493 systemd[1]: libpod-conmon-f5c787b3121de947449f36ea83f210de2f6f321e9cb4d10eac3704f1d41ec2c8.scope: Deactivated successfully.
Oct  8 05:55:11 np0005475493 python3.9[156940]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  8 05:55:11 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  8 05:55:11 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:55:11 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  8 05:55:11 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:55:11 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:55:11 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:55:11 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:55:11.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:55:11 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:55:11 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:55:11 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:55:11.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:55:11 np0005475493 python3.9[157104]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759917310.6818042-374-120256607137196/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  8 05:55:11 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v305: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 96 B/s rd, 0 op/s
Oct  8 05:55:12 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:55:12 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:55:12 np0005475493 ovn_controller[153187]: 2025-10-08T09:55:12Z|00025|memory|INFO|16512 kB peak resident set size after 29.9 seconds
Oct  8 05:55:12 np0005475493 ovn_controller[153187]: 2025-10-08T09:55:12Z|00026|memory|INFO|idl-cells-OVN_Southbound:273 idl-cells-Open_vSwitch:642 ofctrl_desired_flow_usage-KB:7 ofctrl_installed_flow_usage-KB:5 ofctrl_sb_flow_ref_usage-KB:3
Oct  8 05:55:12 np0005475493 podman[157229]: 2025-10-08 09:55:12.274427277 +0000 UTC m=+0.128708280 container health_status 750e81e9592502f2fa35d047c8ae9bd75eb38ed415148db558454f5281397e7f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct  8 05:55:12 np0005475493 python3.9[157268]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  8 05:55:12 np0005475493 python3.9[157402]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759917311.8980722-374-133798103445745/.source.conf follow=False _original_basename=neutron-ovn-metadata-agent.conf.j2 checksum=8bc979abbe81c2cf3993a225517a7e2483e20443 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  8 05:55:13 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:55:13 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:55:13 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:55:13.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:55:13 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 05:55:13 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:55:13 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 05:55:13 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:55:13.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 05:55:13 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v306: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 771 B/s rd, 289 B/s wr, 1 op/s
Oct  8 05:55:14 np0005475493 python3.9[157554]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  8 05:55:14 np0005475493 python3.9[157675]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759917313.8536303-506-125205628279069/.source.conf _original_basename=10-neutron-metadata.conf follow=False checksum=ca7d4d155f5b812fab1a3b70e34adb495d291b8d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  8 05:55:15 np0005475493 python3.9[157826]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  8 05:55:15 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:55:15 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:55:15 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:55:15.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:55:15 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:55:15 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 05:55:15 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:55:15.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 05:55:15 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:09:55:15] "GET /metrics HTTP/1.1" 200 48334 "" "Prometheus/2.51.0"
Oct  8 05:55:15 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:09:55:15] "GET /metrics HTTP/1.1" 200 48334 "" "Prometheus/2.51.0"
Oct  8 05:55:15 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v307: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 675 B/s rd, 289 B/s wr, 1 op/s
Oct  8 05:55:16 np0005475493 python3.9[157948]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759917315.063814-506-251721765152198/.source.conf _original_basename=05-nova-metadata.conf follow=False checksum=a14d6b38898a379cd37fc0bf365d17f10859446f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  8 05:55:16 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:55:16 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  8 05:55:16 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:55:16 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 05:55:16 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:55:16 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  8 05:55:16 np0005475493 python3.9[158098]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  8 05:55:16 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:55:16 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  8 05:55:16 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:55:16 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  8 05:55:16 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:55:16 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 05:55:16 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:55:16.994Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 05:55:17 np0005475493 python3.9[158253]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  8 05:55:17 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:55:17 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 05:55:17 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:55:17.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 05:55:17 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:55:17 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 05:55:17 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:55:17.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 05:55:17 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v308: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 255 B/s wr, 0 op/s
Oct  8 05:55:17 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  8 05:55:17 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  8 05:55:17 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 05:55:17 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 05:55:18 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 05:55:18 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 05:55:18 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 05:55:18 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 05:55:18 np0005475493 python3.9[158406]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  8 05:55:18 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 05:55:18 np0005475493 python3.9[158484]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  8 05:55:18 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:55:18.869Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 05:55:19 np0005475493 python3.9[158637]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  8 05:55:19 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:55:19 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:55:19 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:55:19.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:55:19 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:55:19 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:55:19 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:55:19.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:55:19 np0005475493 python3.9[158715]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  8 05:55:19 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v309: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 767 B/s wr, 3 op/s
Oct  8 05:55:20 np0005475493 python3.9[158868]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:55:21 np0005475493 python3.9[159020]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  8 05:55:21 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:55:21 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:55:21 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:55:21.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:55:21 np0005475493 python3.9[159099]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:55:21 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:55:21 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:55:21 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:55:21.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:55:21 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v310: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 767 B/s wr, 3 op/s
Oct  8 05:55:22 np0005475493 python3.9[159277]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  8 05:55:22 np0005475493 python3.9[159355]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:55:22 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:55:22 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct  8 05:55:22 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:55:22 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Oct  8 05:55:22 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:55:22 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Oct  8 05:55:22 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:55:22 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Oct  8 05:55:22 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:55:22 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Oct  8 05:55:22 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:55:22 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Oct  8 05:55:22 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:55:22 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Oct  8 05:55:22 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:55:22 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct  8 05:55:22 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:55:22 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct  8 05:55:22 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:55:22 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct  8 05:55:22 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:55:22 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Oct  8 05:55:22 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:55:22 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct  8 05:55:22 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:55:22 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Oct  8 05:55:22 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:55:22 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Oct  8 05:55:22 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:55:22 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Oct  8 05:55:22 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:55:22 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Oct  8 05:55:22 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:55:22 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Oct  8 05:55:22 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:55:22 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Oct  8 05:55:22 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:55:22 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Oct  8 05:55:22 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:55:22 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Oct  8 05:55:22 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:55:22 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Oct  8 05:55:22 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:55:22 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Oct  8 05:55:22 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:55:22 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Oct  8 05:55:22 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:55:22 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Oct  8 05:55:22 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:55:22 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Oct  8 05:55:22 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:55:22 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Oct  8 05:55:22 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:55:22 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Oct  8 05:55:22 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:55:22 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  8 05:55:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:55:23 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a6c000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:55:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:55:23 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a58001970 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:55:23 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:55:23 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 05:55:23 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:55:23.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 05:55:23 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 05:55:23 np0005475493 python3.9[159520]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  8 05:55:23 np0005475493 systemd[1]: Reloading.
Oct  8 05:55:23 np0005475493 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  8 05:55:23 np0005475493 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  8 05:55:23 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:55:23 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:55:23 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:55:23.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:55:23 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v311: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 3.7 KiB/s rd, 1.5 KiB/s wr, 5 op/s
Oct  8 05:55:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:55:23 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a40000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:55:24 np0005475493 python3.9[159714]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  8 05:55:25 np0005475493 python3.9[159792]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:55:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:55:25 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a3c000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:55:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:55:25 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a64001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:55:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-haproxy-nfs-cephfs-compute-0-cwhopp[96588]: [WARNING] 280/095525 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct  8 05:55:25 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:55:25 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:55:25 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:55:25.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:55:25 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:55:25 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:55:25 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:55:25.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:55:25 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:09:55:25] "GET /metrics HTTP/1.1" 200 48332 "" "Prometheus/2.51.0"
Oct  8 05:55:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:09:55:25] "GET /metrics HTTP/1.1" 200 48332 "" "Prometheus/2.51.0"
Oct  8 05:55:25 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v312: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 3.1 KiB/s rd, 1.2 KiB/s wr, 4 op/s
Oct  8 05:55:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:55:25 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  8 05:55:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:55:25 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 05:55:25 np0005475493 python3.9[159945]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  8 05:55:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:55:25 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a58002290 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:55:26 np0005475493 python3.9[160024]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:55:26 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:55:26.995Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 05:55:27 np0005475493 python3.9[160176]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  8 05:55:27 np0005475493 systemd[1]: Reloading.
Oct  8 05:55:27 np0005475493 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  8 05:55:27 np0005475493 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  8 05:55:27 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:55:27 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a400016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:55:27 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:55:27 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a3c0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:55:27 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:55:27 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 05:55:27 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:55:27.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 05:55:27 np0005475493 systemd[1]: Starting Create netns directory...
Oct  8 05:55:27 np0005475493 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct  8 05:55:27 np0005475493 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct  8 05:55:27 np0005475493 systemd[1]: Finished Create netns directory.
Oct  8 05:55:27 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:55:27 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:55:27 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:55:27.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:55:27 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v313: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 3.1 KiB/s rd, 1.2 KiB/s wr, 4 op/s
Oct  8 05:55:27 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:55:27 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a640025c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:55:28 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 05:55:28 np0005475493 python3.9[160372]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  8 05:55:28 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:55:28.869Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 05:55:28 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:55:28 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct  8 05:55:29 np0005475493 python3.9[160525]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_metadata_agent/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  8 05:55:29 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:55:29 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a58002290 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:55:29 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:55:29 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a400016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:55:29 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:55:29 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000034s ======
Oct  8 05:55:29 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:55:29.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct  8 05:55:29 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:55:29 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000034s ======
Oct  8 05:55:29 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:55:29.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct  8 05:55:29 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v314: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 3.9 KiB/s rd, 1.5 KiB/s wr, 5 op/s
Oct  8 05:55:29 np0005475493 python3.9[160648]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_metadata_agent/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759917328.8205082-959-109578901911704/.source _original_basename=healthcheck follow=False checksum=898a5a1fcd473cf731177fc866e3bd7ebf20a131 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct  8 05:55:29 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:55:29 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a3c0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:55:30 np0005475493 python3.9[160801]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  8 05:55:31 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:55:31 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a640025c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:55:31 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:55:31 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a58002290 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:55:31 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:55:31 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000034s ======
Oct  8 05:55:31 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:55:31.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct  8 05:55:31 np0005475493 python3.9[160954]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_metadata_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  8 05:55:31 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:55:31 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:55:31 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:55:31.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:55:31 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v315: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Oct  8 05:55:31 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:55:31 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a400016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:55:32 np0005475493 python3.9[161078]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_metadata_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1759917331.115463-1034-239505006791185/.source.json _original_basename=.n1kz9a98 follow=False checksum=a908ef151ded3a33ae6c9ac8be72a35e5e33b9dc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:55:32 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-haproxy-nfs-cephfs-compute-0-cwhopp[96588]: [WARNING] 280/095532 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct  8 05:55:32 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  8 05:55:32 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  8 05:55:33 np0005475493 python3.9[161230]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:55:33 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:55:33 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a3c0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:55:33 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:55:33 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a640032d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:55:33 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 05:55:33 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:55:33 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000034s ======
Oct  8 05:55:33 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:55:33.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct  8 05:55:33 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:55:33 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:55:33 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:55:33.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:55:33 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v316: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Oct  8 05:55:33 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:55:33 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a58002290 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:55:35 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:55:35 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a40002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:55:35 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:55:35 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a3c002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:55:35 np0005475493 python3.9[161660]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_pattern=*.json debug=False
Oct  8 05:55:35 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:55:35 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000034s ======
Oct  8 05:55:35 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:55:35.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct  8 05:55:35 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:55:35 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:55:35 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:55:35.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:55:35 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:09:55:35] "GET /metrics HTTP/1.1" 200 48331 "" "Prometheus/2.51.0"
Oct  8 05:55:35 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:09:55:35] "GET /metrics HTTP/1.1" 200 48331 "" "Prometheus/2.51.0"
Oct  8 05:55:35 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v317: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 255 B/s wr, 1 op/s
Oct  8 05:55:35 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:55:35 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a640032d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:55:36 np0005475493 python3.9[161813]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct  8 05:55:36 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:55:36.996Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct  8 05:55:36 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:55:36.997Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct  8 05:55:37 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:55:37 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a58002290 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:55:37 np0005475493 python3.9[161966]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Oct  8 05:55:37 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:55:37 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a40002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:55:37 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:55:37 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:55:37 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:55:37.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:55:37 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:55:37 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:55:37 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:55:37.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:55:37 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v318: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 255 B/s wr, 1 op/s
Oct  8 05:55:37 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:55:37 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a3c002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:55:38 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 05:55:38 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:55:38.871Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 05:55:39 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:55:39 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a64003fe0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:55:39 np0005475493 python3[162146]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_id=ovn_metadata_agent config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Oct  8 05:55:39 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:55:39 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a58002290 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:55:39 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:55:39 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:55:39 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:55:39.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:55:39 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:55:39 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:55:39 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:55:39.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:55:39 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v319: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 255 B/s wr, 1 op/s
Oct  8 05:55:39 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:55:39 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a40002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:55:41 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:55:41 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a3c002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:55:41 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:55:41 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a64003fe0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:55:41 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:55:41 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:55:41 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:55:41.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:55:41 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:55:41 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000034s ======
Oct  8 05:55:41 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:55:41.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct  8 05:55:41 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v320: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 B/s wr, 0 op/s
Oct  8 05:55:41 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:55:41 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a58002290 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:55:43 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:55:43 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a40003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:55:43 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 05:55:43 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:55:43 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a3c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:55:43 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:55:43 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:55:43 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:55:43.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:55:43 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:55:43 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000034s ======
Oct  8 05:55:43 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:55:43.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct  8 05:55:43 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v321: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Oct  8 05:55:43 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:55:43 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a64003fe0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:55:45 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:55:45 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a58002290 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:55:45 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:55:45 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a40003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:55:45 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:55:45 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:55:45 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:55:45.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:55:45 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:55:45 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000034s ======
Oct  8 05:55:45 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:55:45.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct  8 05:55:45 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:09:55:45] "GET /metrics HTTP/1.1" 200 48331 "" "Prometheus/2.51.0"
Oct  8 05:55:45 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:09:55:45] "GET /metrics HTTP/1.1" 200 48331 "" "Prometheus/2.51.0"
Oct  8 05:55:45 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v322: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct  8 05:55:45 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:55:45 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a3c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:55:46 np0005475493 podman[162248]: 2025-10-08 09:55:46.380903152 +0000 UTC m=+3.531715483 container health_status 750e81e9592502f2fa35d047c8ae9bd75eb38ed415148db558454f5281397e7f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller)
Oct  8 05:55:46 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:55:46.998Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct  8 05:55:46 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:55:46.999Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 05:55:47 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:55:47 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a64003fe0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:55:47 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:55:47 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a58002290 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:55:47 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:55:47 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:55:47 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:55:47.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:55:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] Optimize plan auto_2025-10-08_09:55:47
Oct  8 05:55:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  8 05:55:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] do_upmap
Oct  8 05:55:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] pools ['default.rgw.meta', '.rgw.root', 'default.rgw.control', '.nfs', 'default.rgw.log', 'cephfs.cephfs.data', 'images', '.mgr', 'vms', 'volumes', 'cephfs.cephfs.meta', 'backups']
Oct  8 05:55:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] prepared 0/10 upmap changes
Oct  8 05:55:47 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:55:47 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:55:47 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:55:47.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:55:47 np0005475493 podman[162160]: 2025-10-08 09:55:47.769741133 +0000 UTC m=+8.220098988 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct  8 05:55:47 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v323: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct  8 05:55:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] _maybe_adjust
Oct  8 05:55:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 05:55:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  8 05:55:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 05:55:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 05:55:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 05:55:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 05:55:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 05:55:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 05:55:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 05:55:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 05:55:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 05:55:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  8 05:55:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 05:55:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 05:55:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 05:55:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct  8 05:55:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 05:55:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  8 05:55:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 05:55:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 05:55:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 05:55:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  8 05:55:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 05:55:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  8 05:55:47 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  8 05:55:47 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  8 05:55:47 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 05:55:47 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 05:55:47 np0005475493 podman[162347]: 2025-10-08 09:55:47.917295587 +0000 UTC m=+0.048422563 container create 96c17e36c3e57b043a764fc4d64e20610b8683e4881cc1857643eca7aeb37784 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Oct  8 05:55:47 np0005475493 podman[162347]: 2025-10-08 09:55:47.891071063 +0000 UTC m=+0.022198069 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct  8 05:55:47 np0005475493 python3[162146]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_metadata_agent --cgroupns=host --conmon-pidfile /run/ovn_metadata_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d --healthcheck-command /openstack/healthcheck --label config_id=ovn_metadata_agent --label container_name=ovn_metadata_agent --label managed_by=edpm_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']} --log-driver journald --log-level info --network host --pid host --privileged=True --user root --volume /run/openvswitch:/run/openvswitch:z --volume /var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z --volume /run/netns:/run/netns:shared --volume /var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro --volume /var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct  8 05:55:47 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  8 05:55:47 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  8 05:55:47 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  8 05:55:47 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  8 05:55:47 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  8 05:55:47 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:55:47 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a40003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:55:48 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 05:55:48 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 05:55:48 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 05:55:48 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 05:55:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  8 05:55:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  8 05:55:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  8 05:55:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  8 05:55:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  8 05:55:48 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 05:55:48 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:55:48.873Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 05:55:49 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-haproxy-nfs-cephfs-compute-0-cwhopp[96588]: [WARNING] 280/095549 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct  8 05:55:49 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:55:49 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a3c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:55:49 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:55:49 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a64003fe0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:55:49 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:55:49 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:55:49 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:55:49.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:55:49 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:55:49 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:55:49 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:55:49.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:55:49 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v324: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct  8 05:55:49 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:55:49 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a58002290 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:55:50 np0005475493 python3.9[162535]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  8 05:55:50 np0005475493 python3.9[162690]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:55:51 np0005475493 python3.9[162767]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  8 05:55:51 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:55:51 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a40003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:55:51 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:55:51 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a3c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:55:51 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:55:51 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000034s ======
Oct  8 05:55:51 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:55:51.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct  8 05:55:51 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:55:51 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:55:51 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:55:51.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:55:51 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v325: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct  8 05:55:51 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:55:51 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a64003fe0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:55:52 np0005475493 python3.9[162918]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759917351.382558-1298-18874755154254/source dest=/etc/systemd/system/edpm_ovn_metadata_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:55:52 np0005475493 python3.9[162995]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct  8 05:55:52 np0005475493 systemd[1]: Reloading.
Oct  8 05:55:52 np0005475493 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  8 05:55:52 np0005475493 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  8 05:55:52 np0005475493 radosgw[88577]: INFO: RGWReshardLock::lock found lock on reshard.0000000001 to be held by another RGW process; skipping for now
Oct  8 05:55:52 np0005475493 radosgw[88577]: INFO: RGWReshardLock::lock found lock on reshard.0000000003 to be held by another RGW process; skipping for now
Oct  8 05:55:52 np0005475493 radosgw[88577]: INFO: RGWReshardLock::lock found lock on reshard.0000000005 to be held by another RGW process; skipping for now
Oct  8 05:55:52 np0005475493 radosgw[88577]: INFO: RGWReshardLock::lock found lock on reshard.0000000007 to be held by another RGW process; skipping for now
Oct  8 05:55:52 np0005475493 radosgw[88577]: INFO: RGWReshardLock::lock found lock on reshard.0000000009 to be held by another RGW process; skipping for now
Oct  8 05:55:53 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:55:53 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a58002290 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:55:53 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 05:55:53 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:55:53 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a40003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:55:53 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:55:53 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000034s ======
Oct  8 05:55:53 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:55:53.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct  8 05:55:53 np0005475493 python3.9[163106]: ansible-systemd Invoked with state=restarted name=edpm_ovn_metadata_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  8 05:55:53 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:55:53 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:55:53 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:55:53.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:55:53 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v326: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 0 B/s wr, 28 op/s
Oct  8 05:55:53 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:55:53 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a3c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:55:54 np0005475493 systemd[1]: Reloading.
Oct  8 05:55:54 np0005475493 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  8 05:55:54 np0005475493 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  8 05:55:54 np0005475493 systemd[1]: Starting ovn_metadata_agent container...
Oct  8 05:55:55 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:55:55 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af1fbaaea5195f62cd87d30536b3f349b4ffb866cbbd8a6f5bbbf1986b93e338/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff)
Oct  8 05:55:55 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af1fbaaea5195f62cd87d30536b3f349b4ffb866cbbd8a6f5bbbf1986b93e338/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  8 05:55:55 np0005475493 systemd[1]: Started /usr/bin/podman healthcheck run 96c17e36c3e57b043a764fc4d64e20610b8683e4881cc1857643eca7aeb37784.
Oct  8 05:55:55 np0005475493 podman[163153]: 2025-10-08 09:55:55.114531799 +0000 UTC m=+0.161967870 container init 96c17e36c3e57b043a764fc4d64e20610b8683e4881cc1857643eca7aeb37784 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct  8 05:55:55 np0005475493 ovn_metadata_agent[163169]: + sudo -E kolla_set_configs
Oct  8 05:55:55 np0005475493 podman[163153]: 2025-10-08 09:55:55.157001241 +0000 UTC m=+0.204437332 container start 96c17e36c3e57b043a764fc4d64e20610b8683e4881cc1857643eca7aeb37784 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  8 05:55:55 np0005475493 edpm-start-podman-container[163153]: ovn_metadata_agent
Oct  8 05:55:55 np0005475493 podman[163177]: 2025-10-08 09:55:55.239705058 +0000 UTC m=+0.065843710 container health_status 96c17e36c3e57b043a764fc4d64e20610b8683e4881cc1857643eca7aeb37784 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, managed_by=edpm_ansible, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  8 05:55:55 np0005475493 edpm-start-podman-container[163152]: Creating additional drop-in dependency for "ovn_metadata_agent" (96c17e36c3e57b043a764fc4d64e20610b8683e4881cc1857643eca7aeb37784)
Oct  8 05:55:55 np0005475493 systemd[1]: Reloading.
Oct  8 05:55:55 np0005475493 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  8 05:55:55 np0005475493 ovn_metadata_agent[163169]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct  8 05:55:55 np0005475493 ovn_metadata_agent[163169]: INFO:__main__:Validating config file
Oct  8 05:55:55 np0005475493 ovn_metadata_agent[163169]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct  8 05:55:55 np0005475493 ovn_metadata_agent[163169]: INFO:__main__:Copying service configuration files
Oct  8 05:55:55 np0005475493 ovn_metadata_agent[163169]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf
Oct  8 05:55:55 np0005475493 ovn_metadata_agent[163169]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf
Oct  8 05:55:55 np0005475493 ovn_metadata_agent[163169]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf
Oct  8 05:55:55 np0005475493 ovn_metadata_agent[163169]: INFO:__main__:Writing out command to execute
Oct  8 05:55:55 np0005475493 ovn_metadata_agent[163169]: INFO:__main__:Setting permission for /var/lib/neutron
Oct  8 05:55:55 np0005475493 ovn_metadata_agent[163169]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts
Oct  8 05:55:55 np0005475493 ovn_metadata_agent[163169]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy
Oct  8 05:55:55 np0005475493 ovn_metadata_agent[163169]: INFO:__main__:Setting permission for /var/lib/neutron/external
Oct  8 05:55:55 np0005475493 ovn_metadata_agent[163169]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper
Oct  8 05:55:55 np0005475493 ovn_metadata_agent[163169]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill
Oct  8 05:55:55 np0005475493 ovn_metadata_agent[163169]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids
Oct  8 05:55:55 np0005475493 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  8 05:55:55 np0005475493 ovn_metadata_agent[163169]: ++ cat /run_command
Oct  8 05:55:55 np0005475493 ovn_metadata_agent[163169]: + CMD=neutron-ovn-metadata-agent
Oct  8 05:55:55 np0005475493 ovn_metadata_agent[163169]: + ARGS=
Oct  8 05:55:55 np0005475493 ovn_metadata_agent[163169]: + sudo kolla_copy_cacerts
Oct  8 05:55:55 np0005475493 ovn_metadata_agent[163169]: + [[ ! -n '' ]]
Oct  8 05:55:55 np0005475493 ovn_metadata_agent[163169]: + . kolla_extend_start
Oct  8 05:55:55 np0005475493 ovn_metadata_agent[163169]: Running command: 'neutron-ovn-metadata-agent'
Oct  8 05:55:55 np0005475493 ovn_metadata_agent[163169]: + echo 'Running command: '\''neutron-ovn-metadata-agent'\'''
Oct  8 05:55:55 np0005475493 ovn_metadata_agent[163169]: + umask 0022
Oct  8 05:55:55 np0005475493 ovn_metadata_agent[163169]: + exec neutron-ovn-metadata-agent
Oct  8 05:55:55 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:55:55 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a64003fe0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:55:55 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:55:55 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a58002290 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:55:55 np0005475493 systemd[1]: Started ovn_metadata_agent container.
Oct  8 05:55:55 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:55:55 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000034s ======
Oct  8 05:55:55 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:55:55.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct  8 05:55:55 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:09:55:55] "GET /metrics HTTP/1.1" 200 48335 "" "Prometheus/2.51.0"
Oct  8 05:55:55 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:09:55:55] "GET /metrics HTTP/1.1" 200 48335 "" "Prometheus/2.51.0"
Oct  8 05:55:55 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:55:55 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:55:55 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:55:55.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:55:55 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v327: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 0 B/s wr, 28 op/s
Oct  8 05:55:55 np0005475493 systemd[1]: session-53.scope: Deactivated successfully.
Oct  8 05:55:55 np0005475493 systemd[1]: session-53.scope: Consumed 54.302s CPU time.
Oct  8 05:55:55 np0005475493 systemd-logind[798]: Session 53 logged out. Waiting for processes to exit.
Oct  8 05:55:55 np0005475493 systemd-logind[798]: Removed session 53.
Oct  8 05:55:55 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:55:55 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a6c000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:55:57 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:55:57.000Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.351 163175 INFO neutron.common.config [-] Logging enabled!#033[00m
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.352 163175 INFO neutron.common.config [-] /usr/bin/neutron-ovn-metadata-agent version 22.2.2.dev43#033[00m
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.352 163175 DEBUG neutron.common.config [-] command line: /usr/bin/neutron-ovn-metadata-agent setup_logging /usr/lib/python3.9/site-packages/neutron/common/config.py:123#033[00m
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.353 163175 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.353 163175 DEBUG neutron.agent.ovn.metadata_agent [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.353 163175 DEBUG neutron.agent.ovn.metadata_agent [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.353 163175 DEBUG neutron.agent.ovn.metadata_agent [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.353 163175 DEBUG neutron.agent.ovn.metadata_agent [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.353 163175 DEBUG neutron.agent.ovn.metadata_agent [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.354 163175 DEBUG neutron.agent.ovn.metadata_agent [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.354 163175 DEBUG neutron.agent.ovn.metadata_agent [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.354 163175 DEBUG neutron.agent.ovn.metadata_agent [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.354 163175 DEBUG neutron.agent.ovn.metadata_agent [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.354 163175 DEBUG neutron.agent.ovn.metadata_agent [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.354 163175 DEBUG neutron.agent.ovn.metadata_agent [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.355 163175 DEBUG neutron.agent.ovn.metadata_agent [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.355 163175 DEBUG neutron.agent.ovn.metadata_agent [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.355 163175 DEBUG neutron.agent.ovn.metadata_agent [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.355 163175 DEBUG neutron.agent.ovn.metadata_agent [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.355 163175 DEBUG neutron.agent.ovn.metadata_agent [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.355 163175 DEBUG neutron.agent.ovn.metadata_agent [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.355 163175 DEBUG neutron.agent.ovn.metadata_agent [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.356 163175 DEBUG neutron.agent.ovn.metadata_agent [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.356 163175 DEBUG neutron.agent.ovn.metadata_agent [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.356 163175 DEBUG neutron.agent.ovn.metadata_agent [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.356 163175 DEBUG neutron.agent.ovn.metadata_agent [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.356 163175 DEBUG neutron.agent.ovn.metadata_agent [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.356 163175 DEBUG neutron.agent.ovn.metadata_agent [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.356 163175 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.357 163175 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.357 163175 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.357 163175 DEBUG neutron.agent.ovn.metadata_agent [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.357 163175 DEBUG neutron.agent.ovn.metadata_agent [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.357 163175 DEBUG neutron.agent.ovn.metadata_agent [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.357 163175 DEBUG neutron.agent.ovn.metadata_agent [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.357 163175 DEBUG neutron.agent.ovn.metadata_agent [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.358 163175 DEBUG neutron.agent.ovn.metadata_agent [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.358 163175 DEBUG neutron.agent.ovn.metadata_agent [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.358 163175 DEBUG neutron.agent.ovn.metadata_agent [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.358 163175 DEBUG neutron.agent.ovn.metadata_agent [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.358 163175 DEBUG neutron.agent.ovn.metadata_agent [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.358 163175 DEBUG neutron.agent.ovn.metadata_agent [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.359 163175 DEBUG neutron.agent.ovn.metadata_agent [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.359 163175 DEBUG neutron.agent.ovn.metadata_agent [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.359 163175 DEBUG neutron.agent.ovn.metadata_agent [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.359 163175 DEBUG neutron.agent.ovn.metadata_agent [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.359 163175 DEBUG neutron.agent.ovn.metadata_agent [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.359 163175 DEBUG neutron.agent.ovn.metadata_agent [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.359 163175 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.360 163175 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.360 163175 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.360 163175 DEBUG neutron.agent.ovn.metadata_agent [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.360 163175 DEBUG neutron.agent.ovn.metadata_agent [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.360 163175 DEBUG neutron.agent.ovn.metadata_agent [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.360 163175 DEBUG neutron.agent.ovn.metadata_agent [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.360 163175 DEBUG neutron.agent.ovn.metadata_agent [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.360 163175 DEBUG neutron.agent.ovn.metadata_agent [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.361 163175 DEBUG neutron.agent.ovn.metadata_agent [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.361 163175 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.361 163175 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.361 163175 DEBUG neutron.agent.ovn.metadata_agent [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.361 163175 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.361 163175 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.362 163175 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.362 163175 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.362 163175 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.362 163175 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.362 163175 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.362 163175 DEBUG neutron.agent.ovn.metadata_agent [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.362 163175 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.363 163175 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.363 163175 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.363 163175 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.363 163175 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.363 163175 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.363 163175 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.364 163175 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.364 163175 DEBUG neutron.agent.ovn.metadata_agent [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.364 163175 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.364 163175 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.364 163175 DEBUG neutron.agent.ovn.metadata_agent [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.364 163175 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.364 163175 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.365 163175 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.365 163175 DEBUG neutron.agent.ovn.metadata_agent [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.365 163175 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.365 163175 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.365 163175 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.365 163175 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.366 163175 DEBUG neutron.agent.ovn.metadata_agent [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.366 163175 DEBUG neutron.agent.ovn.metadata_agent [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.366 163175 DEBUG neutron.agent.ovn.metadata_agent [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.366 163175 DEBUG neutron.agent.ovn.metadata_agent [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.366 163175 DEBUG neutron.agent.ovn.metadata_agent [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.366 163175 DEBUG neutron.agent.ovn.metadata_agent [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.366 163175 DEBUG neutron.agent.ovn.metadata_agent [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.367 163175 DEBUG neutron.agent.ovn.metadata_agent [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.367 163175 DEBUG neutron.agent.ovn.metadata_agent [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.367 163175 DEBUG neutron.agent.ovn.metadata_agent [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.367 163175 DEBUG neutron.agent.ovn.metadata_agent [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.367 163175 DEBUG neutron.agent.ovn.metadata_agent [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.367 163175 DEBUG neutron.agent.ovn.metadata_agent [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.367 163175 DEBUG neutron.agent.ovn.metadata_agent [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.367 163175 DEBUG neutron.agent.ovn.metadata_agent [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.368 163175 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.368 163175 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.368 163175 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.368 163175 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.368 163175 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.368 163175 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.369 163175 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.369 163175 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.369 163175 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.369 163175 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.369 163175 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.369 163175 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.369 163175 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.370 163175 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.370 163175 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.370 163175 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.370 163175 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.370 163175 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.370 163175 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.371 163175 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.371 163175 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.371 163175 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.371 163175 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.371 163175 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.371 163175 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.371 163175 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.372 163175 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.372 163175 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.372 163175 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.372 163175 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.372 163175 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.372 163175 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.372 163175 DEBUG neutron.agent.ovn.metadata_agent [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.373 163175 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.373 163175 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.373 163175 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.373 163175 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.373 163175 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.373 163175 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.374 163175 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.374 163175 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.374 163175 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.374 163175 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.374 163175 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.374 163175 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.374 163175 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.375 163175 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.375 163175 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.375 163175 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.375 163175 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.375 163175 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.375 163175 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.376 163175 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.376 163175 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.376 163175 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.376 163175 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.376 163175 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.376 163175 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.376 163175 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.377 163175 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.377 163175 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.377 163175 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.377 163175 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.377 163175 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.377 163175 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.377 163175 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.378 163175 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.378 163175 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.378 163175 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.378 163175 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.378 163175 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.378 163175 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.379 163175 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.379 163175 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.379 163175 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.379 163175 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.379 163175 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.379 163175 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.379 163175 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.380 163175 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.380 163175 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.380 163175 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.380 163175 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.380 163175 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.380 163175 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.380 163175 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.381 163175 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.381 163175 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.381 163175 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.381 163175 DEBUG neutron.agent.ovn.metadata_agent [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.381 163175 DEBUG neutron.agent.ovn.metadata_agent [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.381 163175 DEBUG neutron.agent.ovn.metadata_agent [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.382 163175 DEBUG neutron.agent.ovn.metadata_agent [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.382 163175 DEBUG neutron.agent.ovn.metadata_agent [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.382 163175 DEBUG neutron.agent.ovn.metadata_agent [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.382 163175 DEBUG neutron.agent.ovn.metadata_agent [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.382 163175 DEBUG neutron.agent.ovn.metadata_agent [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.382 163175 DEBUG neutron.agent.ovn.metadata_agent [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.382 163175 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.382 163175 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.382 163175 DEBUG neutron.agent.ovn.metadata_agent [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.383 163175 DEBUG neutron.agent.ovn.metadata_agent [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.383 163175 DEBUG neutron.agent.ovn.metadata_agent [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.383 163175 DEBUG neutron.agent.ovn.metadata_agent [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.383 163175 DEBUG neutron.agent.ovn.metadata_agent [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.383 163175 DEBUG neutron.agent.ovn.metadata_agent [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.383 163175 DEBUG neutron.agent.ovn.metadata_agent [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.383 163175 DEBUG neutron.agent.ovn.metadata_agent [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.383 163175 DEBUG neutron.agent.ovn.metadata_agent [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.383 163175 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.384 163175 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.384 163175 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.384 163175 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.384 163175 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.384 163175 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.384 163175 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.384 163175 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.384 163175 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.384 163175 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.384 163175 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.385 163175 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.385 163175 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.385 163175 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.385 163175 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.385 163175 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.385 163175 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.385 163175 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.385 163175 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.385 163175 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.385 163175 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.386 163175 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.386 163175 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.386 163175 DEBUG neutron.agent.ovn.metadata_agent [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.386 163175 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.386 163175 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.386 163175 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.386 163175 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.386 163175 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.386 163175 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.386 163175 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.387 163175 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.387 163175 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.387 163175 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.387 163175 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.387 163175 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.387 163175 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.387 163175 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.387 163175 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.387 163175 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.388 163175 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.388 163175 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.388 163175 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.388 163175 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.388 163175 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.388 163175 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.388 163175 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.388 163175 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.388 163175 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.388 163175 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.389 163175 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.389 163175 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.389 163175 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.389 163175 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.389 163175 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.389 163175 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.389 163175 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.389 163175 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.389 163175 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.390 163175 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.390 163175 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.390 163175 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.390 163175 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.390 163175 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.390 163175 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.390 163175 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.390 163175 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.390 163175 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.391 163175 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.391 163175 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.391 163175 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.391 163175 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.391 163175 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.391 163175 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.391 163175 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.391 163175 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.391 163175 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.392 163175 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.392 163175 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.392 163175 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.392 163175 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.392 163175 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.392 163175 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.392 163175 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.392 163175 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.392 163175 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.392 163175 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.393 163175 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.393 163175 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.393 163175 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.393 163175 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.401 163175 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.401 163175 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.401 163175 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.401 163175 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connecting...#033[00m
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.402 163175 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connected#033[00m
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.416 163175 DEBUG neutron.agent.ovn.metadata.agent [-] Loaded chassis name 26869918-b723-425c-a2e1-0d697f3d0fec (UUID: 26869918-b723-425c-a2e1-0d697f3d0fec) and ovn bridge br-int. _load_config /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:309#033[00m
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.435 163175 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry#033[00m
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.436 163175 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87#033[00m
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.436 163175 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.436 163175 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Chassis_Private.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.438 163175 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...#033[00m
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.443 163175 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected#033[00m
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.449 163175 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched CREATE: ChassisPrivateCreateEvent(events=('create',), table='Chassis_Private', conditions=(('name', '=', '26869918-b723-425c-a2e1-0d697f3d0fec'),), old_conditions=None), priority=20 to row=Chassis_Private(chassis=[<ovs.db.idl.Row object at 0x7f191f105a60>], external_ids={}, name=26869918-b723-425c-a2e1-0d697f3d0fec, nb_cfg_timestamp=1759917290430, nb_cfg=1) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.450 163175 DEBUG neutron_lib.callbacks.manager [-] Subscribe: <bound method MetadataProxyHandler.post_fork_initialize of <neutron.agent.ovn.metadata.server.MetadataProxyHandler object at 0x7f191f102f40>> process after_init 55550000, False subscribe /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:52#033[00m
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.451 163175 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.451 163175 DEBUG oslo_concurrency.lockutils [-] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.451 163175 DEBUG oslo_concurrency.lockutils [-] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.451 163175 INFO oslo_service.service [-] Starting 1 workers#033[00m
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.456 163175 DEBUG oslo_service.service [-] Started child 163284 _start_child /usr/lib/python3.9/site-packages/oslo_service/service.py:575#033[00m
Oct  8 05:55:57 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:55:57 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a3c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.459 163284 DEBUG neutron_lib.callbacks.manager [-] Publish callbacks ['neutron.agent.ovn.metadata.server.MetadataProxyHandler.post_fork_initialize-159970'] for process (None), after_init _notify_loop /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:184#033[00m
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.459 163175 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.namespace_cmd', '--privsep_sock_path', '/tmp/tmp0cqyf9jh/privsep.sock']#033[00m
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.484 163284 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry#033[00m
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.485 163284 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87#033[00m
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.485 163284 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.504 163284 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...#033[00m
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.511 163284 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected#033[00m
Oct  8 05:55:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:57.520 163284 INFO eventlet.wsgi.server [-] (163284) wsgi starting up on http:/var/lib/neutron/metadata_proxy#033[00m
Oct  8 05:55:57 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:55:57 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a64003fe0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:55:57 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:55:57 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:55:57 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:55:57.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:55:57 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:55:57 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:55:57 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:55:57.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:55:57 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v328: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 0 B/s wr, 28 op/s
Oct  8 05:55:57 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:55:57 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a58002290 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:55:58 np0005475493 kernel: capability: warning: `privsep-helper' uses deprecated v2 capabilities in a way that may be insecure
Oct  8 05:55:58 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:58.166 163175 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap#033[00m
Oct  8 05:55:58 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:58.166 163175 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmp0cqyf9jh/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362#033[00m
Oct  8 05:55:58 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:58.028 163290 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Oct  8 05:55:58 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:58.035 163290 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Oct  8 05:55:58 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:58.039 163290 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none#033[00m
Oct  8 05:55:58 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:58.039 163290 INFO oslo.privsep.daemon [-] privsep daemon running as pid 163290#033[00m
Oct  8 05:55:58 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:58.169 163290 DEBUG oslo.privsep.daemon [-] privsep: reply[a4580327-dafc-4d09-8781-93f599a4178e]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  8 05:55:58 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 05:55:58 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:58.649 163290 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  8 05:55:58 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:58.649 163290 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  8 05:55:58 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:58.649 163290 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  8 05:55:58 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:55:58.873Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.169 163290 DEBUG oslo.privsep.daemon [-] privsep: reply[cf03aed7-3e52-42ea-b3da-151763769da7]: (4, []) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.171 163175 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbAddCommand(_result=None, table=Chassis_Private, record=26869918-b723-425c-a2e1-0d697f3d0fec, column=external_ids, values=({'neutron:ovn-metadata-id': '2ded52bb-1ae7-5b18-bd1d-b28ab5fb6948'},)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.181 163175 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=26869918-b723-425c-a2e1-0d697f3d0fec, col_values=(('external_ids', {'neutron:ovn-bridge': 'br-int'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.188 163175 DEBUG oslo_service.service [-] Full set of CONF: wait /usr/lib/python3.9/site-packages/oslo_service/service.py:649
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.188 163175 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.188 163175 DEBUG oslo_service.service [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.188 163175 DEBUG oslo_service.service [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.188 163175 DEBUG oslo_service.service [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.188 163175 DEBUG oslo_service.service [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.188 163175 DEBUG oslo_service.service [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.189 163175 DEBUG oslo_service.service [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.189 163175 DEBUG oslo_service.service [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.189 163175 DEBUG oslo_service.service [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.189 163175 DEBUG oslo_service.service [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.189 163175 DEBUG oslo_service.service [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.189 163175 DEBUG oslo_service.service [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.189 163175 DEBUG oslo_service.service [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.189 163175 DEBUG oslo_service.service [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.189 163175 DEBUG oslo_service.service [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.190 163175 DEBUG oslo_service.service [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.190 163175 DEBUG oslo_service.service [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.190 163175 DEBUG oslo_service.service [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.190 163175 DEBUG oslo_service.service [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.190 163175 DEBUG oslo_service.service [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.190 163175 DEBUG oslo_service.service [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.190 163175 DEBUG oslo_service.service [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.190 163175 DEBUG oslo_service.service [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.191 163175 DEBUG oslo_service.service [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.191 163175 DEBUG oslo_service.service [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.191 163175 DEBUG oslo_service.service [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.191 163175 DEBUG oslo_service.service [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.191 163175 DEBUG oslo_service.service [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.191 163175 DEBUG oslo_service.service [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.191 163175 DEBUG oslo_service.service [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.191 163175 DEBUG oslo_service.service [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.191 163175 DEBUG oslo_service.service [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.192 163175 DEBUG oslo_service.service [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.192 163175 DEBUG oslo_service.service [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.192 163175 DEBUG oslo_service.service [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.192 163175 DEBUG oslo_service.service [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.192 163175 DEBUG oslo_service.service [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.192 163175 DEBUG oslo_service.service [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.192 163175 DEBUG oslo_service.service [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.192 163175 DEBUG oslo_service.service [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.193 163175 DEBUG oslo_service.service [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.193 163175 DEBUG oslo_service.service [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.193 163175 DEBUG oslo_service.service [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.193 163175 DEBUG oslo_service.service [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.193 163175 DEBUG oslo_service.service [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.193 163175 DEBUG oslo_service.service [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.193 163175 DEBUG oslo_service.service [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.193 163175 DEBUG oslo_service.service [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.193 163175 DEBUG oslo_service.service [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.194 163175 DEBUG oslo_service.service [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.194 163175 DEBUG oslo_service.service [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.194 163175 DEBUG oslo_service.service [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.194 163175 DEBUG oslo_service.service [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.194 163175 DEBUG oslo_service.service [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.194 163175 DEBUG oslo_service.service [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.194 163175 DEBUG oslo_service.service [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.194 163175 DEBUG oslo_service.service [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.194 163175 DEBUG oslo_service.service [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.194 163175 DEBUG oslo_service.service [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.194 163175 DEBUG oslo_service.service [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.195 163175 DEBUG oslo_service.service [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.195 163175 DEBUG oslo_service.service [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.195 163175 DEBUG oslo_service.service [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.195 163175 DEBUG oslo_service.service [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.195 163175 DEBUG oslo_service.service [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.195 163175 DEBUG oslo_service.service [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.195 163175 DEBUG oslo_service.service [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.195 163175 DEBUG oslo_service.service [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.195 163175 DEBUG oslo_service.service [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.196 163175 DEBUG oslo_service.service [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.196 163175 DEBUG oslo_service.service [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.196 163175 DEBUG oslo_service.service [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.196 163175 DEBUG oslo_service.service [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.196 163175 DEBUG oslo_service.service [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.196 163175 DEBUG oslo_service.service [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.196 163175 DEBUG oslo_service.service [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.196 163175 DEBUG oslo_service.service [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.196 163175 DEBUG oslo_service.service [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.197 163175 DEBUG oslo_service.service [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.197 163175 DEBUG oslo_service.service [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.197 163175 DEBUG oslo_service.service [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.197 163175 DEBUG oslo_service.service [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.197 163175 DEBUG oslo_service.service [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.197 163175 DEBUG oslo_service.service [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.197 163175 DEBUG oslo_service.service [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.197 163175 DEBUG oslo_service.service [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.197 163175 DEBUG oslo_service.service [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.198 163175 DEBUG oslo_service.service [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.198 163175 DEBUG oslo_service.service [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.198 163175 DEBUG oslo_service.service [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.198 163175 DEBUG oslo_service.service [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.198 163175 DEBUG oslo_service.service [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.198 163175 DEBUG oslo_service.service [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.198 163175 DEBUG oslo_service.service [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.198 163175 DEBUG oslo_service.service [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.198 163175 DEBUG oslo_service.service [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.198 163175 DEBUG oslo_service.service [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.199 163175 DEBUG oslo_service.service [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.199 163175 DEBUG oslo_service.service [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.199 163175 DEBUG oslo_service.service [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.199 163175 DEBUG oslo_service.service [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.199 163175 DEBUG oslo_service.service [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.199 163175 DEBUG oslo_service.service [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.199 163175 DEBUG oslo_service.service [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.199 163175 DEBUG oslo_service.service [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.199 163175 DEBUG oslo_service.service [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.199 163175 DEBUG oslo_service.service [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.200 163175 DEBUG oslo_service.service [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.200 163175 DEBUG oslo_service.service [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.200 163175 DEBUG oslo_service.service [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.200 163175 DEBUG oslo_service.service [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.200 163175 DEBUG oslo_service.service [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.200 163175 DEBUG oslo_service.service [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.200 163175 DEBUG oslo_service.service [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.200 163175 DEBUG oslo_service.service [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.201 163175 DEBUG oslo_service.service [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.201 163175 DEBUG oslo_service.service [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.201 163175 DEBUG oslo_service.service [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.201 163175 DEBUG oslo_service.service [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.201 163175 DEBUG oslo_service.service [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.201 163175 DEBUG oslo_service.service [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.201 163175 DEBUG oslo_service.service [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.201 163175 DEBUG oslo_service.service [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.201 163175 DEBUG oslo_service.service [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.202 163175 DEBUG oslo_service.service [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.202 163175 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.202 163175 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.202 163175 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.202 163175 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.202 163175 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.202 163175 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.202 163175 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.202 163175 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.202 163175 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.203 163175 DEBUG oslo_service.service [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.203 163175 DEBUG oslo_service.service [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.203 163175 DEBUG oslo_service.service [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.203 163175 DEBUG oslo_service.service [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.203 163175 DEBUG oslo_service.service [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.203 163175 DEBUG oslo_service.service [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.203 163175 DEBUG oslo_service.service [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.203 163175 DEBUG oslo_service.service [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.203 163175 DEBUG oslo_service.service [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.204 163175 DEBUG oslo_service.service [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.204 163175 DEBUG oslo_service.service [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.204 163175 DEBUG oslo_service.service [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.204 163175 DEBUG oslo_service.service [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.204 163175 DEBUG oslo_service.service [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.204 163175 DEBUG oslo_service.service [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.204 163175 DEBUG oslo_service.service [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.204 163175 DEBUG oslo_service.service [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.204 163175 DEBUG oslo_service.service [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.204 163175 DEBUG oslo_service.service [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.204 163175 DEBUG oslo_service.service [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.205 163175 DEBUG oslo_service.service [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.205 163175 DEBUG oslo_service.service [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.205 163175 DEBUG oslo_service.service [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.205 163175 DEBUG oslo_service.service [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.205 163175 DEBUG oslo_service.service [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.205 163175 DEBUG oslo_service.service [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.205 163175 DEBUG oslo_service.service [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.205 163175 DEBUG oslo_service.service [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.205 163175 DEBUG oslo_service.service [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.205 163175 DEBUG oslo_service.service [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.205 163175 DEBUG oslo_service.service [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.206 163175 DEBUG oslo_service.service [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.206 163175 DEBUG oslo_service.service [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.206 163175 DEBUG oslo_service.service [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.206 163175 DEBUG oslo_service.service [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.206 163175 DEBUG oslo_service.service [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.206 163175 DEBUG oslo_service.service [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.206 163175 DEBUG oslo_service.service [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.206 163175 DEBUG oslo_service.service [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.206 163175 DEBUG oslo_service.service [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.206 163175 DEBUG oslo_service.service [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.207 163175 DEBUG oslo_service.service [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.207 163175 DEBUG oslo_service.service [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.207 163175 DEBUG oslo_service.service [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.207 163175 DEBUG oslo_service.service [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.207 163175 DEBUG oslo_service.service [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.207 163175 DEBUG oslo_service.service [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.207 163175 DEBUG oslo_service.service [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.207 163175 DEBUG oslo_service.service [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.207 163175 DEBUG oslo_service.service [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.208 163175 DEBUG oslo_service.service [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.208 163175 DEBUG oslo_service.service [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.208 163175 DEBUG oslo_service.service [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.208 163175 DEBUG oslo_service.service [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.208 163175 DEBUG oslo_service.service [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.208 163175 DEBUG oslo_service.service [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.208 163175 DEBUG oslo_service.service [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.208 163175 DEBUG oslo_service.service [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.208 163175 DEBUG oslo_service.service [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.209 163175 DEBUG oslo_service.service [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.209 163175 DEBUG oslo_service.service [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.209 163175 DEBUG oslo_service.service [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.209 163175 DEBUG oslo_service.service [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.209 163175 DEBUG oslo_service.service [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.209 163175 DEBUG oslo_service.service [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.209 163175 DEBUG oslo_service.service [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.209 163175 DEBUG oslo_service.service [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.209 163175 DEBUG oslo_service.service [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.209 163175 DEBUG oslo_service.service [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.210 163175 DEBUG oslo_service.service [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.210 163175 DEBUG oslo_service.service [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.210 163175 DEBUG oslo_service.service [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.210 163175 DEBUG oslo_service.service [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.210 163175 DEBUG oslo_service.service [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.210 163175 DEBUG oslo_service.service [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.210 163175 DEBUG oslo_service.service [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.210 163175 DEBUG oslo_service.service [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.210 163175 DEBUG oslo_service.service [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.210 163175 DEBUG oslo_service.service [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.210 163175 DEBUG oslo_service.service [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.211 163175 DEBUG oslo_service.service [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.211 163175 DEBUG oslo_service.service [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.211 163175 DEBUG oslo_service.service [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.211 163175 DEBUG oslo_service.service [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.211 163175 DEBUG oslo_service.service [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.211 163175 DEBUG oslo_service.service [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.211 163175 DEBUG oslo_service.service [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.211 163175 DEBUG oslo_service.service [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.211 163175 DEBUG oslo_service.service [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.211 163175 DEBUG oslo_service.service [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.212 163175 DEBUG oslo_service.service [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.212 163175 DEBUG oslo_service.service [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.212 163175 DEBUG oslo_service.service [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.212 163175 DEBUG oslo_service.service [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.212 163175 DEBUG oslo_service.service [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.212 163175 DEBUG oslo_service.service [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.212 163175 DEBUG oslo_service.service [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.212 163175 DEBUG oslo_service.service [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.212 163175 DEBUG oslo_service.service [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.212 163175 DEBUG oslo_service.service [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.213 163175 DEBUG oslo_service.service [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.213 163175 DEBUG oslo_service.service [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.213 163175 DEBUG oslo_service.service [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.213 163175 DEBUG oslo_service.service [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.213 163175 DEBUG oslo_service.service [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.213 163175 DEBUG oslo_service.service [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.213 163175 DEBUG oslo_service.service [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.213 163175 DEBUG oslo_service.service [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.213 163175 DEBUG oslo_service.service [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.214 163175 DEBUG oslo_service.service [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.214 163175 DEBUG oslo_service.service [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.214 163175 DEBUG oslo_service.service [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.214 163175 DEBUG oslo_service.service [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.214 163175 DEBUG oslo_service.service [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.214 163175 DEBUG oslo_service.service [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.214 163175 DEBUG oslo_service.service [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.214 163175 DEBUG oslo_service.service [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.214 163175 DEBUG oslo_service.service [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.214 163175 DEBUG oslo_service.service [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.215 163175 DEBUG oslo_service.service [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.215 163175 DEBUG oslo_service.service [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.215 163175 DEBUG oslo_service.service [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.215 163175 DEBUG oslo_service.service [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.215 163175 DEBUG oslo_service.service [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.215 163175 DEBUG oslo_service.service [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.215 163175 DEBUG oslo_service.service [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.216 163175 DEBUG oslo_service.service [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.216 163175 DEBUG oslo_service.service [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.217 163175 DEBUG oslo_service.service [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.217 163175 DEBUG oslo_service.service [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.217 163175 DEBUG oslo_service.service [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.217 163175 DEBUG oslo_service.service [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.217 163175 DEBUG oslo_service.service [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.217 163175 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.217 163175 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.217 163175 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.217 163175 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.218 163175 DEBUG oslo_service.service [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.218 163175 DEBUG oslo_service.service [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.218 163175 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.218 163175 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.218 163175 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.218 163175 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.218 163175 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.218 163175 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.218 163175 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.219 163175 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.219 163175 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.219 163175 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.219 163175 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.219 163175 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.219 163175 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.219 163175 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.219 163175 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.219 163175 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.219 163175 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.220 163175 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.220 163175 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.220 163175 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.220 163175 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.220 163175 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.220 163175 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.220 163175 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.220 163175 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.220 163175 DEBUG oslo_service.service [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.220 163175 DEBUG oslo_service.service [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.221 163175 DEBUG oslo_service.service [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.221 163175 DEBUG oslo_service.service [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 05:55:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:55:59.221 163175 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Oct  8 05:55:59 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:55:59 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a6c001d70 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:55:59 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:55:59 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a3c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:55:59 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:55:59 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:55:59 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:55:59.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:55:59 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:55:59 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:55:59 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:55:59.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:55:59 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v329: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 97 KiB/s rd, 0 B/s wr, 162 op/s
Oct  8 05:55:59 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:55:59 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a64003fe0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:56:00 np0005475493 systemd-logind[798]: New session 54 of user zuul.
Oct  8 05:56:00 np0005475493 systemd[1]: Started Session 54 of User zuul.
Oct  8 05:56:01 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:56:01 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a58002290 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:56:01 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:56:01 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a6c001d70 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:56:01 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:56:01 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000034s ======
Oct  8 05:56:01 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:56:01.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct  8 05:56:01 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:56:01 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:56:01 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:56:01.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:56:01 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v330: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 97 KiB/s rd, 0 B/s wr, 162 op/s
Oct  8 05:56:01 np0005475493 python3.9[163451]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  8 05:56:01 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:56:01 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a3c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:56:02 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:56:02 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  8 05:56:02 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  8 05:56:02 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  8 05:56:03 np0005475493 python3.9[163633]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --filter name=^nova_virtlogd$ --format \{\{.Names\}\} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  8 05:56:03 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:56:03 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a64003fe0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:56:03 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 05:56:03 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:56:03 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a58002290 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:56:03 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:56:03 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:56:03 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:56:03.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:56:03 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:56:03 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000034s ======
Oct  8 05:56:03 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:56:03.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct  8 05:56:03 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v331: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 99 KiB/s rd, 597 B/s wr, 164 op/s
Oct  8 05:56:03 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:56:03 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a6c001d70 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:56:04 np0005475493 python3.9[163800]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct  8 05:56:04 np0005475493 systemd[1]: Reloading.
Oct  8 05:56:04 np0005475493 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  8 05:56:04 np0005475493 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  8 05:56:05 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:56:05 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a3c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:56:05 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:56:05 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a64003fe0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:56:05 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:56:05 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000034s ======
Oct  8 05:56:05 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:56:05.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct  8 05:56:05 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:56:05 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  8 05:56:05 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:56:05 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 05:56:05 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:56:05 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 05:56:05 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:09:56:05] "GET /metrics HTTP/1.1" 200 48353 "" "Prometheus/2.51.0"
Oct  8 05:56:05 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:09:56:05] "GET /metrics HTTP/1.1" 200 48353 "" "Prometheus/2.51.0"
Oct  8 05:56:05 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:56:05 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:56:05 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:56:05.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:56:05 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v332: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 82 KiB/s rd, 597 B/s wr, 135 op/s
Oct  8 05:56:05 np0005475493 python3.9[163986]: ansible-ansible.builtin.service_facts Invoked
Oct  8 05:56:05 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:56:05 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a58002290 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:56:05 np0005475493 network[164004]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct  8 05:56:06 np0005475493 network[164005]: 'network-scripts' will be removed from distribution in near future.
Oct  8 05:56:06 np0005475493 network[164006]: It is advised to switch to 'NetworkManager' instead for network management.
Oct  8 05:56:07 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:56:07.000Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct  8 05:56:07 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:56:07.001Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 05:56:07 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:56:07 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a6c001d70 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:56:07 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:56:07 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a3c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:56:07 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:56:07 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:56:07 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:56:07.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:56:07 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:56:07 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:56:07 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:56:07.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:56:07 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v333: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 82 KiB/s rd, 597 B/s wr, 135 op/s
Oct  8 05:56:07 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:56:07 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a64003fe0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:56:08 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 05:56:08 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:56:08 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct  8 05:56:08 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:56:08.875Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 05:56:09 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:56:09 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a58002290 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:56:09 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:56:09 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a6c0095a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:56:09 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:56:09 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000034s ======
Oct  8 05:56:09 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:56:09.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct  8 05:56:09 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:56:09 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:56:09 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:56:09.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:56:09 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v334: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 83 KiB/s rd, 1023 B/s wr, 137 op/s
Oct  8 05:56:09 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:56:09 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a3c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:56:10 np0005475493 python3.9[164275]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_libvirt.target state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  8 05:56:11 np0005475493 python3.9[164428]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  8 05:56:11 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:56:11 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a64003fe0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:56:11 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:56:11 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a58002290 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:56:11 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:56:11 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000034s ======
Oct  8 05:56:11 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:56:11.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct  8 05:56:11 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:56:11 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000034s ======
Oct  8 05:56:11 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:56:11.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct  8 05:56:11 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v335: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 1023 B/s wr, 3 op/s
Oct  8 05:56:12 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:56:11 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a6c0095a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:56:12 np0005475493 python3.9[164632]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  8 05:56:12 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  8 05:56:12 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  8 05:56:12 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct  8 05:56:12 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  8 05:56:12 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v336: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.9 KiB/s rd, 1.1 KiB/s wr, 4 op/s
Oct  8 05:56:12 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct  8 05:56:12 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:56:12 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct  8 05:56:12 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:56:12 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct  8 05:56:12 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  8 05:56:12 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct  8 05:56:12 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  8 05:56:12 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  8 05:56:12 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  8 05:56:12 np0005475493 podman[164911]: 2025-10-08 09:56:12.905093143 +0000 UTC m=+0.087824091 container create b1b40713975c0fbbc8e505941777c7f902ee4711c4bad1967c7534cccc934bad (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_zhukovsky, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Oct  8 05:56:12 np0005475493 podman[164911]: 2025-10-08 09:56:12.847600066 +0000 UTC m=+0.030331044 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 05:56:12 np0005475493 systemd[1]: Started libpod-conmon-b1b40713975c0fbbc8e505941777c7f902ee4711c4bad1967c7534cccc934bad.scope.
Oct  8 05:56:12 np0005475493 python3.9[164867]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  8 05:56:12 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:56:13 np0005475493 podman[164911]: 2025-10-08 09:56:13.029109313 +0000 UTC m=+0.211840301 container init b1b40713975c0fbbc8e505941777c7f902ee4711c4bad1967c7534cccc934bad (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_zhukovsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  8 05:56:13 np0005475493 podman[164911]: 2025-10-08 09:56:13.036455921 +0000 UTC m=+0.219186889 container start b1b40713975c0fbbc8e505941777c7f902ee4711c4bad1967c7534cccc934bad (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_zhukovsky, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Oct  8 05:56:13 np0005475493 keen_zhukovsky[164928]: 167 167
Oct  8 05:56:13 np0005475493 systemd[1]: libpod-b1b40713975c0fbbc8e505941777c7f902ee4711c4bad1967c7534cccc934bad.scope: Deactivated successfully.
Oct  8 05:56:13 np0005475493 podman[164911]: 2025-10-08 09:56:13.049814781 +0000 UTC m=+0.232545739 container attach b1b40713975c0fbbc8e505941777c7f902ee4711c4bad1967c7534cccc934bad (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_zhukovsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Oct  8 05:56:13 np0005475493 podman[164911]: 2025-10-08 09:56:13.051533749 +0000 UTC m=+0.234264727 container died b1b40713975c0fbbc8e505941777c7f902ee4711c4bad1967c7534cccc934bad (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_zhukovsky, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  8 05:56:13 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-haproxy-nfs-cephfs-compute-0-cwhopp[96588]: [WARNING] 280/095613 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct  8 05:56:13 np0005475493 systemd[1]: var-lib-containers-storage-overlay-a208a4dda1b6d6a1a8eaf7b6db7d309e7c2faf9c5b25faff836ba1c29c69c404-merged.mount: Deactivated successfully.
Oct  8 05:56:13 np0005475493 podman[164911]: 2025-10-08 09:56:13.130425929 +0000 UTC m=+0.313156867 container remove b1b40713975c0fbbc8e505941777c7f902ee4711c4bad1967c7534cccc934bad (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_zhukovsky, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  8 05:56:13 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  8 05:56:13 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:56:13 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:56:13 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  8 05:56:13 np0005475493 systemd[1]: libpod-conmon-b1b40713975c0fbbc8e505941777c7f902ee4711c4bad1967c7534cccc934bad.scope: Deactivated successfully.
Oct  8 05:56:13 np0005475493 podman[165032]: 2025-10-08 09:56:13.303531113 +0000 UTC m=+0.048760475 container create 031d86a4dba0835495709db30b21a8883d74261e97147e3dcf7fc4e19251ddea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_keldysh, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct  8 05:56:13 np0005475493 systemd[1]: Started libpod-conmon-031d86a4dba0835495709db30b21a8883d74261e97147e3dcf7fc4e19251ddea.scope.
Oct  8 05:56:13 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:56:13 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc0033da69a23924ace93ecb4159135209e866fa32575eb7b78827451f725f75/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  8 05:56:13 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc0033da69a23924ace93ecb4159135209e866fa32575eb7b78827451f725f75/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 05:56:13 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc0033da69a23924ace93ecb4159135209e866fa32575eb7b78827451f725f75/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 05:56:13 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc0033da69a23924ace93ecb4159135209e866fa32575eb7b78827451f725f75/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  8 05:56:13 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc0033da69a23924ace93ecb4159135209e866fa32575eb7b78827451f725f75/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  8 05:56:13 np0005475493 podman[165032]: 2025-10-08 09:56:13.28238968 +0000 UTC m=+0.027619072 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 05:56:13 np0005475493 podman[165032]: 2025-10-08 09:56:13.3856021 +0000 UTC m=+0.130831472 container init 031d86a4dba0835495709db30b21a8883d74261e97147e3dcf7fc4e19251ddea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_keldysh, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  8 05:56:13 np0005475493 podman[165032]: 2025-10-08 09:56:13.399207158 +0000 UTC m=+0.144436520 container start 031d86a4dba0835495709db30b21a8883d74261e97147e3dcf7fc4e19251ddea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_keldysh, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  8 05:56:13 np0005475493 podman[165032]: 2025-10-08 09:56:13.404064612 +0000 UTC m=+0.149293994 container attach 031d86a4dba0835495709db30b21a8883d74261e97147e3dcf7fc4e19251ddea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_keldysh, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  8 05:56:13 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:56:13 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a58002290 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:56:13 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 05:56:13 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:56:13 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a3c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:56:13 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:56:13 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000034s ======
Oct  8 05:56:13 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:56:13.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct  8 05:56:13 np0005475493 jolly_keldysh[165072]: --> passed data devices: 0 physical, 1 LVM
Oct  8 05:56:13 np0005475493 jolly_keldysh[165072]: --> All data devices are unavailable
Oct  8 05:56:13 np0005475493 systemd[1]: libpod-031d86a4dba0835495709db30b21a8883d74261e97147e3dcf7fc4e19251ddea.scope: Deactivated successfully.
Oct  8 05:56:13 np0005475493 podman[165032]: 2025-10-08 09:56:13.734747338 +0000 UTC m=+0.479976740 container died 031d86a4dba0835495709db30b21a8883d74261e97147e3dcf7fc4e19251ddea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_keldysh, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  8 05:56:13 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:56:13 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:56:13 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:56:13.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:56:13 np0005475493 systemd[1]: var-lib-containers-storage-overlay-bc0033da69a23924ace93ecb4159135209e866fa32575eb7b78827451f725f75-merged.mount: Deactivated successfully.
Oct  8 05:56:13 np0005475493 podman[165032]: 2025-10-08 09:56:13.807558832 +0000 UTC m=+0.552788194 container remove 031d86a4dba0835495709db30b21a8883d74261e97147e3dcf7fc4e19251ddea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_keldysh, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Oct  8 05:56:13 np0005475493 systemd[1]: libpod-conmon-031d86a4dba0835495709db30b21a8883d74261e97147e3dcf7fc4e19251ddea.scope: Deactivated successfully.
Oct  8 05:56:13 np0005475493 python3.9[165129]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  8 05:56:14 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:56:14 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a3c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:56:14 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v337: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 489 B/s wr, 2 op/s
Oct  8 05:56:14 np0005475493 podman[165329]: 2025-10-08 09:56:14.382264074 +0000 UTC m=+0.042727592 container create 76b2e80b3611ccd453d6f0cb615e342af8f74b91424e95453dde24f95430570c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_liskov, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True)
Oct  8 05:56:14 np0005475493 systemd[1]: Started libpod-conmon-76b2e80b3611ccd453d6f0cb615e342af8f74b91424e95453dde24f95430570c.scope.
Oct  8 05:56:14 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:56:14 np0005475493 podman[165329]: 2025-10-08 09:56:14.363179289 +0000 UTC m=+0.023642817 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 05:56:14 np0005475493 podman[165329]: 2025-10-08 09:56:14.472734672 +0000 UTC m=+0.133198200 container init 76b2e80b3611ccd453d6f0cb615e342af8f74b91424e95453dde24f95430570c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_liskov, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  8 05:56:14 np0005475493 podman[165329]: 2025-10-08 09:56:14.483611149 +0000 UTC m=+0.144074657 container start 76b2e80b3611ccd453d6f0cb615e342af8f74b91424e95453dde24f95430570c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_liskov, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Oct  8 05:56:14 np0005475493 podman[165329]: 2025-10-08 09:56:14.488002987 +0000 UTC m=+0.148466515 container attach 76b2e80b3611ccd453d6f0cb615e342af8f74b91424e95453dde24f95430570c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_liskov, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  8 05:56:14 np0005475493 angry_liskov[165385]: 167 167
Oct  8 05:56:14 np0005475493 podman[165329]: 2025-10-08 09:56:14.491855307 +0000 UTC m=+0.152318815 container died 76b2e80b3611ccd453d6f0cb615e342af8f74b91424e95453dde24f95430570c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_liskov, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS)
Oct  8 05:56:14 np0005475493 systemd[1]: libpod-76b2e80b3611ccd453d6f0cb615e342af8f74b91424e95453dde24f95430570c.scope: Deactivated successfully.
Oct  8 05:56:14 np0005475493 systemd[1]: var-lib-containers-storage-overlay-470bbf127254c30705b06ea64f1d38a4030cd2c6186d9a2597287044c09747af-merged.mount: Deactivated successfully.
Oct  8 05:56:14 np0005475493 podman[165329]: 2025-10-08 09:56:14.528204182 +0000 UTC m=+0.188667690 container remove 76b2e80b3611ccd453d6f0cb615e342af8f74b91424e95453dde24f95430570c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_liskov, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Oct  8 05:56:14 np0005475493 systemd[1]: libpod-conmon-76b2e80b3611ccd453d6f0cb615e342af8f74b91424e95453dde24f95430570c.scope: Deactivated successfully.
Oct  8 05:56:14 np0005475493 podman[165440]: 2025-10-08 09:56:14.693560195 +0000 UTC m=+0.048440693 container create 6cff789483a09f024d4907d6661f74d214bf9241b7090e7cb4188183f128eef8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_jang, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct  8 05:56:14 np0005475493 systemd[1]: Started libpod-conmon-6cff789483a09f024d4907d6661f74d214bf9241b7090e7cb4188183f128eef8.scope.
Oct  8 05:56:14 np0005475493 podman[165440]: 2025-10-08 09:56:14.672565928 +0000 UTC m=+0.027446436 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 05:56:14 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:56:14 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03f0cbf986aede5ff40a61d53a3ca70373545216308831322d01e0fc4f9531cd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  8 05:56:14 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03f0cbf986aede5ff40a61d53a3ca70373545216308831322d01e0fc4f9531cd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 05:56:14 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03f0cbf986aede5ff40a61d53a3ca70373545216308831322d01e0fc4f9531cd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 05:56:14 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03f0cbf986aede5ff40a61d53a3ca70373545216308831322d01e0fc4f9531cd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  8 05:56:14 np0005475493 podman[165440]: 2025-10-08 09:56:14.791131044 +0000 UTC m=+0.146011542 container init 6cff789483a09f024d4907d6661f74d214bf9241b7090e7cb4188183f128eef8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_jang, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  8 05:56:14 np0005475493 podman[165440]: 2025-10-08 09:56:14.800578673 +0000 UTC m=+0.155459171 container start 6cff789483a09f024d4907d6661f74d214bf9241b7090e7cb4188183f128eef8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_jang, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  8 05:56:14 np0005475493 podman[165440]: 2025-10-08 09:56:14.80404353 +0000 UTC m=+0.158924048 container attach 6cff789483a09f024d4907d6661f74d214bf9241b7090e7cb4188183f128eef8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_jang, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  8 05:56:14 np0005475493 python3.9[165427]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  8 05:56:15 np0005475493 amazing_jang[165456]: {
Oct  8 05:56:15 np0005475493 amazing_jang[165456]:    "1": [
Oct  8 05:56:15 np0005475493 amazing_jang[165456]:        {
Oct  8 05:56:15 np0005475493 amazing_jang[165456]:            "devices": [
Oct  8 05:56:15 np0005475493 amazing_jang[165456]:                "/dev/loop3"
Oct  8 05:56:15 np0005475493 amazing_jang[165456]:            ],
Oct  8 05:56:15 np0005475493 amazing_jang[165456]:            "lv_name": "ceph_lv0",
Oct  8 05:56:15 np0005475493 amazing_jang[165456]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  8 05:56:15 np0005475493 amazing_jang[165456]:            "lv_size": "21470642176",
Oct  8 05:56:15 np0005475493 amazing_jang[165456]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=787292cc-8154-50c4-9e00-e9be3e817149,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=85fe3e7b-5e0f-4a19-934c-310215b2e933,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct  8 05:56:15 np0005475493 amazing_jang[165456]:            "lv_uuid": "znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj",
Oct  8 05:56:15 np0005475493 amazing_jang[165456]:            "name": "ceph_lv0",
Oct  8 05:56:15 np0005475493 amazing_jang[165456]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  8 05:56:15 np0005475493 amazing_jang[165456]:            "tags": {
Oct  8 05:56:15 np0005475493 amazing_jang[165456]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  8 05:56:15 np0005475493 amazing_jang[165456]:                "ceph.block_uuid": "znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj",
Oct  8 05:56:15 np0005475493 amazing_jang[165456]:                "ceph.cephx_lockbox_secret": "",
Oct  8 05:56:15 np0005475493 amazing_jang[165456]:                "ceph.cluster_fsid": "787292cc-8154-50c4-9e00-e9be3e817149",
Oct  8 05:56:15 np0005475493 amazing_jang[165456]:                "ceph.cluster_name": "ceph",
Oct  8 05:56:15 np0005475493 amazing_jang[165456]:                "ceph.crush_device_class": "",
Oct  8 05:56:15 np0005475493 amazing_jang[165456]:                "ceph.encrypted": "0",
Oct  8 05:56:15 np0005475493 amazing_jang[165456]:                "ceph.osd_fsid": "85fe3e7b-5e0f-4a19-934c-310215b2e933",
Oct  8 05:56:15 np0005475493 amazing_jang[165456]:                "ceph.osd_id": "1",
Oct  8 05:56:15 np0005475493 amazing_jang[165456]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  8 05:56:15 np0005475493 amazing_jang[165456]:                "ceph.type": "block",
Oct  8 05:56:15 np0005475493 amazing_jang[165456]:                "ceph.vdo": "0",
Oct  8 05:56:15 np0005475493 amazing_jang[165456]:                "ceph.with_tpm": "0"
Oct  8 05:56:15 np0005475493 amazing_jang[165456]:            },
Oct  8 05:56:15 np0005475493 amazing_jang[165456]:            "type": "block",
Oct  8 05:56:15 np0005475493 amazing_jang[165456]:            "vg_name": "ceph_vg0"
Oct  8 05:56:15 np0005475493 amazing_jang[165456]:        }
Oct  8 05:56:15 np0005475493 amazing_jang[165456]:    ]
Oct  8 05:56:15 np0005475493 amazing_jang[165456]: }
Oct  8 05:56:15 np0005475493 systemd[1]: libpod-6cff789483a09f024d4907d6661f74d214bf9241b7090e7cb4188183f128eef8.scope: Deactivated successfully.
Oct  8 05:56:15 np0005475493 podman[165440]: 2025-10-08 09:56:15.131104534 +0000 UTC m=+0.485985032 container died 6cff789483a09f024d4907d6661f74d214bf9241b7090e7cb4188183f128eef8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_jang, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  8 05:56:15 np0005475493 systemd[1]: var-lib-containers-storage-overlay-03f0cbf986aede5ff40a61d53a3ca70373545216308831322d01e0fc4f9531cd-merged.mount: Deactivated successfully.
Oct  8 05:56:15 np0005475493 podman[165440]: 2025-10-08 09:56:15.251624026 +0000 UTC m=+0.606504524 container remove 6cff789483a09f024d4907d6661f74d214bf9241b7090e7cb4188183f128eef8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_jang, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  8 05:56:15 np0005475493 systemd[1]: libpod-conmon-6cff789483a09f024d4907d6661f74d214bf9241b7090e7cb4188183f128eef8.scope: Deactivated successfully.
Oct  8 05:56:15 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:56:15 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a6c00a2b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:56:15 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:56:15 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a58002290 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:56:15 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:56:15 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:56:15 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:56:15.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:56:15 np0005475493 python3.9[165654]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  8 05:56:15 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:09:56:15] "GET /metrics HTTP/1.1" 200 48353 "" "Prometheus/2.51.0"
Oct  8 05:56:15 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:09:56:15] "GET /metrics HTTP/1.1" 200 48353 "" "Prometheus/2.51.0"
Oct  8 05:56:15 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:56:15 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000034s ======
Oct  8 05:56:15 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:56:15.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct  8 05:56:15 np0005475493 podman[165744]: 2025-10-08 09:56:15.849199238 +0000 UTC m=+0.048075211 container create e03944504efa496146b9ff11567e5150958774156b4a2a637eee582d142dca68 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_colden, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  8 05:56:15 np0005475493 systemd[1]: Started libpod-conmon-e03944504efa496146b9ff11567e5150958774156b4a2a637eee582d142dca68.scope.
Oct  8 05:56:15 np0005475493 podman[165744]: 2025-10-08 09:56:15.829230255 +0000 UTC m=+0.028106238 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 05:56:15 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:56:15 np0005475493 podman[165744]: 2025-10-08 09:56:15.958827423 +0000 UTC m=+0.157703396 container init e03944504efa496146b9ff11567e5150958774156b4a2a637eee582d142dca68 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_colden, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct  8 05:56:15 np0005475493 podman[165744]: 2025-10-08 09:56:15.967127323 +0000 UTC m=+0.166003276 container start e03944504efa496146b9ff11567e5150958774156b4a2a637eee582d142dca68 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_colden, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Oct  8 05:56:15 np0005475493 podman[165744]: 2025-10-08 09:56:15.969895786 +0000 UTC m=+0.168771769 container attach e03944504efa496146b9ff11567e5150958774156b4a2a637eee582d142dca68 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_colden, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  8 05:56:15 np0005475493 nice_colden[165761]: 167 167
Oct  8 05:56:15 np0005475493 systemd[1]: libpod-e03944504efa496146b9ff11567e5150958774156b4a2a637eee582d142dca68.scope: Deactivated successfully.
Oct  8 05:56:15 np0005475493 podman[165744]: 2025-10-08 09:56:15.975103622 +0000 UTC m=+0.173979585 container died e03944504efa496146b9ff11567e5150958774156b4a2a637eee582d142dca68 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_colden, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2)
Oct  8 05:56:16 np0005475493 systemd[1]: var-lib-containers-storage-overlay-75eb451f4fb64bdcab62645d2b8822bd80894d07de50b6dc57f33c9aac46b065-merged.mount: Deactivated successfully.
Oct  8 05:56:16 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:56:16 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a3c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:56:16 np0005475493 podman[165744]: 2025-10-08 09:56:16.017925585 +0000 UTC m=+0.216801538 container remove e03944504efa496146b9ff11567e5150958774156b4a2a637eee582d142dca68 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_colden, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  8 05:56:16 np0005475493 systemd[1]: libpod-conmon-e03944504efa496146b9ff11567e5150958774156b4a2a637eee582d142dca68.scope: Deactivated successfully.
Oct  8 05:56:16 np0005475493 podman[165837]: 2025-10-08 09:56:16.225814562 +0000 UTC m=+0.059542968 container create e645069c862c1da9b473457b5ad140c07f6762e2125836b357279673ca549520 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_knuth, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True)
Oct  8 05:56:16 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v338: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 489 B/s wr, 2 op/s
Oct  8 05:56:16 np0005475493 systemd[1]: Started libpod-conmon-e645069c862c1da9b473457b5ad140c07f6762e2125836b357279673ca549520.scope.
Oct  8 05:56:16 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:56:16 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f698e3dfd969e653e2fab6809402bbeb167644ed6fdf9cb488049d2cd73f6d9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  8 05:56:16 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f698e3dfd969e653e2fab6809402bbeb167644ed6fdf9cb488049d2cd73f6d9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 05:56:16 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f698e3dfd969e653e2fab6809402bbeb167644ed6fdf9cb488049d2cd73f6d9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 05:56:16 np0005475493 podman[165837]: 2025-10-08 09:56:16.207208025 +0000 UTC m=+0.040936441 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 05:56:16 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f698e3dfd969e653e2fab6809402bbeb167644ed6fdf9cb488049d2cd73f6d9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  8 05:56:16 np0005475493 podman[165837]: 2025-10-08 09:56:16.312837386 +0000 UTC m=+0.146565832 container init e645069c862c1da9b473457b5ad140c07f6762e2125836b357279673ca549520 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_knuth, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  8 05:56:16 np0005475493 podman[165837]: 2025-10-08 09:56:16.319427818 +0000 UTC m=+0.153156234 container start e645069c862c1da9b473457b5ad140c07f6762e2125836b357279673ca549520 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_knuth, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Oct  8 05:56:16 np0005475493 podman[165837]: 2025-10-08 09:56:16.323548586 +0000 UTC m=+0.157277032 container attach e645069c862c1da9b473457b5ad140c07f6762e2125836b357279673ca549520 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_knuth, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct  8 05:56:16 np0005475493 python3.9[165940]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:56:16 np0005475493 lvm[166058]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct  8 05:56:16 np0005475493 lvm[166058]: VG ceph_vg0 finished
Oct  8 05:56:17 np0005475493 heuristic_knuth[165853]: {}
Oct  8 05:56:17 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:56:17.001Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct  8 05:56:17 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:56:17.003Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct  8 05:56:17 np0005475493 systemd[1]: libpod-e645069c862c1da9b473457b5ad140c07f6762e2125836b357279673ca549520.scope: Deactivated successfully.
Oct  8 05:56:17 np0005475493 podman[165837]: 2025-10-08 09:56:17.033077932 +0000 UTC m=+0.866806328 container died e645069c862c1da9b473457b5ad140c07f6762e2125836b357279673ca549520 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_knuth, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Oct  8 05:56:17 np0005475493 systemd[1]: libpod-e645069c862c1da9b473457b5ad140c07f6762e2125836b357279673ca549520.scope: Consumed 1.088s CPU time.
Oct  8 05:56:17 np0005475493 systemd[1]: var-lib-containers-storage-overlay-7f698e3dfd969e653e2fab6809402bbeb167644ed6fdf9cb488049d2cd73f6d9-merged.mount: Deactivated successfully.
Oct  8 05:56:17 np0005475493 podman[165837]: 2025-10-08 09:56:17.084255247 +0000 UTC m=+0.917983643 container remove e645069c862c1da9b473457b5ad140c07f6762e2125836b357279673ca549520 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_knuth, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct  8 05:56:17 np0005475493 systemd[1]: libpod-conmon-e645069c862c1da9b473457b5ad140c07f6762e2125836b357279673ca549520.scope: Deactivated successfully.
Oct  8 05:56:17 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  8 05:56:17 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:56:17 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  8 05:56:17 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:56:17 np0005475493 python3.9[166169]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:56:17 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:56:17 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a3c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:56:17 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:56:17 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a6c00a2b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:56:17 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:56:17 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:56:17 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:56:17.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:56:17 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:56:17 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:56:17 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:56:17.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:56:17 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  8 05:56:17 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  8 05:56:17 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 05:56:17 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 05:56:17 np0005475493 podman[166318]: 2025-10-08 09:56:17.908134457 +0000 UTC m=+0.086575919 container health_status 750e81e9592502f2fa35d047c8ae9bd75eb38ed415148db558454f5281397e7f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Oct  8 05:56:18 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:56:18 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a58002290 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:56:18 np0005475493 python3.9[166364]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:56:18 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 05:56:18 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 05:56:18 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 05:56:18 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 05:56:18 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:56:18 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:56:18 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v339: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 489 B/s wr, 2 op/s
Oct  8 05:56:18 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 05:56:18 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:56:18.875Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 05:56:19 np0005475493 python3.9[166522]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:56:19 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:56:19 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a3c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:56:19 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:56:19 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a64003fe0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:56:19 np0005475493 python3.9[166675]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:56:19 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:56:19 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:56:19 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:56:19.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:56:19 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:56:19 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000034s ======
Oct  8 05:56:19 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:56:19.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct  8 05:56:20 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:56:20 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a6c00a2b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:56:20 np0005475493 python3.9[166828]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:56:20 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v340: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 97 B/s rd, 0 B/s wr, 0 op/s
Oct  8 05:56:21 np0005475493 python3.9[166980]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:56:21 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:56:21 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a58002290 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:56:21 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:56:21 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a3c003c30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:56:21 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:56:21 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:56:21 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:56:21.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:56:21 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:56:21 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000034s ======
Oct  8 05:56:21 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:56:21.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct  8 05:56:21 np0005475493 python3.9[167133]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:56:22 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:56:22 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a64003fe0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:56:22 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v341: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 97 B/s rd, 0 B/s wr, 0 op/s
Oct  8 05:56:22 np0005475493 python3.9[167311]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:56:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:56:23 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a6c00a2b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:56:23 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 05:56:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:56:23 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a58002290 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:56:23 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:56:23 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:56:23 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:56:23.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:56:23 np0005475493 python3.9[167465]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:56:23 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:56:23 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:56:23 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:56:23.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:56:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:56:24 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a3c003c50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:56:24 np0005475493 python3.9[167618]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:56:24 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v342: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Oct  8 05:56:24 np0005475493 python3.9[167770]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:56:25 np0005475493 podman[167923]: 2025-10-08 09:56:25.379641963 +0000 UTC m=+0.051825167 container health_status 96c17e36c3e57b043a764fc4d64e20610b8683e4881cc1857643eca7aeb37784 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  8 05:56:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:56:25 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a64003fe0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:56:25 np0005475493 python3.9[167924]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:56:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:56:25 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a6c00a2b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:56:25 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:56:25 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000034s ======
Oct  8 05:56:25 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:56:25.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct  8 05:56:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:09:56:25] "GET /metrics HTTP/1.1" 200 48349 "" "Prometheus/2.51.0"
Oct  8 05:56:25 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:09:56:25] "GET /metrics HTTP/1.1" 200 48349 "" "Prometheus/2.51.0"
Oct  8 05:56:25 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:56:25 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:56:25 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:56:25.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:56:26 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:56:26 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a58002290 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:56:26 np0005475493 python3.9[168095]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:56:26 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v343: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct  8 05:56:27 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:56:27.004Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 05:56:27 np0005475493 python3.9[168248]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  8 05:56:27 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:56:27 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a3c003c70 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:56:27 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:56:27 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a6c00a2b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:56:27 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:56:27 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000034s ======
Oct  8 05:56:27 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:56:27.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct  8 05:56:27 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:56:27 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:56:27 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:56:27.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:56:28 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:56:28 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a40002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:56:28 np0005475493 python3.9[168403]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct  8 05:56:28 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v344: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct  8 05:56:28 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 05:56:28 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:56:28.876Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct  8 05:56:28 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:56:28.877Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct  8 05:56:28 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:56:28.877Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct  8 05:56:29 np0005475493 python3.9[168555]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct  8 05:56:29 np0005475493 systemd[1]: Reloading.
Oct  8 05:56:29 np0005475493 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  8 05:56:29 np0005475493 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  8 05:56:29 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:56:29 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a60000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:56:29 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:56:29 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a3c003c90 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:56:29 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:56:29 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:56:29 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:56:29.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:56:29 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:56:29 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000034s ======
Oct  8 05:56:29 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:56:29.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct  8 05:56:30 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:56:30 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a6c00a2b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:56:30 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v345: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct  8 05:56:30 np0005475493 python3.9[168745]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_libvirt.target _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  8 05:56:30 np0005475493 python3.9[168898]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  8 05:56:31 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:56:31 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a40002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:56:31 np0005475493 python3.9[169052]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  8 05:56:31 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:56:31 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a40002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:56:31 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:56:31 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:56:31 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:56:31.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:56:31 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:56:31 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:56:31 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:56:31.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:56:32 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:56:32 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a3c003cb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:56:32 np0005475493 python3.9[169206]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  8 05:56:32 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v346: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct  8 05:56:32 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  8 05:56:32 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  8 05:56:32 np0005475493 python3.9[169359]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  8 05:56:33 np0005475493 python3.9[169513]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  8 05:56:33 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:56:33 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a6c00a2b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:56:33 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 05:56:33 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:56:33 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a6c00a2b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:56:33 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:56:33 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:56:33 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:56:33.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:56:33 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:56:33 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:56:33 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:56:33.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:56:34 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:56:34 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a60001930 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:56:34 np0005475493 python3.9[169669]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  8 05:56:34 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v347: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Oct  8 05:56:35 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:56:35 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a34000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:56:35 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:56:35 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a3c003d80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:56:35 np0005475493 python3.9[169823]: ansible-ansible.builtin.getent Invoked with database=passwd key=libvirt fail_key=True service=None split=None
Oct  8 05:56:35 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:56:35 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:56:35 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:56:35.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:56:35 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:09:56:35] "GET /metrics HTTP/1.1" 200 48346 "" "Prometheus/2.51.0"
Oct  8 05:56:35 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:09:56:35] "GET /metrics HTTP/1.1" 200 48346 "" "Prometheus/2.51.0"
Oct  8 05:56:35 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:56:35 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:56:35 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:56:35.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:56:36 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:56:36 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a6c00a2b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:56:36 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v348: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct  8 05:56:36 np0005475493 python3.9[169977]: ansible-ansible.builtin.group Invoked with gid=42473 name=libvirt state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Oct  8 05:56:37 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:56:37.004Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct  8 05:56:37 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:56:37.005Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 05:56:37 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-haproxy-nfs-cephfs-compute-0-cwhopp[96588]: [WARNING] 280/095637 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct  8 05:56:37 np0005475493 python3.9[170136]: ansible-ansible.builtin.user Invoked with comment=libvirt user group=libvirt groups=[''] name=libvirt shell=/sbin/nologin state=present uid=42473 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Oct  8 05:56:37 np0005475493 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  8 05:56:37 np0005475493 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  8 05:56:37 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:56:37 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a600022e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:56:37 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:56:37 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a340016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:56:37 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:56:37 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:56:37 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:56:37.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:56:37 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:56:37 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 05:56:37 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:56:37.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 05:56:38 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:56:38 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a3c003da0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:56:38 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v349: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct  8 05:56:38 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 05:56:38 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:56:38.878Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct  8 05:56:38 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:56:38.880Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 05:56:38 np0005475493 python3.9[170298]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  8 05:56:39 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:56:39 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a6c00a2b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:56:39 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:56:39 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a600022e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:56:39 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:56:39 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:56:39 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:56:39.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:56:39 np0005475493 python3.9[170383]: ansible-ansible.legacy.dnf Invoked with name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  8 05:56:39 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:56:39 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:56:39 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:56:39.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:56:40 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:56:40 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a340016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:56:40 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v350: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct  8 05:56:41 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:56:41 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a3c003dc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:56:41 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:56:41 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a6c00a2b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:56:41 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:56:41 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:56:41 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:56:41.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:56:41 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:56:41 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:56:41 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:56:41.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:56:42 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:56:42 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a60003190 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:56:42 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v351: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct  8 05:56:43 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:56:43 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a340016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:56:43 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 05:56:43 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:56:43 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a3c003de0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:56:43 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:56:43 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:56:43 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:56:43.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:56:43 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:56:43 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:56:43 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:56:43.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:56:44 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:56:44 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a3c003de0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:56:44 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v352: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 85 B/s wr, 0 op/s
Oct  8 05:56:45 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:56:45 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a6c00a2b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:56:45 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:56:45 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a60003190 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:56:45 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:56:45 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 05:56:45 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:56:45.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 05:56:45 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:09:56:45] "GET /metrics HTTP/1.1" 200 48346 "" "Prometheus/2.51.0"
Oct  8 05:56:45 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:09:56:45] "GET /metrics HTTP/1.1" 200 48346 "" "Prometheus/2.51.0"
Oct  8 05:56:45 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:56:45 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:56:45 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:56:45.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:56:46 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:56:46 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a34002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:56:46 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:56:46 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  8 05:56:46 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v353: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Oct  8 05:56:47 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:56:47.005Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct  8 05:56:47 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:56:47.006Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 05:56:47 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:56:47 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a3c003f80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:56:47 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:56:47 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a3c003f80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:56:47 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:56:47 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:56:47 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:56:47.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:56:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] Optimize plan auto_2025-10-08_09:56:47
Oct  8 05:56:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  8 05:56:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] do_upmap
Oct  8 05:56:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] pools ['.mgr', 'backups', 'cephfs.cephfs.data', '.nfs', '.rgw.root', 'volumes', 'vms', 'default.rgw.meta', 'default.rgw.control', 'cephfs.cephfs.meta', 'default.rgw.log', 'images']
Oct  8 05:56:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] prepared 0/10 upmap changes
Oct  8 05:56:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] _maybe_adjust
Oct  8 05:56:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 05:56:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  8 05:56:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 05:56:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 05:56:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 05:56:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 05:56:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 05:56:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 05:56:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 05:56:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 05:56:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 05:56:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  8 05:56:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 05:56:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 05:56:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 05:56:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct  8 05:56:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 05:56:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  8 05:56:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 05:56:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 05:56:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 05:56:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  8 05:56:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 05:56:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  8 05:56:47 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  8 05:56:47 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  8 05:56:47 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:56:47 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:56:47 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:56:47.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:56:47 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 05:56:47 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 05:56:47 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  8 05:56:47 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  8 05:56:47 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  8 05:56:47 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  8 05:56:47 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  8 05:56:48 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:56:48 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a60003190 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:56:48 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 05:56:48 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 05:56:48 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 05:56:48 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 05:56:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  8 05:56:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  8 05:56:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  8 05:56:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  8 05:56:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  8 05:56:48 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v354: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Oct  8 05:56:48 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 05:56:48 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:56:48.880Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 05:56:48 np0005475493 podman[170430]: 2025-10-08 09:56:48.920559331 +0000 UTC m=+0.084907236 container health_status 750e81e9592502f2fa35d047c8ae9bd75eb38ed415148db558454f5281397e7f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct  8 05:56:49 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:56:49 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  8 05:56:49 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:56:49 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 05:56:49 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:56:49 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a6c00a2b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:56:49 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:56:49 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a6c00a2b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:56:49 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:56:49 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:56:49 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:56:49.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:56:49 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:56:49 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 05:56:49 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:56:49.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 05:56:50 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:56:50 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a3c003f80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:56:50 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v355: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 597 B/s wr, 2 op/s
Oct  8 05:56:51 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:56:51 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a3c003f80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:56:51 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:56:51 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a3c003f80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:56:51 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:56:51 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 05:56:51 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:56:51.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 05:56:51 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:56:51 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000034s ======
Oct  8 05:56:51 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:56:51.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct  8 05:56:52 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:56:52 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a6c00a2b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:56:52 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:56:52 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct  8 05:56:52 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v356: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 597 B/s wr, 2 op/s
Oct  8 05:56:53 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:56:53 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a40002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:56:53 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 05:56:53 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:56:53 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a40002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:56:53 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:56:53 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:56:53 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:56:53.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:56:53 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:56:53 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000034s ======
Oct  8 05:56:53 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:56:53.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct  8 05:56:54 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:56:54 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a3c003f80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:56:54 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v357: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 1023 B/s wr, 3 op/s
Oct  8 05:56:55 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:56:55 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a6c00a2b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:56:55 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:56:55 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a34003430 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:56:55 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:56:55 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 05:56:55 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:56:55.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 05:56:55 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:09:56:55] "GET /metrics HTTP/1.1" 200 48344 "" "Prometheus/2.51.0"
Oct  8 05:56:55 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:09:56:55] "GET /metrics HTTP/1.1" 200 48344 "" "Prometheus/2.51.0"
Oct  8 05:56:55 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:56:55 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 05:56:55 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:56:55.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 05:56:55 np0005475493 podman[170637]: 2025-10-08 09:56:55.919528604 +0000 UTC m=+0.068177622 container health_status 96c17e36c3e57b043a764fc4d64e20610b8683e4881cc1857643eca7aeb37784 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct  8 05:56:56 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:56:56 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a34003430 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:56:56 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v358: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 938 B/s wr, 2 op/s
Oct  8 05:56:57 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:56:57.007Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 05:56:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:56:57.395 163175 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  8 05:56:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:56:57.396 163175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  8 05:56:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:56:57.396 163175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  8 05:56:57 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:56:57 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a3c003f80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:56:57 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:56:57 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a6c00a2b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:56:57 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:56:57 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000034s ======
Oct  8 05:56:57 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:56:57.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct  8 05:56:57 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:56:57 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:56:57 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:56:57.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:56:58 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:56:58 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a34003430 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:56:58 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v359: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 938 B/s wr, 2 op/s
Oct  8 05:56:58 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 05:56:58 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:56:58.882Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 05:56:59 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-haproxy-nfs-cephfs-compute-0-cwhopp[96588]: [WARNING] 280/095659 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct  8 05:56:59 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:56:59 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a34003430 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:56:59 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:56:59 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a3c003f80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:56:59 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:56:59 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:56:59 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:56:59.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:56:59 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:56:59 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:56:59 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:56:59.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:57:00 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:57:00 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a6c00a2b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:57:00 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v360: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 3 op/s
Oct  8 05:57:01 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:57:01 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a34003430 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:57:01 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:57:01 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a34003430 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:57:01 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:57:01 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:57:01 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:57:01.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:57:01 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:57:01 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:57:01 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:57:01.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:57:02 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:57:02 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a3c003fa0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:57:02 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v361: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Oct  8 05:57:02 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  8 05:57:02 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  8 05:57:03 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:57:03 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a6c00a2b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:57:03 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 05:57:03 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:57:03 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a34003430 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:57:03 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:57:03 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 05:57:03 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:57:03.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 05:57:03 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:57:03 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000034s ======
Oct  8 05:57:03 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:57:03.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct  8 05:57:04 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:57:04 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a34003430 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:57:04 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v362: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Oct  8 05:57:05 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:57:05 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a40002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:57:05 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:57:05 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a6c00a2b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:57:05 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:57:05 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:57:05 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:57:05.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:57:05 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:09:57:05] "GET /metrics HTTP/1.1" 200 48344 "" "Prometheus/2.51.0"
Oct  8 05:57:05 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:09:57:05] "GET /metrics HTTP/1.1" 200 48344 "" "Prometheus/2.51.0"
Oct  8 05:57:05 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:57:05 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:57:05 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:57:05.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:57:06 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:57:06 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a58000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:57:06 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v363: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Oct  8 05:57:07 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:57:07.007Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct  8 05:57:07 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:57:07.008Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct  8 05:57:07 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:57:07 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a34003430 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:57:07 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:57:07 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a3c004000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:57:07 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:57:07 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:57:07 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:57:07.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:57:07 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:57:07 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:57:07 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:57:07.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:57:08 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:57:08 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a6c00a2b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:57:08 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v364: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Oct  8 05:57:08 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 05:57:08 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:57:08.883Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct  8 05:57:08 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:57:08.883Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct  8 05:57:08 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:57:08.883Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct  8 05:57:09 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:57:09 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a58000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:57:09 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:57:09 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a34004140 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:57:09 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:57:09 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:57:09 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:57:09.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:57:09 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:57:09 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:57:09 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:57:09.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:57:10 np0005475493 kernel: SELinux:  Converting 2772 SID table entries...
Oct  8 05:57:10 np0005475493 kernel: SELinux:  policy capability network_peer_controls=1
Oct  8 05:57:10 np0005475493 kernel: SELinux:  policy capability open_perms=1
Oct  8 05:57:10 np0005475493 kernel: SELinux:  policy capability extended_socket_class=1
Oct  8 05:57:10 np0005475493 kernel: SELinux:  policy capability always_check_network=0
Oct  8 05:57:10 np0005475493 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct  8 05:57:10 np0005475493 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct  8 05:57:10 np0005475493 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct  8 05:57:10 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:57:10 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a3c004020 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:57:10 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v365: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Oct  8 05:57:11 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:57:11 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a6c00a2b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:57:11 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:57:11 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a58000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:57:11 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:57:11 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:57:11 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:57:11.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:57:11 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:57:11 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:57:11 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:57:11.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:57:12 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:57:12 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a34004140 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:57:12 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v366: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct  8 05:57:13 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:57:13 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a3c004040 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:57:13 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 05:57:13 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:57:13 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a6c00a2b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:57:13 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:57:13 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000034s ======
Oct  8 05:57:13 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:57:13.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct  8 05:57:13 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:57:13 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:57:13 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:57:13.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:57:14 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:57:14 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a58002c80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:57:14 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v367: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Oct  8 05:57:15 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:57:15 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a34004140 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:57:15 np0005475493 kernel: ganesha.nfsd[159522]: segfault at 50 ip 00007f8b1838a32e sp 00007f8ad4ff8210 error 4 in libntirpc.so.5.8[7f8b1836f000+2c000] likely on CPU 6 (core 0, socket 6)
Oct  8 05:57:15 np0005475493 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Oct  8 05:57:15 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[156687]: 08/10/2025 09:57:15 : epoch 68e634fe : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a3c004060 fd 39 proxy ignored for local
Oct  8 05:57:15 np0005475493 dbus-broker-launch[775]: avc:  op=load_policy lsm=selinux seqno=12 res=1
Oct  8 05:57:15 np0005475493 systemd[1]: Started Process Core Dump (PID 170717/UID 0).
Oct  8 05:57:15 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:57:15 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:57:15 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:57:15.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:57:15 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:09:57:15] "GET /metrics HTTP/1.1" 200 48344 "" "Prometheus/2.51.0"
Oct  8 05:57:15 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:09:57:15] "GET /metrics HTTP/1.1" 200 48344 "" "Prometheus/2.51.0"
Oct  8 05:57:15 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:57:15 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 05:57:15 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:57:15.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 05:57:16 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v368: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct  8 05:57:17 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:57:17.008Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 05:57:17 np0005475493 systemd-coredump[170718]: Process 156719 (ganesha.nfsd) of user 0 dumped core.#012#012Stack trace of thread 53:#012#0  0x00007f8b1838a32e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)#012ELF object binary architecture: AMD x86-64
Oct  8 05:57:17 np0005475493 systemd[1]: systemd-coredump@3-170717-0.service: Deactivated successfully.
Oct  8 05:57:17 np0005475493 systemd[1]: systemd-coredump@3-170717-0.service: Consumed 1.491s CPU time.
Oct  8 05:57:17 np0005475493 podman[170725]: 2025-10-08 09:57:17.233390548 +0000 UTC m=+0.026841050 container died c427e6c11e062f9636a45bc767e0a3cb951225f9153c622ac9e9b72d859be25e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Oct  8 05:57:17 np0005475493 systemd[1]: var-lib-containers-storage-overlay-6201882d2556a974402ebedf55dc29af345432908f34a2728ce3c7ef9e499676-merged.mount: Deactivated successfully.
Oct  8 05:57:17 np0005475493 podman[170725]: 2025-10-08 09:57:17.285468345 +0000 UTC m=+0.078918827 container remove c427e6c11e062f9636a45bc767e0a3cb951225f9153c622ac9e9b72d859be25e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid)
Oct  8 05:57:17 np0005475493 systemd[1]: ceph-787292cc-8154-50c4-9e00-e9be3e817149@nfs.cephfs.2.0.compute-0.uynkmx.service: Main process exited, code=exited, status=139/n/a
Oct  8 05:57:17 np0005475493 systemd[1]: ceph-787292cc-8154-50c4-9e00-e9be3e817149@nfs.cephfs.2.0.compute-0.uynkmx.service: Failed with result 'exit-code'.
Oct  8 05:57:17 np0005475493 systemd[1]: ceph-787292cc-8154-50c4-9e00-e9be3e817149@nfs.cephfs.2.0.compute-0.uynkmx.service: Consumed 1.425s CPU time.
Oct  8 05:57:17 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:57:17 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 05:57:17 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:57:17.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 05:57:17 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  8 05:57:17 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  8 05:57:17 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 05:57:17 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 05:57:17 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:57:17 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:57:17 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:57:17.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:57:18 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 05:57:18 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 05:57:18 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 05:57:18 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 05:57:18 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  8 05:57:18 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  8 05:57:18 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct  8 05:57:18 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  8 05:57:18 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v369: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 256 B/s rd, 0 op/s
Oct  8 05:57:18 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct  8 05:57:18 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:57:18 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct  8 05:57:18 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:57:18 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct  8 05:57:18 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  8 05:57:18 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct  8 05:57:18 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  8 05:57:18 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  8 05:57:18 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  8 05:57:18 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 05:57:18 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  8 05:57:18 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:57:18 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:57:18 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  8 05:57:18 np0005475493 podman[170942]: 2025-10-08 09:57:18.823970135 +0000 UTC m=+0.046434240 container create 91102436dd3052d15a24ef7661b88ef3354c7ac788e6b58a58d3c940220e4387 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_jennings, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct  8 05:57:18 np0005475493 systemd[1]: Started libpod-conmon-91102436dd3052d15a24ef7661b88ef3354c7ac788e6b58a58d3c940220e4387.scope.
Oct  8 05:57:18 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:57:18.884Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct  8 05:57:18 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:57:18.886Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 05:57:18 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:57:18 np0005475493 podman[170942]: 2025-10-08 09:57:18.803710233 +0000 UTC m=+0.026174348 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 05:57:18 np0005475493 podman[170942]: 2025-10-08 09:57:18.905700365 +0000 UTC m=+0.128164500 container init 91102436dd3052d15a24ef7661b88ef3354c7ac788e6b58a58d3c940220e4387 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_jennings, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Oct  8 05:57:18 np0005475493 podman[170942]: 2025-10-08 09:57:18.912624874 +0000 UTC m=+0.135088989 container start 91102436dd3052d15a24ef7661b88ef3354c7ac788e6b58a58d3c940220e4387 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_jennings, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Oct  8 05:57:18 np0005475493 podman[170942]: 2025-10-08 09:57:18.916915937 +0000 UTC m=+0.139380072 container attach 91102436dd3052d15a24ef7661b88ef3354c7ac788e6b58a58d3c940220e4387 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_jennings, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct  8 05:57:18 np0005475493 magical_jennings[170958]: 167 167
Oct  8 05:57:18 np0005475493 systemd[1]: libpod-91102436dd3052d15a24ef7661b88ef3354c7ac788e6b58a58d3c940220e4387.scope: Deactivated successfully.
Oct  8 05:57:18 np0005475493 podman[170942]: 2025-10-08 09:57:18.927348772 +0000 UTC m=+0.149812887 container died 91102436dd3052d15a24ef7661b88ef3354c7ac788e6b58a58d3c940220e4387 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_jennings, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  8 05:57:18 np0005475493 systemd[1]: var-lib-containers-storage-overlay-c3d629ac13b1223662e4c3b1f4553eec34810d92664d6e27145fe5437b9a979d-merged.mount: Deactivated successfully.
Oct  8 05:57:18 np0005475493 podman[170942]: 2025-10-08 09:57:18.979016365 +0000 UTC m=+0.201480480 container remove 91102436dd3052d15a24ef7661b88ef3354c7ac788e6b58a58d3c940220e4387 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_jennings, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  8 05:57:18 np0005475493 systemd[1]: libpod-conmon-91102436dd3052d15a24ef7661b88ef3354c7ac788e6b58a58d3c940220e4387.scope: Deactivated successfully.
Oct  8 05:57:19 np0005475493 podman[170965]: 2025-10-08 09:57:19.069640149 +0000 UTC m=+0.113702380 container health_status 750e81e9592502f2fa35d047c8ae9bd75eb38ed415148db558454f5281397e7f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct  8 05:57:19 np0005475493 podman[171011]: 2025-10-08 09:57:19.151971168 +0000 UTC m=+0.045103626 container create b0fab86044b78d5087085d61491a3d7c11c645d34dc3a121473ec9baeaf0faab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_heisenberg, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0)
Oct  8 05:57:19 np0005475493 systemd[1]: Started libpod-conmon-b0fab86044b78d5087085d61491a3d7c11c645d34dc3a121473ec9baeaf0faab.scope.
Oct  8 05:57:19 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:57:19 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a1f641f53dae2a29456bba149910d78aa32ef47860f0f05c03ce6664f347090/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  8 05:57:19 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a1f641f53dae2a29456bba149910d78aa32ef47860f0f05c03ce6664f347090/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 05:57:19 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a1f641f53dae2a29456bba149910d78aa32ef47860f0f05c03ce6664f347090/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 05:57:19 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a1f641f53dae2a29456bba149910d78aa32ef47860f0f05c03ce6664f347090/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  8 05:57:19 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a1f641f53dae2a29456bba149910d78aa32ef47860f0f05c03ce6664f347090/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  8 05:57:19 np0005475493 podman[171011]: 2025-10-08 09:57:19.136159305 +0000 UTC m=+0.029291793 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 05:57:19 np0005475493 podman[171011]: 2025-10-08 09:57:19.248922532 +0000 UTC m=+0.142055040 container init b0fab86044b78d5087085d61491a3d7c11c645d34dc3a121473ec9baeaf0faab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_heisenberg, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  8 05:57:19 np0005475493 podman[171011]: 2025-10-08 09:57:19.25909353 +0000 UTC m=+0.152226008 container start b0fab86044b78d5087085d61491a3d7c11c645d34dc3a121473ec9baeaf0faab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_heisenberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  8 05:57:19 np0005475493 podman[171011]: 2025-10-08 09:57:19.289155536 +0000 UTC m=+0.182288044 container attach b0fab86044b78d5087085d61491a3d7c11c645d34dc3a121473ec9baeaf0faab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_heisenberg, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Oct  8 05:57:19 np0005475493 peaceful_heisenberg[171028]: --> passed data devices: 0 physical, 1 LVM
Oct  8 05:57:19 np0005475493 peaceful_heisenberg[171028]: --> All data devices are unavailable
Oct  8 05:57:19 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:57:19 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000034s ======
Oct  8 05:57:19 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:57:19.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct  8 05:57:19 np0005475493 systemd[1]: libpod-b0fab86044b78d5087085d61491a3d7c11c645d34dc3a121473ec9baeaf0faab.scope: Deactivated successfully.
Oct  8 05:57:19 np0005475493 podman[171011]: 2025-10-08 09:57:19.650498895 +0000 UTC m=+0.543631363 container died b0fab86044b78d5087085d61491a3d7c11c645d34dc3a121473ec9baeaf0faab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_heisenberg, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1)
Oct  8 05:57:19 np0005475493 systemd[1]: var-lib-containers-storage-overlay-1a1f641f53dae2a29456bba149910d78aa32ef47860f0f05c03ce6664f347090-merged.mount: Deactivated successfully.
Oct  8 05:57:19 np0005475493 podman[171011]: 2025-10-08 09:57:19.69925083 +0000 UTC m=+0.592383298 container remove b0fab86044b78d5087085d61491a3d7c11c645d34dc3a121473ec9baeaf0faab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_heisenberg, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct  8 05:57:19 np0005475493 systemd[1]: libpod-conmon-b0fab86044b78d5087085d61491a3d7c11c645d34dc3a121473ec9baeaf0faab.scope: Deactivated successfully.
Oct  8 05:57:19 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:57:19 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:57:19 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:57:19.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:57:20 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v370: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 342 B/s rd, 0 op/s
Oct  8 05:57:20 np0005475493 podman[171146]: 2025-10-08 09:57:20.331460348 +0000 UTC m=+0.047802005 container create 8f1de58e88a76fb584c68d6b39fff8ccbae41785ec9406c254080c7ba8ae85c1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_curran, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  8 05:57:20 np0005475493 systemd[1]: Started libpod-conmon-8f1de58e88a76fb584c68d6b39fff8ccbae41785ec9406c254080c7ba8ae85c1.scope.
Oct  8 05:57:20 np0005475493 podman[171146]: 2025-10-08 09:57:20.308606181 +0000 UTC m=+0.024947838 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 05:57:20 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:57:20 np0005475493 podman[171146]: 2025-10-08 09:57:20.433648525 +0000 UTC m=+0.149990152 container init 8f1de58e88a76fb584c68d6b39fff8ccbae41785ec9406c254080c7ba8ae85c1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_curran, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  8 05:57:20 np0005475493 podman[171146]: 2025-10-08 09:57:20.442992855 +0000 UTC m=+0.159334472 container start 8f1de58e88a76fb584c68d6b39fff8ccbae41785ec9406c254080c7ba8ae85c1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_curran, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Oct  8 05:57:20 np0005475493 podman[171146]: 2025-10-08 09:57:20.446830532 +0000 UTC m=+0.163172159 container attach 8f1de58e88a76fb584c68d6b39fff8ccbae41785ec9406c254080c7ba8ae85c1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_curran, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  8 05:57:20 np0005475493 practical_curran[171162]: 167 167
Oct  8 05:57:20 np0005475493 systemd[1]: libpod-8f1de58e88a76fb584c68d6b39fff8ccbae41785ec9406c254080c7ba8ae85c1.scope: Deactivated successfully.
Oct  8 05:57:20 np0005475493 podman[171146]: 2025-10-08 09:57:20.45128706 +0000 UTC m=+0.167628687 container died 8f1de58e88a76fb584c68d6b39fff8ccbae41785ec9406c254080c7ba8ae85c1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_curran, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  8 05:57:20 np0005475493 systemd[1]: var-lib-containers-storage-overlay-e22209959425173c155f0f5c87883d9c36f56449868a1336c880879342e88aa3-merged.mount: Deactivated successfully.
Oct  8 05:57:20 np0005475493 podman[171146]: 2025-10-08 09:57:20.488897467 +0000 UTC m=+0.205239084 container remove 8f1de58e88a76fb584c68d6b39fff8ccbae41785ec9406c254080c7ba8ae85c1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_curran, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  8 05:57:20 np0005475493 systemd[1]: libpod-conmon-8f1de58e88a76fb584c68d6b39fff8ccbae41785ec9406c254080c7ba8ae85c1.scope: Deactivated successfully.
Oct  8 05:57:20 np0005475493 podman[171185]: 2025-10-08 09:57:20.688141832 +0000 UTC m=+0.048167218 container create 5a83dcb212d7c5f9b1bf402ab8ebc7c105d51d85f118bcb2f99d9998d218af61 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_yalow, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Oct  8 05:57:20 np0005475493 systemd[1]: Started libpod-conmon-5a83dcb212d7c5f9b1bf402ab8ebc7c105d51d85f118bcb2f99d9998d218af61.scope.
Oct  8 05:57:20 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:57:20 np0005475493 podman[171185]: 2025-10-08 09:57:20.670172987 +0000 UTC m=+0.030198383 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 05:57:20 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5cceccf25527cb5df039b2bddca3b5a22d1f8bc6279b00a6da393b28371112e9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  8 05:57:20 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5cceccf25527cb5df039b2bddca3b5a22d1f8bc6279b00a6da393b28371112e9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 05:57:20 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5cceccf25527cb5df039b2bddca3b5a22d1f8bc6279b00a6da393b28371112e9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 05:57:20 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5cceccf25527cb5df039b2bddca3b5a22d1f8bc6279b00a6da393b28371112e9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  8 05:57:20 np0005475493 podman[171185]: 2025-10-08 09:57:20.790215176 +0000 UTC m=+0.150240612 container init 5a83dcb212d7c5f9b1bf402ab8ebc7c105d51d85f118bcb2f99d9998d218af61 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_yalow, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  8 05:57:20 np0005475493 podman[171185]: 2025-10-08 09:57:20.801442648 +0000 UTC m=+0.161468044 container start 5a83dcb212d7c5f9b1bf402ab8ebc7c105d51d85f118bcb2f99d9998d218af61 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_yalow, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  8 05:57:20 np0005475493 podman[171185]: 2025-10-08 09:57:20.807761278 +0000 UTC m=+0.167786724 container attach 5a83dcb212d7c5f9b1bf402ab8ebc7c105d51d85f118bcb2f99d9998d218af61 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_yalow, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  8 05:57:21 np0005475493 jovial_yalow[171202]: {
Oct  8 05:57:21 np0005475493 jovial_yalow[171202]:    "1": [
Oct  8 05:57:21 np0005475493 jovial_yalow[171202]:        {
Oct  8 05:57:21 np0005475493 jovial_yalow[171202]:            "devices": [
Oct  8 05:57:21 np0005475493 jovial_yalow[171202]:                "/dev/loop3"
Oct  8 05:57:21 np0005475493 jovial_yalow[171202]:            ],
Oct  8 05:57:21 np0005475493 jovial_yalow[171202]:            "lv_name": "ceph_lv0",
Oct  8 05:57:21 np0005475493 jovial_yalow[171202]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  8 05:57:21 np0005475493 jovial_yalow[171202]:            "lv_size": "21470642176",
Oct  8 05:57:21 np0005475493 jovial_yalow[171202]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=787292cc-8154-50c4-9e00-e9be3e817149,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=85fe3e7b-5e0f-4a19-934c-310215b2e933,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct  8 05:57:21 np0005475493 jovial_yalow[171202]:            "lv_uuid": "znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj",
Oct  8 05:57:21 np0005475493 jovial_yalow[171202]:            "name": "ceph_lv0",
Oct  8 05:57:21 np0005475493 jovial_yalow[171202]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  8 05:57:21 np0005475493 jovial_yalow[171202]:            "tags": {
Oct  8 05:57:21 np0005475493 jovial_yalow[171202]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  8 05:57:21 np0005475493 jovial_yalow[171202]:                "ceph.block_uuid": "znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj",
Oct  8 05:57:21 np0005475493 jovial_yalow[171202]:                "ceph.cephx_lockbox_secret": "",
Oct  8 05:57:21 np0005475493 jovial_yalow[171202]:                "ceph.cluster_fsid": "787292cc-8154-50c4-9e00-e9be3e817149",
Oct  8 05:57:21 np0005475493 jovial_yalow[171202]:                "ceph.cluster_name": "ceph",
Oct  8 05:57:21 np0005475493 jovial_yalow[171202]:                "ceph.crush_device_class": "",
Oct  8 05:57:21 np0005475493 jovial_yalow[171202]:                "ceph.encrypted": "0",
Oct  8 05:57:21 np0005475493 jovial_yalow[171202]:                "ceph.osd_fsid": "85fe3e7b-5e0f-4a19-934c-310215b2e933",
Oct  8 05:57:21 np0005475493 jovial_yalow[171202]:                "ceph.osd_id": "1",
Oct  8 05:57:21 np0005475493 jovial_yalow[171202]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  8 05:57:21 np0005475493 jovial_yalow[171202]:                "ceph.type": "block",
Oct  8 05:57:21 np0005475493 jovial_yalow[171202]:                "ceph.vdo": "0",
Oct  8 05:57:21 np0005475493 jovial_yalow[171202]:                "ceph.with_tpm": "0"
Oct  8 05:57:21 np0005475493 jovial_yalow[171202]:            },
Oct  8 05:57:21 np0005475493 jovial_yalow[171202]:            "type": "block",
Oct  8 05:57:21 np0005475493 jovial_yalow[171202]:            "vg_name": "ceph_vg0"
Oct  8 05:57:21 np0005475493 jovial_yalow[171202]:        }
Oct  8 05:57:21 np0005475493 jovial_yalow[171202]:    ]
Oct  8 05:57:21 np0005475493 jovial_yalow[171202]: }
Oct  8 05:57:21 np0005475493 systemd[1]: libpod-5a83dcb212d7c5f9b1bf402ab8ebc7c105d51d85f118bcb2f99d9998d218af61.scope: Deactivated successfully.
Oct  8 05:57:21 np0005475493 podman[171185]: 2025-10-08 09:57:21.126314027 +0000 UTC m=+0.486339463 container died 5a83dcb212d7c5f9b1bf402ab8ebc7c105d51d85f118bcb2f99d9998d218af61 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_yalow, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct  8 05:57:21 np0005475493 systemd[1]: var-lib-containers-storage-overlay-5cceccf25527cb5df039b2bddca3b5a22d1f8bc6279b00a6da393b28371112e9-merged.mount: Deactivated successfully.
Oct  8 05:57:21 np0005475493 podman[171185]: 2025-10-08 09:57:21.404149658 +0000 UTC m=+0.764175044 container remove 5a83dcb212d7c5f9b1bf402ab8ebc7c105d51d85f118bcb2f99d9998d218af61 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_yalow, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  8 05:57:21 np0005475493 systemd[1]: libpod-conmon-5a83dcb212d7c5f9b1bf402ab8ebc7c105d51d85f118bcb2f99d9998d218af61.scope: Deactivated successfully.
Oct  8 05:57:21 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-haproxy-nfs-cephfs-compute-0-cwhopp[96588]: [WARNING] 280/095721 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct  8 05:57:21 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:57:21 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000034s ======
Oct  8 05:57:21 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:57:21.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct  8 05:57:21 np0005475493 kernel: SELinux:  Converting 2772 SID table entries...
Oct  8 05:57:21 np0005475493 kernel: SELinux:  policy capability network_peer_controls=1
Oct  8 05:57:21 np0005475493 kernel: SELinux:  policy capability open_perms=1
Oct  8 05:57:21 np0005475493 kernel: SELinux:  policy capability extended_socket_class=1
Oct  8 05:57:21 np0005475493 kernel: SELinux:  policy capability always_check_network=0
Oct  8 05:57:21 np0005475493 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct  8 05:57:21 np0005475493 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct  8 05:57:21 np0005475493 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct  8 05:57:21 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:57:21 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 05:57:21 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:57:21.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 05:57:21 np0005475493 podman[171324]: 2025-10-08 09:57:21.976323234 +0000 UTC m=+0.043621106 container create 8130a0098996276444d2937087395990bc082ec9f6d09fe00dd06bb6e9e1c8c0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_noether, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  8 05:57:21 np0005475493 dbus-broker-launch[775]: avc:  op=load_policy lsm=selinux seqno=13 res=1
Oct  8 05:57:22 np0005475493 systemd[1]: Started libpod-conmon-8130a0098996276444d2937087395990bc082ec9f6d09fe00dd06bb6e9e1c8c0.scope.
Oct  8 05:57:22 np0005475493 podman[171324]: 2025-10-08 09:57:21.957010404 +0000 UTC m=+0.024308296 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 05:57:22 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:57:22 np0005475493 podman[171324]: 2025-10-08 09:57:22.07967915 +0000 UTC m=+0.146977042 container init 8130a0098996276444d2937087395990bc082ec9f6d09fe00dd06bb6e9e1c8c0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_noether, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  8 05:57:22 np0005475493 podman[171324]: 2025-10-08 09:57:22.085676139 +0000 UTC m=+0.152974011 container start 8130a0098996276444d2937087395990bc082ec9f6d09fe00dd06bb6e9e1c8c0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_noether, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Oct  8 05:57:22 np0005475493 podman[171324]: 2025-10-08 09:57:22.089367572 +0000 UTC m=+0.156665464 container attach 8130a0098996276444d2937087395990bc082ec9f6d09fe00dd06bb6e9e1c8c0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_noether, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  8 05:57:22 np0005475493 vigilant_noether[171341]: 167 167
Oct  8 05:57:22 np0005475493 systemd[1]: libpod-8130a0098996276444d2937087395990bc082ec9f6d09fe00dd06bb6e9e1c8c0.scope: Deactivated successfully.
Oct  8 05:57:22 np0005475493 podman[171324]: 2025-10-08 09:57:22.092384182 +0000 UTC m=+0.159682074 container died 8130a0098996276444d2937087395990bc082ec9f6d09fe00dd06bb6e9e1c8c0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_noether, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Oct  8 05:57:22 np0005475493 systemd[1]: var-lib-containers-storage-overlay-07a0647b039b8f2a990a9f3157f6e0bad85a1eea48655bb58ecf2ba4e5fe80ea-merged.mount: Deactivated successfully.
Oct  8 05:57:22 np0005475493 podman[171324]: 2025-10-08 09:57:22.131577401 +0000 UTC m=+0.198875273 container remove 8130a0098996276444d2937087395990bc082ec9f6d09fe00dd06bb6e9e1c8c0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_noether, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  8 05:57:22 np0005475493 systemd[1]: libpod-conmon-8130a0098996276444d2937087395990bc082ec9f6d09fe00dd06bb6e9e1c8c0.scope: Deactivated successfully.
Oct  8 05:57:22 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v371: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 256 B/s rd, 0 op/s
Oct  8 05:57:22 np0005475493 podman[171364]: 2025-10-08 09:57:22.284923614 +0000 UTC m=+0.037814865 container create bab95bf684e184260b950d7a4cf418ea358f6961103ac0081d77ad43af7ae4e4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_bhabha, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  8 05:57:22 np0005475493 systemd[1]: Started libpod-conmon-bab95bf684e184260b950d7a4cf418ea358f6961103ac0081d77ad43af7ae4e4.scope.
Oct  8 05:57:22 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:57:22 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b919cfb635d8aef8a4906492d205f5909031f2b9a837d8239d642baedbd3658/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  8 05:57:22 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b919cfb635d8aef8a4906492d205f5909031f2b9a837d8239d642baedbd3658/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 05:57:22 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b919cfb635d8aef8a4906492d205f5909031f2b9a837d8239d642baedbd3658/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 05:57:22 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b919cfb635d8aef8a4906492d205f5909031f2b9a837d8239d642baedbd3658/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  8 05:57:22 np0005475493 podman[171364]: 2025-10-08 09:57:22.26730154 +0000 UTC m=+0.020192821 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 05:57:22 np0005475493 podman[171364]: 2025-10-08 09:57:22.38072623 +0000 UTC m=+0.133617501 container init bab95bf684e184260b950d7a4cf418ea358f6961103ac0081d77ad43af7ae4e4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_bhabha, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  8 05:57:22 np0005475493 podman[171364]: 2025-10-08 09:57:22.391119015 +0000 UTC m=+0.144010276 container start bab95bf684e184260b950d7a4cf418ea358f6961103ac0081d77ad43af7ae4e4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_bhabha, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  8 05:57:22 np0005475493 podman[171364]: 2025-10-08 09:57:22.395976805 +0000 UTC m=+0.148868066 container attach bab95bf684e184260b950d7a4cf418ea358f6961103ac0081d77ad43af7ae4e4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_bhabha, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  8 05:57:22 np0005475493 lvm[171479]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct  8 05:57:22 np0005475493 lvm[171479]: VG ceph_vg0 finished
Oct  8 05:57:23 np0005475493 trusting_bhabha[171380]: {}
Oct  8 05:57:23 np0005475493 lvm[171483]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct  8 05:57:23 np0005475493 lvm[171483]: VG ceph_vg0 finished
Oct  8 05:57:23 np0005475493 systemd[1]: libpod-bab95bf684e184260b950d7a4cf418ea358f6961103ac0081d77ad43af7ae4e4.scope: Deactivated successfully.
Oct  8 05:57:23 np0005475493 podman[171364]: 2025-10-08 09:57:23.090134376 +0000 UTC m=+0.843025677 container died bab95bf684e184260b950d7a4cf418ea358f6961103ac0081d77ad43af7ae4e4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_bhabha, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  8 05:57:23 np0005475493 systemd[1]: libpod-bab95bf684e184260b950d7a4cf418ea358f6961103ac0081d77ad43af7ae4e4.scope: Consumed 1.104s CPU time.
Oct  8 05:57:23 np0005475493 systemd[1]: var-lib-containers-storage-overlay-6b919cfb635d8aef8a4906492d205f5909031f2b9a837d8239d642baedbd3658-merged.mount: Deactivated successfully.
Oct  8 05:57:23 np0005475493 podman[171364]: 2025-10-08 09:57:23.14753891 +0000 UTC m=+0.900430171 container remove bab95bf684e184260b950d7a4cf418ea358f6961103ac0081d77ad43af7ae4e4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_bhabha, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True)
Oct  8 05:57:23 np0005475493 systemd[1]: libpod-conmon-bab95bf684e184260b950d7a4cf418ea358f6961103ac0081d77ad43af7ae4e4.scope: Deactivated successfully.
Oct  8 05:57:23 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  8 05:57:23 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:57:23 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  8 05:57:23 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:57:23 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 05:57:23 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:57:23 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 05:57:23 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:57:23.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 05:57:23 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:57:23 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:57:23 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:57:23.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:57:24 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v372: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 342 B/s rd, 0 op/s
Oct  8 05:57:24 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:57:24 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:57:25 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:57:25 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 05:57:25 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:57:25.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 05:57:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:09:57:25] "GET /metrics HTTP/1.1" 200 48348 "" "Prometheus/2.51.0"
Oct  8 05:57:25 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:09:57:25] "GET /metrics HTTP/1.1" 200 48348 "" "Prometheus/2.51.0"
Oct  8 05:57:25 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:57:25 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:57:25 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:57:25.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:57:26 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v373: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 171 B/s rd, 0 op/s
Oct  8 05:57:26 np0005475493 podman[171525]: 2025-10-08 09:57:26.914001926 +0000 UTC m=+0.071827032 container health_status 96c17e36c3e57b043a764fc4d64e20610b8683e4881cc1857643eca7aeb37784 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct  8 05:57:27 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:57:27.010Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 05:57:27 np0005475493 systemd[1]: ceph-787292cc-8154-50c4-9e00-e9be3e817149@nfs.cephfs.2.0.compute-0.uynkmx.service: Scheduled restart job, restart counter is at 4.
Oct  8 05:57:27 np0005475493 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.uynkmx for 787292cc-8154-50c4-9e00-e9be3e817149.
Oct  8 05:57:27 np0005475493 systemd[1]: ceph-787292cc-8154-50c4-9e00-e9be3e817149@nfs.cephfs.2.0.compute-0.uynkmx.service: Consumed 1.425s CPU time.
Oct  8 05:57:27 np0005475493 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.uynkmx for 787292cc-8154-50c4-9e00-e9be3e817149...
Oct  8 05:57:27 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:57:27 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:57:27 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:57:27.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:57:27 np0005475493 podman[171595]: 2025-10-08 09:57:27.783151638 +0000 UTC m=+0.019107824 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 05:57:27 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:57:27 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:57:27 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:57:27.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:57:28 np0005475493 podman[171595]: 2025-10-08 09:57:28.006321186 +0000 UTC m=+0.242277352 container create 6e3f2bf17063e42f526444bce3d228dd80b89337a4330c408305990317c0e676 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct  8 05:57:28 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56aa7bd87c581fd0af616dc67fc3157442e4b37bcac452af83d16c25e948e62c/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Oct  8 05:57:28 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56aa7bd87c581fd0af616dc67fc3157442e4b37bcac452af83d16c25e948e62c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 05:57:28 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56aa7bd87c581fd0af616dc67fc3157442e4b37bcac452af83d16c25e948e62c/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Oct  8 05:57:28 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56aa7bd87c581fd0af616dc67fc3157442e4b37bcac452af83d16c25e948e62c/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.uynkmx-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Oct  8 05:57:28 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v374: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 171 B/s rd, 0 op/s
Oct  8 05:57:28 np0005475493 podman[171595]: 2025-10-08 09:57:28.296880338 +0000 UTC m=+0.532836514 container init 6e3f2bf17063e42f526444bce3d228dd80b89337a4330c408305990317c0e676 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Oct  8 05:57:28 np0005475493 podman[171595]: 2025-10-08 09:57:28.301975337 +0000 UTC m=+0.537931533 container start 6e3f2bf17063e42f526444bce3d228dd80b89337a4330c408305990317c0e676 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  8 05:57:28 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[171611]: 08/10/2025 09:57:28 : epoch 68e63588 : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Oct  8 05:57:28 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[171611]: 08/10/2025 09:57:28 : epoch 68e63588 : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Oct  8 05:57:28 np0005475493 bash[171595]: 6e3f2bf17063e42f526444bce3d228dd80b89337a4330c408305990317c0e676
Oct  8 05:57:28 np0005475493 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.uynkmx for 787292cc-8154-50c4-9e00-e9be3e817149.
Oct  8 05:57:28 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[171611]: 08/10/2025 09:57:28 : epoch 68e63588 : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Oct  8 05:57:28 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[171611]: 08/10/2025 09:57:28 : epoch 68e63588 : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Oct  8 05:57:28 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[171611]: 08/10/2025 09:57:28 : epoch 68e63588 : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Oct  8 05:57:28 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[171611]: 08/10/2025 09:57:28 : epoch 68e63588 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Oct  8 05:57:28 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[171611]: 08/10/2025 09:57:28 : epoch 68e63588 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Oct  8 05:57:28 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 05:57:28 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[171611]: 08/10/2025 09:57:28 : epoch 68e63588 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  8 05:57:28 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:57:28.887Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 05:57:29 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:57:29 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:57:29 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:57:29.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:57:29 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:57:29 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:57:29 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:57:29.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:57:30 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v375: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Oct  8 05:57:31 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:57:31 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 05:57:31 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:57:31.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 05:57:31 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:57:31 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000034s ======
Oct  8 05:57:31 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:57:31.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct  8 05:57:32 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v376: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct  8 05:57:32 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  8 05:57:32 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  8 05:57:33 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 05:57:33 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:57:33 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 05:57:33 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:57:33.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 05:57:33 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:57:33 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:57:33 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:57:33.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:57:34 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v377: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Oct  8 05:57:34 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[171611]: 08/10/2025 09:57:34 : epoch 68e63588 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  8 05:57:34 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[171611]: 08/10/2025 09:57:34 : epoch 68e63588 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 05:57:35 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:57:35 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:57:35 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:57:35.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:57:35 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:09:57:35] "GET /metrics HTTP/1.1" 200 48345 "" "Prometheus/2.51.0"
Oct  8 05:57:35 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:09:57:35] "GET /metrics HTTP/1.1" 200 48345 "" "Prometheus/2.51.0"
Oct  8 05:57:35 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:57:35 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:57:35 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:57:35.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:57:36 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v378: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Oct  8 05:57:37 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:57:37.011Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct  8 05:57:37 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:57:37.012Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 05:57:37 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:57:37 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:57:37 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:57:37.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:57:37 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:57:37 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:57:37 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:57:37.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:57:38 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v379: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Oct  8 05:57:38 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 05:57:38 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:57:38.888Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct  8 05:57:38 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:57:38.889Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 05:57:39 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:57:39 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000034s ======
Oct  8 05:57:39 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:57:39.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct  8 05:57:39 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:57:39 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:57:39 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:57:39.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:57:40 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v380: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Oct  8 05:57:40 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[171611]: 08/10/2025 09:57:40 : epoch 68e63588 : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct  8 05:57:40 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[171611]: 08/10/2025 09:57:40 : epoch 68e63588 : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Oct  8 05:57:40 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[171611]: 08/10/2025 09:57:40 : epoch 68e63588 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Oct  8 05:57:40 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[171611]: 08/10/2025 09:57:40 : epoch 68e63588 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Oct  8 05:57:40 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[171611]: 08/10/2025 09:57:40 : epoch 68e63588 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Oct  8 05:57:40 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[171611]: 08/10/2025 09:57:40 : epoch 68e63588 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Oct  8 05:57:40 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[171611]: 08/10/2025 09:57:40 : epoch 68e63588 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Oct  8 05:57:40 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[171611]: 08/10/2025 09:57:40 : epoch 68e63588 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct  8 05:57:40 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[171611]: 08/10/2025 09:57:40 : epoch 68e63588 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct  8 05:57:40 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[171611]: 08/10/2025 09:57:40 : epoch 68e63588 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct  8 05:57:40 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[171611]: 08/10/2025 09:57:40 : epoch 68e63588 : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Oct  8 05:57:40 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[171611]: 08/10/2025 09:57:40 : epoch 68e63588 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct  8 05:57:40 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[171611]: 08/10/2025 09:57:40 : epoch 68e63588 : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Oct  8 05:57:40 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[171611]: 08/10/2025 09:57:40 : epoch 68e63588 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Oct  8 05:57:40 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[171611]: 08/10/2025 09:57:40 : epoch 68e63588 : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Oct  8 05:57:40 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[171611]: 08/10/2025 09:57:40 : epoch 68e63588 : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Oct  8 05:57:40 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[171611]: 08/10/2025 09:57:40 : epoch 68e63588 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Oct  8 05:57:40 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[171611]: 08/10/2025 09:57:40 : epoch 68e63588 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Oct  8 05:57:40 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[171611]: 08/10/2025 09:57:40 : epoch 68e63588 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Oct  8 05:57:40 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[171611]: 08/10/2025 09:57:40 : epoch 68e63588 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Oct  8 05:57:40 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[171611]: 08/10/2025 09:57:40 : epoch 68e63588 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Oct  8 05:57:40 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[171611]: 08/10/2025 09:57:40 : epoch 68e63588 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Oct  8 05:57:40 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[171611]: 08/10/2025 09:57:40 : epoch 68e63588 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Oct  8 05:57:40 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[171611]: 08/10/2025 09:57:40 : epoch 68e63588 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Oct  8 05:57:40 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[171611]: 08/10/2025 09:57:40 : epoch 68e63588 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Oct  8 05:57:40 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[171611]: 08/10/2025 09:57:40 : epoch 68e63588 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Oct  8 05:57:40 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[171611]: 08/10/2025 09:57:40 : epoch 68e63588 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Oct  8 05:57:41 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[171611]: 08/10/2025 09:57:41 : epoch 68e63588 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0424000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:57:41 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[171611]: 08/10/2025 09:57:41 : epoch 68e63588 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0420001c40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:57:41 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:57:41 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000034s ======
Oct  8 05:57:41 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:57:41.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct  8 05:57:41 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:57:41 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:57:41 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:57:41.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:57:42 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[171611]: 08/10/2025 09:57:42 : epoch 68e63588 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0420001c40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:57:42 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v381: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Oct  8 05:57:43 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 05:57:43 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[171611]: 08/10/2025 09:57:43 : epoch 68e63588 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03fc000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:57:43 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-haproxy-nfs-cephfs-compute-0-cwhopp[96588]: [WARNING] 280/095743 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct  8 05:57:43 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[171611]: 08/10/2025 09:57:43 : epoch 68e63588 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0414001230 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:57:43 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:57:43 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:57:43 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:57:43.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:57:43 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:57:43 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:57:43 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:57:43.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:57:44 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[171611]: 08/10/2025 09:57:44 : epoch 68e63588 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0400000d00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:57:44 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v382: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Oct  8 05:57:45 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[171611]: 08/10/2025 09:57:45 : epoch 68e63588 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0420001c40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:57:45 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[171611]: 08/10/2025 09:57:45 : epoch 68e63588 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03fc0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:57:45 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:57:45 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:57:45 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:57:45.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:57:45 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:09:57:45] "GET /metrics HTTP/1.1" 200 48345 "" "Prometheus/2.51.0"
Oct  8 05:57:45 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:09:57:45] "GET /metrics HTTP/1.1" 200 48345 "" "Prometheus/2.51.0"
Oct  8 05:57:45 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:57:45 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:57:45 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:57:45.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:57:46 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[171611]: 08/10/2025 09:57:46 : epoch 68e63588 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0414001d50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:57:46 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v383: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Oct  8 05:57:47 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:57:47.013Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 05:57:47 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[171611]: 08/10/2025 09:57:47 : epoch 68e63588 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0400001820 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:57:47 np0005475493 kernel: ganesha.nfsd[176470]: segfault at 50 ip 00007f04d3a8e32e sp 00007f049e7fb210 error 4 in libntirpc.so.5.8[7f04d3a73000+2c000] likely on CPU 6 (core 0, socket 6)
Oct  8 05:57:47 np0005475493 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Oct  8 05:57:47 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[171611]: 08/10/2025 09:57:47 : epoch 68e63588 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0420002f20 fd 39 proxy ignored for local
Oct  8 05:57:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] Optimize plan auto_2025-10-08_09:57:47
Oct  8 05:57:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  8 05:57:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] do_upmap
Oct  8 05:57:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] pools ['default.rgw.log', 'images', 'vms', 'default.rgw.control', 'default.rgw.meta', 'volumes', 'cephfs.cephfs.meta', '.mgr', '.nfs', '.rgw.root', 'cephfs.cephfs.data', 'backups']
Oct  8 05:57:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] prepared 0/10 upmap changes
Oct  8 05:57:47 np0005475493 systemd[1]: Started Process Core Dump (PID 180972/UID 0).
Oct  8 05:57:47 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:57:47 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:57:47 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:57:47.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:57:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] _maybe_adjust
Oct  8 05:57:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 05:57:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  8 05:57:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 05:57:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 05:57:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 05:57:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 05:57:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 05:57:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 05:57:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 05:57:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 05:57:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 05:57:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  8 05:57:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 05:57:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 05:57:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 05:57:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct  8 05:57:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 05:57:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  8 05:57:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 05:57:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 05:57:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 05:57:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  8 05:57:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 05:57:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  8 05:57:47 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  8 05:57:47 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  8 05:57:47 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 05:57:47 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 05:57:47 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:57:47 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:57:47 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:57:47.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:57:47 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  8 05:57:47 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  8 05:57:47 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  8 05:57:47 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  8 05:57:47 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  8 05:57:48 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 05:57:48 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 05:57:48 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 05:57:48 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 05:57:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  8 05:57:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  8 05:57:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  8 05:57:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  8 05:57:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  8 05:57:48 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v384: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Oct  8 05:57:48 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 05:57:48 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:57:48.889Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 05:57:49 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:57:49 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:57:49 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:57:49.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:57:49 np0005475493 ceph-mgr[73869]: [devicehealth INFO root] Check health
Oct  8 05:57:49 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:57:49 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:57:49 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:57:49.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:57:49 np0005475493 podman[182508]: 2025-10-08 09:57:49.931863713 +0000 UTC m=+0.097405402 container health_status 750e81e9592502f2fa35d047c8ae9bd75eb38ed415148db558454f5281397e7f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct  8 05:57:50 np0005475493 systemd-coredump[180985]: Process 171615 (ganesha.nfsd) of user 0 dumped core.#012#012Stack trace of thread 42:#012#0  0x00007f04d3a8e32e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)#012ELF object binary architecture: AMD x86-64
Oct  8 05:57:50 np0005475493 systemd[1]: systemd-coredump@4-180972-0.service: Deactivated successfully.
Oct  8 05:57:50 np0005475493 systemd[1]: systemd-coredump@4-180972-0.service: Consumed 1.206s CPU time.
Oct  8 05:57:50 np0005475493 podman[182710]: 2025-10-08 09:57:50.151360971 +0000 UTC m=+0.033865955 container died 6e3f2bf17063e42f526444bce3d228dd80b89337a4330c408305990317c0e676 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  8 05:57:50 np0005475493 systemd[1]: var-lib-containers-storage-overlay-56aa7bd87c581fd0af616dc67fc3157442e4b37bcac452af83d16c25e948e62c-merged.mount: Deactivated successfully.
Oct  8 05:57:50 np0005475493 podman[182710]: 2025-10-08 09:57:50.191527646 +0000 UTC m=+0.074032620 container remove 6e3f2bf17063e42f526444bce3d228dd80b89337a4330c408305990317c0e676 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  8 05:57:50 np0005475493 systemd[1]: ceph-787292cc-8154-50c4-9e00-e9be3e817149@nfs.cephfs.2.0.compute-0.uynkmx.service: Main process exited, code=exited, status=139/n/a
Oct  8 05:57:50 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v385: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 426 B/s wr, 2 op/s
Oct  8 05:57:50 np0005475493 systemd[1]: ceph-787292cc-8154-50c4-9e00-e9be3e817149@nfs.cephfs.2.0.compute-0.uynkmx.service: Failed with result 'exit-code'.
Oct  8 05:57:50 np0005475493 systemd[1]: ceph-787292cc-8154-50c4-9e00-e9be3e817149@nfs.cephfs.2.0.compute-0.uynkmx.service: Consumed 1.384s CPU time.
Oct  8 05:57:51 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:57:51 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:57:51 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:57:51.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:57:51 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:57:51 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:57:51 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:57:51.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:57:52 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v386: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Oct  8 05:57:53 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 05:57:53 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:57:53 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:57:53 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:57:53.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:57:53 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:57:53 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:57:53 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:57:53.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:57:54 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v387: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Oct  8 05:57:55 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-haproxy-nfs-cephfs-compute-0-cwhopp[96588]: [WARNING] 280/095755 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct  8 05:57:55 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:57:55 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:57:55 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:57:55.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:57:55 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:09:57:55] "GET /metrics HTTP/1.1" 200 48343 "" "Prometheus/2.51.0"
Oct  8 05:57:55 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:09:57:55] "GET /metrics HTTP/1.1" 200 48343 "" "Prometheus/2.51.0"
Oct  8 05:57:55 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:57:55 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:57:55 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:57:55.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:57:56 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v388: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct  8 05:57:57 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:57:57.014Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct  8 05:57:57 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:57:57.014Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 05:57:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:57:57.396 163175 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  8 05:57:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:57:57.396 163175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  8 05:57:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:57:57.396 163175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  8 05:57:57 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:57:57 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:57:57 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:57:57.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:57:57 np0005475493 podman[187617]: 2025-10-08 09:57:57.890261807 +0000 UTC m=+0.044167389 container health_status 96c17e36c3e57b043a764fc4d64e20610b8683e4881cc1857643eca7aeb37784 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct  8 05:57:57 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:57:57 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:57:57 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:57:57.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:57:58 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v389: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct  8 05:57:58 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 05:57:58 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:57:58.891Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 05:57:59 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:57:59 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:57:59 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:57:59.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:57:59 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:57:59 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000034s ======
Oct  8 05:57:59 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:57:59.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct  8 05:58:00 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v390: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Oct  8 05:58:00 np0005475493 systemd[1]: ceph-787292cc-8154-50c4-9e00-e9be3e817149@nfs.cephfs.2.0.compute-0.uynkmx.service: Scheduled restart job, restart counter is at 5.
Oct  8 05:58:00 np0005475493 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.uynkmx for 787292cc-8154-50c4-9e00-e9be3e817149.
Oct  8 05:58:00 np0005475493 systemd[1]: ceph-787292cc-8154-50c4-9e00-e9be3e817149@nfs.cephfs.2.0.compute-0.uynkmx.service: Consumed 1.384s CPU time.
Oct  8 05:58:00 np0005475493 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.uynkmx for 787292cc-8154-50c4-9e00-e9be3e817149...
Oct  8 05:58:00 np0005475493 podman[188603]: 2025-10-08 09:58:00.842182069 +0000 UTC m=+0.051561447 container create c7ddf9eb043b2f4319271f9f52568ea96e1aa7b542b5cb278857b11c92e1ddaa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Oct  8 05:58:00 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b305198b3d8efc9db631e903d993aee68b48b62f795ac868089a526f78c5ea29/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Oct  8 05:58:00 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b305198b3d8efc9db631e903d993aee68b48b62f795ac868089a526f78c5ea29/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 05:58:00 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b305198b3d8efc9db631e903d993aee68b48b62f795ac868089a526f78c5ea29/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Oct  8 05:58:00 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b305198b3d8efc9db631e903d993aee68b48b62f795ac868089a526f78c5ea29/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.uynkmx-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Oct  8 05:58:00 np0005475493 podman[188603]: 2025-10-08 09:58:00.820295546 +0000 UTC m=+0.029674934 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 05:58:00 np0005475493 podman[188603]: 2025-10-08 09:58:00.917106268 +0000 UTC m=+0.126485726 container init c7ddf9eb043b2f4319271f9f52568ea96e1aa7b542b5cb278857b11c92e1ddaa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  8 05:58:00 np0005475493 podman[188603]: 2025-10-08 09:58:00.922025622 +0000 UTC m=+0.131405000 container start c7ddf9eb043b2f4319271f9f52568ea96e1aa7b542b5cb278857b11c92e1ddaa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Oct  8 05:58:00 np0005475493 bash[188603]: c7ddf9eb043b2f4319271f9f52568ea96e1aa7b542b5cb278857b11c92e1ddaa
Oct  8 05:58:00 np0005475493 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.uynkmx for 787292cc-8154-50c4-9e00-e9be3e817149.
Oct  8 05:58:00 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:00 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Oct  8 05:58:00 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:00 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Oct  8 05:58:00 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:00 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Oct  8 05:58:00 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:00 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Oct  8 05:58:00 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:00 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Oct  8 05:58:00 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:00 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Oct  8 05:58:01 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:01 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Oct  8 05:58:01 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:01 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  8 05:58:01 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:58:01 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:58:01 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:58:01.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:58:01 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:58:01 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:58:01 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:58:01.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:58:02 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v391: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct  8 05:58:02 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  8 05:58:02 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  8 05:58:03 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 05:58:03 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:58:03 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000034s ======
Oct  8 05:58:03 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:58:03.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct  8 05:58:03 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:58:03 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000034s ======
Oct  8 05:58:03 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:58:03.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct  8 05:58:04 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v392: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Oct  8 05:58:05 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:58:05 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:58:05 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:58:05.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:58:05 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:09:58:05] "GET /metrics HTTP/1.1" 200 48345 "" "Prometheus/2.51.0"
Oct  8 05:58:05 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:09:58:05] "GET /metrics HTTP/1.1" 200 48345 "" "Prometheus/2.51.0"
Oct  8 05:58:05 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:58:05 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:58:05 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:58:05.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:58:06 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v393: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Oct  8 05:58:07 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:58:07.015Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct  8 05:58:07 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:58:07.016Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct  8 05:58:07 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:58:07.017Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 05:58:07 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:07 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  8 05:58:07 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:07 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 05:58:07 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:58:07 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000034s ======
Oct  8 05:58:07 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:58:07.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct  8 05:58:07 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:58:07 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:58:07 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:58:07.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:58:08 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v394: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Oct  8 05:58:08 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 05:58:08 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:58:08.891Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct  8 05:58:08 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:58:08.892Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct  8 05:58:08 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:58:08.893Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 05:58:09 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[103738]: logger=sqlstore.transactions t=2025-10-08T09:58:09.447156599Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 code="database is locked"
Oct  8 05:58:09 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[103738]: logger=cleanup t=2025-10-08T09:58:09.461117916Z level=info msg="Completed cleanup jobs" duration=25.299387ms
Oct  8 05:58:09 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[103738]: logger=plugins.update.checker t=2025-10-08T09:58:09.565954305Z level=info msg="Update check succeeded" duration=53.886013ms
Oct  8 05:58:09 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[103738]: logger=grafana.update.checker t=2025-10-08T09:58:09.567629652Z level=info msg="Update check succeeded" duration=55.616922ms
Oct  8 05:58:09 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:58:09 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:58:09 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:58:09.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:58:09 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:58:09 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:58:09 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:58:09.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:58:10 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v395: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 938 B/s wr, 3 op/s
Oct  8 05:58:11 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:58:11 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000034s ======
Oct  8 05:58:11 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:58:11.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct  8 05:58:11 np0005475493 kernel: SELinux:  Converting 2773 SID table entries...
Oct  8 05:58:11 np0005475493 kernel: SELinux:  policy capability network_peer_controls=1
Oct  8 05:58:11 np0005475493 kernel: SELinux:  policy capability open_perms=1
Oct  8 05:58:11 np0005475493 kernel: SELinux:  policy capability extended_socket_class=1
Oct  8 05:58:11 np0005475493 kernel: SELinux:  policy capability always_check_network=0
Oct  8 05:58:11 np0005475493 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct  8 05:58:11 np0005475493 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct  8 05:58:11 np0005475493 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct  8 05:58:11 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:58:11 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:58:11 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:58:11.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:58:12 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v396: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Oct  8 05:58:13 np0005475493 dbus-broker-launch[754]: Noticed file-system modification, trigger reload.
Oct  8 05:58:13 np0005475493 dbus-broker-launch[775]: avc:  op=load_policy lsm=selinux seqno=14 res=1
Oct  8 05:58:13 np0005475493 dbus-broker-launch[754]: Noticed file-system modification, trigger reload.
Oct  8 05:58:13 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:13 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct  8 05:58:13 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:13 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Oct  8 05:58:13 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:13 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Oct  8 05:58:13 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:13 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Oct  8 05:58:13 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:13 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Oct  8 05:58:13 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:13 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Oct  8 05:58:13 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:13 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Oct  8 05:58:13 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:13 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct  8 05:58:13 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:13 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct  8 05:58:13 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:13 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct  8 05:58:13 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:13 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Oct  8 05:58:13 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:13 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct  8 05:58:13 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:13 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Oct  8 05:58:13 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:13 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Oct  8 05:58:13 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:13 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Oct  8 05:58:13 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:13 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Oct  8 05:58:13 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:13 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Oct  8 05:58:13 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:13 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Oct  8 05:58:13 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:13 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Oct  8 05:58:13 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:13 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Oct  8 05:58:13 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:13 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Oct  8 05:58:13 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:13 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Oct  8 05:58:13 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:13 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Oct  8 05:58:13 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:13 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Oct  8 05:58:13 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:13 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Oct  8 05:58:13 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:13 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Oct  8 05:58:13 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:13 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Oct  8 05:58:13 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 05:58:13 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:13 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5e0000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:58:13 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:13 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5d4001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:58:13 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:58:13 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000034s ======
Oct  8 05:58:13 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:58:13.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct  8 05:58:13 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:58:13 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:58:13 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:58:13.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:58:14 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:14 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5bc000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:58:14 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v397: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.6 KiB/s rd, 1023 B/s wr, 3 op/s
Oct  8 05:58:15 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:15 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5e0000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:58:15 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-haproxy-nfs-cephfs-compute-0-cwhopp[96588]: [WARNING] 280/095815 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct  8 05:58:15 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:15 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5dc001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:58:15 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:58:15 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000034s ======
Oct  8 05:58:15 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:58:15.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct  8 05:58:15 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:09:58:15] "GET /metrics HTTP/1.1" 200 48345 "" "Prometheus/2.51.0"
Oct  8 05:58:15 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:09:58:15] "GET /metrics HTTP/1.1" 200 48345 "" "Prometheus/2.51.0"
Oct  8 05:58:15 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:58:15 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000034s ======
Oct  8 05:58:15 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:58:15.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct  8 05:58:16 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:16 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5d4002520 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:58:16 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v398: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Oct  8 05:58:17 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:58:17.018Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 05:58:17 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:17 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5bc0016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:58:17 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:17 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5e0000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:58:17 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:58:17 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:58:17 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:58:17.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:58:17 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  8 05:58:17 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  8 05:58:17 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 05:58:17 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 05:58:17 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:58:17 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:58:17 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:58:17.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:58:18 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 05:58:18 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 05:58:18 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 05:58:18 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 05:58:18 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:18 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5dc0023e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:58:18 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v399: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Oct  8 05:58:18 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 05:58:18 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:58:18.894Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 05:58:19 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:19 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5d4002520 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:58:19 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:19 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5bc0016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:58:19 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:58:19 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000034s ======
Oct  8 05:58:19 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:58:19.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct  8 05:58:19 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:58:19 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:58:19 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:58:19.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:58:20 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:20 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5e0000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:58:20 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v400: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Oct  8 05:58:20 np0005475493 systemd[1]: Stopping OpenSSH server daemon...
Oct  8 05:58:20 np0005475493 systemd[1]: sshd.service: Deactivated successfully.
Oct  8 05:58:20 np0005475493 systemd[1]: Stopped OpenSSH server daemon.
Oct  8 05:58:20 np0005475493 systemd[1]: sshd.service: Consumed 2.334s CPU time, no IO.
Oct  8 05:58:20 np0005475493 systemd[1]: Stopped target sshd-keygen.target.
Oct  8 05:58:20 np0005475493 systemd[1]: Stopping sshd-keygen.target...
Oct  8 05:58:20 np0005475493 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct  8 05:58:20 np0005475493 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct  8 05:58:20 np0005475493 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct  8 05:58:20 np0005475493 systemd[1]: Reached target sshd-keygen.target.
Oct  8 05:58:20 np0005475493 systemd[1]: Starting OpenSSH server daemon...
Oct  8 05:58:20 np0005475493 systemd[1]: Started OpenSSH server daemon.
Oct  8 05:58:20 np0005475493 podman[189667]: 2025-10-08 09:58:20.783290798 +0000 UTC m=+0.090395316 container health_status 750e81e9592502f2fa35d047c8ae9bd75eb38ed415148db558454f5281397e7f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.build-date=20251001, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller)
Oct  8 05:58:21 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:21 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5dc0023e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:58:21 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:21 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5d4002520 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:58:21 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:58:21 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000034s ======
Oct  8 05:58:21 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:58:21.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct  8 05:58:21 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:58:21 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:58:21 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:58:21.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:58:22 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:22 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5bc0016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:58:22 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v401: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Oct  8 05:58:22 np0005475493 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct  8 05:58:22 np0005475493 systemd[1]: Starting man-db-cache-update.service...
Oct  8 05:58:22 np0005475493 systemd[1]: Reloading.
Oct  8 05:58:22 np0005475493 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  8 05:58:22 np0005475493 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  8 05:58:23 np0005475493 systemd[1]: Queuing reload/restart jobs for marked units…
Oct  8 05:58:23 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 05:58:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:23 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5e00091b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:58:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:23 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5e00091b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:58:23 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:58:23 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:58:23 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:58:23.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:58:23 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:58:23 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:58:23 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:58:23.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:58:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:24 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5dc0023e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:58:24 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v402: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 85 B/s wr, 0 op/s
Oct  8 05:58:25 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Oct  8 05:58:25 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct  8 05:58:25 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct  8 05:58:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:25 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5bc002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:58:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:25 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5d4002520 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:58:25 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:58:25 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000034s ======
Oct  8 05:58:25 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:58:25.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct  8 05:58:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:09:58:25] "GET /metrics HTTP/1.1" 200 48346 "" "Prometheus/2.51.0"
Oct  8 05:58:25 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:09:58:25] "GET /metrics HTTP/1.1" 200 48346 "" "Prometheus/2.51.0"
Oct  8 05:58:25 np0005475493 systemd[1]: Starting PackageKit Daemon...
Oct  8 05:58:25 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct  8 05:58:25 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:58:25 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:58:25 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:58:25.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:58:25 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:58:25 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct  8 05:58:25 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:58:26 np0005475493 systemd[1]: Started PackageKit Daemon.
Oct  8 05:58:26 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct  8 05:58:26 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:58:26 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct  8 05:58:26 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:58:26 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:26 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5e00091b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:58:26 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v403: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct  8 05:58:26 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0)
Oct  8 05:58:26 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct  8 05:58:26 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0)
Oct  8 05:58:26 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Oct  8 05:58:26 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  8 05:58:26 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  8 05:58:26 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct  8 05:58:26 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  8 05:58:26 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v404: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 289 B/s rd, 0 op/s
Oct  8 05:58:26 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v405: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 357 B/s rd, 0 op/s
Oct  8 05:58:26 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct  8 05:58:26 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:58:26 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct  8 05:58:26 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:58:26 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct  8 05:58:26 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  8 05:58:26 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct  8 05:58:26 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  8 05:58:26 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  8 05:58:26 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  8 05:58:26 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:58:26 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:58:26 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:58:26 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:58:26 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct  8 05:58:26 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Oct  8 05:58:26 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  8 05:58:26 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:58:26 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:58:26 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  8 05:58:27 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:58:27.018Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct  8 05:58:27 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:58:27.019Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct  8 05:58:27 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:58:27.019Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct  8 05:58:27 np0005475493 podman[195091]: 2025-10-08 09:58:27.393717146 +0000 UTC m=+0.046271199 container create 3d3bfe3de7e31e2a79a27265dcb94e5c948092b653a464faf54b22b96d4acbb5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_cerf, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  8 05:58:27 np0005475493 systemd[1]: Started libpod-conmon-3d3bfe3de7e31e2a79a27265dcb94e5c948092b653a464faf54b22b96d4acbb5.scope.
Oct  8 05:58:27 np0005475493 podman[195091]: 2025-10-08 09:58:27.36783566 +0000 UTC m=+0.020389733 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 05:58:27 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:58:27 np0005475493 podman[195091]: 2025-10-08 09:58:27.499162357 +0000 UTC m=+0.151716440 container init 3d3bfe3de7e31e2a79a27265dcb94e5c948092b653a464faf54b22b96d4acbb5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_cerf, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  8 05:58:27 np0005475493 podman[195091]: 2025-10-08 09:58:27.51062964 +0000 UTC m=+0.163183693 container start 3d3bfe3de7e31e2a79a27265dcb94e5c948092b653a464faf54b22b96d4acbb5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_cerf, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct  8 05:58:27 np0005475493 podman[195091]: 2025-10-08 09:58:27.514833191 +0000 UTC m=+0.167387244 container attach 3d3bfe3de7e31e2a79a27265dcb94e5c948092b653a464faf54b22b96d4acbb5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_cerf, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Oct  8 05:58:27 np0005475493 naughty_cerf[195214]: 167 167
Oct  8 05:58:27 np0005475493 systemd[1]: libpod-3d3bfe3de7e31e2a79a27265dcb94e5c948092b653a464faf54b22b96d4acbb5.scope: Deactivated successfully.
Oct  8 05:58:27 np0005475493 podman[195091]: 2025-10-08 09:58:27.518233495 +0000 UTC m=+0.170787548 container died 3d3bfe3de7e31e2a79a27265dcb94e5c948092b653a464faf54b22b96d4acbb5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_cerf, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct  8 05:58:27 np0005475493 systemd[1]: var-lib-containers-storage-overlay-945e9ec094fb81f97c58ae3dcf2bc92dce55e919f9162e694cccd46e9a78c4ea-merged.mount: Deactivated successfully.
Oct  8 05:58:27 np0005475493 podman[195091]: 2025-10-08 09:58:27.558881796 +0000 UTC m=+0.211435849 container remove 3d3bfe3de7e31e2a79a27265dcb94e5c948092b653a464faf54b22b96d4acbb5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_cerf, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Oct  8 05:58:27 np0005475493 systemd[1]: libpod-conmon-3d3bfe3de7e31e2a79a27265dcb94e5c948092b653a464faf54b22b96d4acbb5.scope: Deactivated successfully.
Oct  8 05:58:27 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:27 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5b8000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:58:27 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:27 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5d4002520 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:58:27 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:58:27 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:58:27 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:58:27.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:58:27 np0005475493 podman[195455]: 2025-10-08 09:58:27.713241473 +0000 UTC m=+0.041940605 container create 7c2bd0c54da567afc1bc29d52bf49ebf106f0770730176bf0ae912ca2013617d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_kalam, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  8 05:58:27 np0005475493 systemd[1]: Started libpod-conmon-7c2bd0c54da567afc1bc29d52bf49ebf106f0770730176bf0ae912ca2013617d.scope.
Oct  8 05:58:27 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:58:27 np0005475493 podman[195455]: 2025-10-08 09:58:27.694961132 +0000 UTC m=+0.023660284 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 05:58:27 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e07df87e4161525e863ab83fceab3740d31b16ad1475a19cfee358fb0e1354de/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  8 05:58:27 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e07df87e4161525e863ab83fceab3740d31b16ad1475a19cfee358fb0e1354de/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 05:58:27 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e07df87e4161525e863ab83fceab3740d31b16ad1475a19cfee358fb0e1354de/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 05:58:27 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e07df87e4161525e863ab83fceab3740d31b16ad1475a19cfee358fb0e1354de/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  8 05:58:27 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e07df87e4161525e863ab83fceab3740d31b16ad1475a19cfee358fb0e1354de/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  8 05:58:27 np0005475493 podman[195455]: 2025-10-08 09:58:27.807740567 +0000 UTC m=+0.136439729 container init 7c2bd0c54da567afc1bc29d52bf49ebf106f0770730176bf0ae912ca2013617d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_kalam, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Oct  8 05:58:27 np0005475493 podman[195455]: 2025-10-08 09:58:27.818359483 +0000 UTC m=+0.147058615 container start 7c2bd0c54da567afc1bc29d52bf49ebf106f0770730176bf0ae912ca2013617d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_kalam, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Oct  8 05:58:27 np0005475493 podman[195455]: 2025-10-08 09:58:27.821829418 +0000 UTC m=+0.150528550 container attach 7c2bd0c54da567afc1bc29d52bf49ebf106f0770730176bf0ae912ca2013617d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_kalam, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  8 05:58:27 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:58:27 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:58:27 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:58:27.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:58:28 np0005475493 thirsty_kalam[195582]: --> passed data devices: 0 physical, 1 LVM
Oct  8 05:58:28 np0005475493 thirsty_kalam[195582]: --> All data devices are unavailable
Oct  8 05:58:28 np0005475493 systemd[1]: libpod-7c2bd0c54da567afc1bc29d52bf49ebf106f0770730176bf0ae912ca2013617d.scope: Deactivated successfully.
Oct  8 05:58:28 np0005475493 podman[195455]: 2025-10-08 09:58:28.16777712 +0000 UTC m=+0.496476252 container died 7c2bd0c54da567afc1bc29d52bf49ebf106f0770730176bf0ae912ca2013617d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_kalam, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Oct  8 05:58:28 np0005475493 systemd[1]: var-lib-containers-storage-overlay-e07df87e4161525e863ab83fceab3740d31b16ad1475a19cfee358fb0e1354de-merged.mount: Deactivated successfully.
Oct  8 05:58:28 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:28 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5bc002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:58:28 np0005475493 podman[195455]: 2025-10-08 09:58:28.213543702 +0000 UTC m=+0.542242834 container remove 7c2bd0c54da567afc1bc29d52bf49ebf106f0770730176bf0ae912ca2013617d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_kalam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct  8 05:58:28 np0005475493 systemd[1]: libpod-conmon-7c2bd0c54da567afc1bc29d52bf49ebf106f0770730176bf0ae912ca2013617d.scope: Deactivated successfully.
Oct  8 05:58:28 np0005475493 podman[196034]: 2025-10-08 09:58:28.286854456 +0000 UTC m=+0.084470089 container health_status 96c17e36c3e57b043a764fc4d64e20610b8683e4881cc1857643eca7aeb37784 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct  8 05:58:28 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 05:58:28 np0005475493 podman[196715]: 2025-10-08 09:58:28.762881672 +0000 UTC m=+0.039623938 container create 3bee25543bd2715194f6dd35bf3e87bb63f88f7424065684fe313163b8368375 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_mendel, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Oct  8 05:58:28 np0005475493 systemd[1]: Started libpod-conmon-3bee25543bd2715194f6dd35bf3e87bb63f88f7424065684fe313163b8368375.scope.
Oct  8 05:58:28 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:58:28 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v406: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 357 B/s rd, 0 op/s
Oct  8 05:58:28 np0005475493 podman[196715]: 2025-10-08 09:58:28.747119005 +0000 UTC m=+0.023861301 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 05:58:28 np0005475493 podman[196715]: 2025-10-08 09:58:28.84316877 +0000 UTC m=+0.119911086 container init 3bee25543bd2715194f6dd35bf3e87bb63f88f7424065684fe313163b8368375 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_mendel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  8 05:58:28 np0005475493 podman[196715]: 2025-10-08 09:58:28.852477801 +0000 UTC m=+0.129220077 container start 3bee25543bd2715194f6dd35bf3e87bb63f88f7424065684fe313163b8368375 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_mendel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  8 05:58:28 np0005475493 podman[196715]: 2025-10-08 09:58:28.85631807 +0000 UTC m=+0.133060346 container attach 3bee25543bd2715194f6dd35bf3e87bb63f88f7424065684fe313163b8368375 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_mendel, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct  8 05:58:28 np0005475493 sleepy_mendel[196837]: 167 167
Oct  8 05:58:28 np0005475493 systemd[1]: libpod-3bee25543bd2715194f6dd35bf3e87bb63f88f7424065684fe313163b8368375.scope: Deactivated successfully.
Oct  8 05:58:28 np0005475493 podman[196715]: 2025-10-08 09:58:28.860527031 +0000 UTC m=+0.137269307 container died 3bee25543bd2715194f6dd35bf3e87bb63f88f7424065684fe313163b8368375 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_mendel, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Oct  8 05:58:28 np0005475493 systemd[1]: var-lib-containers-storage-overlay-c797247a03231c0e657b2892cb8213a83b9fdf90626c3e35341724dba78c53a7-merged.mount: Deactivated successfully.
Oct  8 05:58:28 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:58:28.894Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 05:58:28 np0005475493 podman[196715]: 2025-10-08 09:58:28.903005483 +0000 UTC m=+0.179747759 container remove 3bee25543bd2715194f6dd35bf3e87bb63f88f7424065684fe313163b8368375 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_mendel, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Oct  8 05:58:28 np0005475493 systemd[1]: libpod-conmon-3bee25543bd2715194f6dd35bf3e87bb63f88f7424065684fe313163b8368375.scope: Deactivated successfully.
Oct  8 05:58:29 np0005475493 podman[197056]: 2025-10-08 09:58:29.071303967 +0000 UTC m=+0.046923791 container create 7cd113447c1a6aa894d6feacdf2cec825fde5f943ed817b5f39ba2c470ab5c96 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_tu, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  8 05:58:29 np0005475493 systemd[1]: Started libpod-conmon-7cd113447c1a6aa894d6feacdf2cec825fde5f943ed817b5f39ba2c470ab5c96.scope.
Oct  8 05:58:29 np0005475493 podman[197056]: 2025-10-08 09:58:29.052209648 +0000 UTC m=+0.027829502 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 05:58:29 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:58:29 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc9443b38c27974b89d2a4c3995b3f2ee046b0ce8c5234fcd8f543a88459880e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  8 05:58:29 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc9443b38c27974b89d2a4c3995b3f2ee046b0ce8c5234fcd8f543a88459880e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 05:58:29 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc9443b38c27974b89d2a4c3995b3f2ee046b0ce8c5234fcd8f543a88459880e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 05:58:29 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc9443b38c27974b89d2a4c3995b3f2ee046b0ce8c5234fcd8f543a88459880e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  8 05:58:29 np0005475493 podman[197056]: 2025-10-08 09:58:29.181541478 +0000 UTC m=+0.157161322 container init 7cd113447c1a6aa894d6feacdf2cec825fde5f943ed817b5f39ba2c470ab5c96 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_tu, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2)
Oct  8 05:58:29 np0005475493 podman[197056]: 2025-10-08 09:58:29.189498954 +0000 UTC m=+0.165118778 container start 7cd113447c1a6aa894d6feacdf2cec825fde5f943ed817b5f39ba2c470ab5c96 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_tu, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  8 05:58:29 np0005475493 podman[197056]: 2025-10-08 09:58:29.196096745 +0000 UTC m=+0.171716589 container attach 7cd113447c1a6aa894d6feacdf2cec825fde5f943ed817b5f39ba2c470ab5c96 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_tu, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  8 05:58:29 np0005475493 relaxed_tu[197167]: {
Oct  8 05:58:29 np0005475493 relaxed_tu[197167]:    "1": [
Oct  8 05:58:29 np0005475493 relaxed_tu[197167]:        {
Oct  8 05:58:29 np0005475493 relaxed_tu[197167]:            "devices": [
Oct  8 05:58:29 np0005475493 relaxed_tu[197167]:                "/dev/loop3"
Oct  8 05:58:29 np0005475493 relaxed_tu[197167]:            ],
Oct  8 05:58:29 np0005475493 relaxed_tu[197167]:            "lv_name": "ceph_lv0",
Oct  8 05:58:29 np0005475493 relaxed_tu[197167]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  8 05:58:29 np0005475493 relaxed_tu[197167]:            "lv_size": "21470642176",
Oct  8 05:58:29 np0005475493 relaxed_tu[197167]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=787292cc-8154-50c4-9e00-e9be3e817149,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=85fe3e7b-5e0f-4a19-934c-310215b2e933,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct  8 05:58:29 np0005475493 relaxed_tu[197167]:            "lv_uuid": "znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj",
Oct  8 05:58:29 np0005475493 relaxed_tu[197167]:            "name": "ceph_lv0",
Oct  8 05:58:29 np0005475493 relaxed_tu[197167]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  8 05:58:29 np0005475493 relaxed_tu[197167]:            "tags": {
Oct  8 05:58:29 np0005475493 relaxed_tu[197167]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  8 05:58:29 np0005475493 relaxed_tu[197167]:                "ceph.block_uuid": "znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj",
Oct  8 05:58:29 np0005475493 relaxed_tu[197167]:                "ceph.cephx_lockbox_secret": "",
Oct  8 05:58:29 np0005475493 relaxed_tu[197167]:                "ceph.cluster_fsid": "787292cc-8154-50c4-9e00-e9be3e817149",
Oct  8 05:58:29 np0005475493 relaxed_tu[197167]:                "ceph.cluster_name": "ceph",
Oct  8 05:58:29 np0005475493 relaxed_tu[197167]:                "ceph.crush_device_class": "",
Oct  8 05:58:29 np0005475493 relaxed_tu[197167]:                "ceph.encrypted": "0",
Oct  8 05:58:29 np0005475493 relaxed_tu[197167]:                "ceph.osd_fsid": "85fe3e7b-5e0f-4a19-934c-310215b2e933",
Oct  8 05:58:29 np0005475493 relaxed_tu[197167]:                "ceph.osd_id": "1",
Oct  8 05:58:29 np0005475493 relaxed_tu[197167]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  8 05:58:29 np0005475493 relaxed_tu[197167]:                "ceph.type": "block",
Oct  8 05:58:29 np0005475493 relaxed_tu[197167]:                "ceph.vdo": "0",
Oct  8 05:58:29 np0005475493 relaxed_tu[197167]:                "ceph.with_tpm": "0"
Oct  8 05:58:29 np0005475493 relaxed_tu[197167]:            },
Oct  8 05:58:29 np0005475493 relaxed_tu[197167]:            "type": "block",
Oct  8 05:58:29 np0005475493 relaxed_tu[197167]:            "vg_name": "ceph_vg0"
Oct  8 05:58:29 np0005475493 relaxed_tu[197167]:        }
Oct  8 05:58:29 np0005475493 relaxed_tu[197167]:    ]
Oct  8 05:58:29 np0005475493 relaxed_tu[197167]: }
Oct  8 05:58:29 np0005475493 systemd[1]: libpod-7cd113447c1a6aa894d6feacdf2cec825fde5f943ed817b5f39ba2c470ab5c96.scope: Deactivated successfully.
Oct  8 05:58:29 np0005475493 podman[197056]: 2025-10-08 09:58:29.491164453 +0000 UTC m=+0.466784297 container died 7cd113447c1a6aa894d6feacdf2cec825fde5f943ed817b5f39ba2c470ab5c96 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_tu, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  8 05:58:29 np0005475493 systemd[1]: var-lib-containers-storage-overlay-bc9443b38c27974b89d2a4c3995b3f2ee046b0ce8c5234fcd8f543a88459880e-merged.mount: Deactivated successfully.
Oct  8 05:58:29 np0005475493 podman[197056]: 2025-10-08 09:58:29.530935935 +0000 UTC m=+0.506555759 container remove 7cd113447c1a6aa894d6feacdf2cec825fde5f943ed817b5f39ba2c470ab5c96 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_tu, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Oct  8 05:58:29 np0005475493 systemd[1]: libpod-conmon-7cd113447c1a6aa894d6feacdf2cec825fde5f943ed817b5f39ba2c470ab5c96.scope: Deactivated successfully.
Oct  8 05:58:29 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:29 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5d4002520 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:58:29 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:29 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5e000a2b0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:58:29 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:58:29 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000034s ======
Oct  8 05:58:29 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:58:29.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct  8 05:58:29 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:58:29 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:58:29 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:58:29.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:58:30 np0005475493 podman[198158]: 2025-10-08 09:58:30.120432809 +0000 UTC m=+0.086179466 container create 961206e716d6737cbe316362c0b7f2ad9f416c0c9a32da4c758b548a1251c0fe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_yonath, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Oct  8 05:58:30 np0005475493 podman[198158]: 2025-10-08 09:58:30.05684477 +0000 UTC m=+0.022591457 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 05:58:30 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:30 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5b80016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:58:30 np0005475493 systemd[1]: Started libpod-conmon-961206e716d6737cbe316362c0b7f2ad9f416c0c9a32da4c758b548a1251c0fe.scope.
Oct  8 05:58:30 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:58:30 np0005475493 podman[198158]: 2025-10-08 09:58:30.328772714 +0000 UTC m=+0.294519401 container init 961206e716d6737cbe316362c0b7f2ad9f416c0c9a32da4c758b548a1251c0fe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_yonath, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  8 05:58:30 np0005475493 podman[198158]: 2025-10-08 09:58:30.335878751 +0000 UTC m=+0.301625408 container start 961206e716d6737cbe316362c0b7f2ad9f416c0c9a32da4c758b548a1251c0fe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_yonath, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True)
Oct  8 05:58:30 np0005475493 friendly_yonath[198353]: 167 167
Oct  8 05:58:30 np0005475493 systemd[1]: libpod-961206e716d6737cbe316362c0b7f2ad9f416c0c9a32da4c758b548a1251c0fe.scope: Deactivated successfully.
Oct  8 05:58:30 np0005475493 podman[198158]: 2025-10-08 09:58:30.414260065 +0000 UTC m=+0.380006712 container attach 961206e716d6737cbe316362c0b7f2ad9f416c0c9a32da4c758b548a1251c0fe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_yonath, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct  8 05:58:30 np0005475493 podman[198158]: 2025-10-08 09:58:30.414745542 +0000 UTC m=+0.380492199 container died 961206e716d6737cbe316362c0b7f2ad9f416c0c9a32da4c758b548a1251c0fe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_yonath, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct  8 05:58:30 np0005475493 systemd[1]: var-lib-containers-storage-overlay-f9667a434889fdd8d7c98b4045100b943b329c2ab94bca1452346f6048d6b64f-merged.mount: Deactivated successfully.
Oct  8 05:58:30 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v407: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 357 B/s rd, 0 op/s
Oct  8 05:58:30 np0005475493 podman[198158]: 2025-10-08 09:58:30.958115302 +0000 UTC m=+0.923861959 container remove 961206e716d6737cbe316362c0b7f2ad9f416c0c9a32da4c758b548a1251c0fe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_yonath, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  8 05:58:30 np0005475493 systemd[1]: libpod-conmon-961206e716d6737cbe316362c0b7f2ad9f416c0c9a32da4c758b548a1251c0fe.scope: Deactivated successfully.
Oct  8 05:58:31 np0005475493 podman[198782]: 2025-10-08 09:58:31.146480088 +0000 UTC m=+0.069168057 container create 476ecd1c11280460b19ac851e8702b97ee97582e07e56e8ee735b9d22976f70f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_cannon, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Oct  8 05:58:31 np0005475493 podman[198782]: 2025-10-08 09:58:31.097742786 +0000 UTC m=+0.020430775 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 05:58:31 np0005475493 systemd[1]: Started libpod-conmon-476ecd1c11280460b19ac851e8702b97ee97582e07e56e8ee735b9d22976f70f.scope.
Oct  8 05:58:31 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:58:31 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8739d9ba8f464f3b1cf883596341984f23a7de90259ed0892401f2959585c88f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  8 05:58:31 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8739d9ba8f464f3b1cf883596341984f23a7de90259ed0892401f2959585c88f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 05:58:31 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8739d9ba8f464f3b1cf883596341984f23a7de90259ed0892401f2959585c88f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 05:58:31 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8739d9ba8f464f3b1cf883596341984f23a7de90259ed0892401f2959585c88f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  8 05:58:31 np0005475493 podman[198782]: 2025-10-08 09:58:31.311072958 +0000 UTC m=+0.233760957 container init 476ecd1c11280460b19ac851e8702b97ee97582e07e56e8ee735b9d22976f70f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_cannon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  8 05:58:31 np0005475493 podman[198782]: 2025-10-08 09:58:31.321097534 +0000 UTC m=+0.243785503 container start 476ecd1c11280460b19ac851e8702b97ee97582e07e56e8ee735b9d22976f70f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_cannon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  8 05:58:31 np0005475493 podman[198782]: 2025-10-08 09:58:31.362606784 +0000 UTC m=+0.285294803 container attach 476ecd1c11280460b19ac851e8702b97ee97582e07e56e8ee735b9d22976f70f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_cannon, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  8 05:58:31 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:31 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5d4002520 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:58:31 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:31 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5c4000fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:58:31 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:58:31 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:58:31 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:58:31.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:58:31 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:58:31 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:58:31 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:58:31.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:58:31 np0005475493 lvm[198921]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct  8 05:58:31 np0005475493 lvm[198921]: VG ceph_vg0 finished
Oct  8 05:58:32 np0005475493 serene_cannon[198846]: {}
Oct  8 05:58:32 np0005475493 systemd[1]: libpod-476ecd1c11280460b19ac851e8702b97ee97582e07e56e8ee735b9d22976f70f.scope: Deactivated successfully.
Oct  8 05:58:32 np0005475493 systemd[1]: libpod-476ecd1c11280460b19ac851e8702b97ee97582e07e56e8ee735b9d22976f70f.scope: Consumed 1.237s CPU time.
Oct  8 05:58:32 np0005475493 podman[198782]: 2025-10-08 09:58:32.088896878 +0000 UTC m=+1.011584857 container died 476ecd1c11280460b19ac851e8702b97ee97582e07e56e8ee735b9d22976f70f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_cannon, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True)
Oct  8 05:58:32 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:32 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5dc0034e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:58:32 np0005475493 systemd[1]: var-lib-containers-storage-overlay-8739d9ba8f464f3b1cf883596341984f23a7de90259ed0892401f2959585c88f-merged.mount: Deactivated successfully.
Oct  8 05:58:32 np0005475493 podman[198782]: 2025-10-08 09:58:32.467243774 +0000 UTC m=+1.389931763 container remove 476ecd1c11280460b19ac851e8702b97ee97582e07e56e8ee735b9d22976f70f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_cannon, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  8 05:58:32 np0005475493 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct  8 05:58:32 np0005475493 systemd[1]: Finished man-db-cache-update.service.
Oct  8 05:58:32 np0005475493 systemd[1]: man-db-cache-update.service: Consumed 10.163s CPU time.
Oct  8 05:58:32 np0005475493 systemd[1]: run-r7106b39303f8423ab15aadc2e42f0a32.service: Deactivated successfully.
Oct  8 05:58:32 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  8 05:58:32 np0005475493 systemd[1]: libpod-conmon-476ecd1c11280460b19ac851e8702b97ee97582e07e56e8ee735b9d22976f70f.scope: Deactivated successfully.
Oct  8 05:58:32 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  8 05:58:32 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  8 05:58:32 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v408: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail
Oct  8 05:58:33 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:58:33 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  8 05:58:33 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 05:58:33 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:58:33 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:33 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5dc0034e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:58:33 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:33 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5dc0034e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:58:33 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:58:33 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000034s ======
Oct  8 05:58:33 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:58:33.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct  8 05:58:33 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:58:33 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:58:33 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:58:33 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000034s ======
Oct  8 05:58:33 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:58:33.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct  8 05:58:34 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:34 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5dc0034e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:58:34 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v409: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 357 B/s rd, 0 op/s
Oct  8 05:58:35 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:35 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5d4002520 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:58:35 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:35 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5b8001fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:58:35 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:58:35 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.002000067s ======
Oct  8 05:58:35 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:58:35.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000067s
Oct  8 05:58:35 np0005475493 python3.9[199092]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct  8 05:58:35 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:09:58:35] "GET /metrics HTTP/1.1" 200 48344 "" "Prometheus/2.51.0"
Oct  8 05:58:35 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:09:58:35] "GET /metrics HTTP/1.1" 200 48344 "" "Prometheus/2.51.0"
Oct  8 05:58:35 np0005475493 systemd[1]: Reloading.
Oct  8 05:58:35 np0005475493 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  8 05:58:35 np0005475493 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  8 05:58:35 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:58:35 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:58:35 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:58:35.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:58:36 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:36 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5dc0034e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:58:36 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v410: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 0 op/s
Oct  8 05:58:36 np0005475493 python3.9[199283]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct  8 05:58:37 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:58:37.020Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 05:58:37 np0005475493 systemd[1]: Reloading.
Oct  8 05:58:37 np0005475493 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  8 05:58:37 np0005475493 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  8 05:58:37 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:37 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5c4001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:58:37 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:37 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5d4002520 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:58:37 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:58:37 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:58:37 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:58:37.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:58:37 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:58:37 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:58:37 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:58:37.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:58:38 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:38 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5b8001fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:58:38 np0005475493 python3.9[199474]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct  8 05:58:38 np0005475493 systemd[1]: Reloading.
Oct  8 05:58:38 np0005475493 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  8 05:58:38 np0005475493 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  8 05:58:38 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 05:58:38 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v411: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct  8 05:58:38 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:58:38.896Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct  8 05:58:38 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:58:38.897Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct  8 05:58:39 np0005475493 python3.9[199665]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct  8 05:58:39 np0005475493 systemd[1]: Reloading.
Oct  8 05:58:39 np0005475493 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  8 05:58:39 np0005475493 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  8 05:58:39 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:39 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5dc0034e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:58:39 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:39 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5c40023e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:58:39 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:58:39 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:58:39 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:58:39.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:58:39 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:58:39 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:58:39 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:58:39.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:58:40 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:40 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5d4002520 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:58:40 np0005475493 python3.9[199857]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  8 05:58:40 np0005475493 systemd[1]: Reloading.
Oct  8 05:58:40 np0005475493 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  8 05:58:40 np0005475493 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  8 05:58:40 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v412: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct  8 05:58:41 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:41 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5b8001fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:58:41 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:41 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5dc0034e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:58:41 np0005475493 python3.9[200048]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  8 05:58:41 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:58:41 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000034s ======
Oct  8 05:58:41 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:58:41.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct  8 05:58:41 np0005475493 systemd[1]: Reloading.
Oct  8 05:58:41 np0005475493 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  8 05:58:41 np0005475493 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  8 05:58:41 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:58:41 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:58:41 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:58:41.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:58:42 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:42 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5c40023e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:58:42 np0005475493 python3.9[200239]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  8 05:58:42 np0005475493 systemd[1]: Reloading.
Oct  8 05:58:42 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v413: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct  8 05:58:42 np0005475493 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  8 05:58:42 np0005475493 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  8 05:58:43 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 05:58:43 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:43 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5d4002520 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:58:43 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:43 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5b80032f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:58:43 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:58:43 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000034s ======
Oct  8 05:58:43 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:58:43.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct  8 05:58:43 np0005475493 python3.9[200455]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  8 05:58:43 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:58:43 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000034s ======
Oct  8 05:58:43 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:58:43.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct  8 05:58:44 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:44 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5dc0034e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:58:44 np0005475493 python3.9[200611]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  8 05:58:44 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-haproxy-nfs-cephfs-compute-0-cwhopp[96588]: [WARNING] 280/095844 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct  8 05:58:44 np0005475493 systemd[1]: Reloading.
Oct  8 05:58:44 np0005475493 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  8 05:58:44 np0005475493 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  8 05:58:44 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v414: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Oct  8 05:58:45 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:45 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5c40023e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:58:45 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:45 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5d4002520 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:58:45 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:58:45 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000035s ======
Oct  8 05:58:45 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:58:45.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000035s
Oct  8 05:58:45 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:09:58:45] "GET /metrics HTTP/1.1" 200 48344 "" "Prometheus/2.51.0"
Oct  8 05:58:45 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:09:58:45] "GET /metrics HTTP/1.1" 200 48344 "" "Prometheus/2.51.0"
Oct  8 05:58:46 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:58:46 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:58:46 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:58:45.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:58:46 np0005475493 python3.9[200803]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-tls.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct  8 05:58:46 np0005475493 systemd[1]: Reloading.
Oct  8 05:58:46 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:46 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5b0000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:58:46 np0005475493 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  8 05:58:46 np0005475493 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  8 05:58:46 np0005475493 systemd[1]: Listening on libvirt proxy daemon socket.
Oct  8 05:58:46 np0005475493 systemd[1]: Listening on libvirt proxy daemon TLS IP socket.
Oct  8 05:58:46 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v415: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct  8 05:58:47 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:58:47.022Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 05:58:47 np0005475493 python3.9[200999]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  8 05:58:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] Optimize plan auto_2025-10-08_09:58:47
Oct  8 05:58:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  8 05:58:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] do_upmap
Oct  8 05:58:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] pools ['.mgr', 'volumes', 'images', 'cephfs.cephfs.meta', '.rgw.root', 'backups', '.nfs', 'cephfs.cephfs.data', 'default.rgw.log', 'default.rgw.meta', 'vms', 'default.rgw.control']
Oct  8 05:58:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] prepared 0/10 upmap changes
Oct  8 05:58:47 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:47 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5b80032f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:58:47 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:47 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5c4003730 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:58:47 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:58:47 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000035s ======
Oct  8 05:58:47 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:58:47.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000035s
Oct  8 05:58:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] _maybe_adjust
Oct  8 05:58:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 05:58:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  8 05:58:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 05:58:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 05:58:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 05:58:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 05:58:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 05:58:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 05:58:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 05:58:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 05:58:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 05:58:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  8 05:58:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 05:58:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 05:58:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 05:58:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct  8 05:58:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 05:58:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  8 05:58:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 05:58:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 05:58:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 05:58:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  8 05:58:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 05:58:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  8 05:58:47 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  8 05:58:47 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  8 05:58:47 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 05:58:47 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 05:58:47 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  8 05:58:47 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  8 05:58:47 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  8 05:58:47 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  8 05:58:47 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  8 05:58:48 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:58:48 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:58:48 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:58:48.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:58:48 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 05:58:48 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 05:58:48 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 05:58:48 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 05:58:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  8 05:58:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  8 05:58:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  8 05:58:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  8 05:58:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  8 05:58:48 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:48 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5d4002520 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:58:48 np0005475493 python3.9[201155]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  8 05:58:48 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 05:58:48 np0005475493 auditd[703]: Audit daemon rotating log files
Oct  8 05:58:48 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v416: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct  8 05:58:48 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:58:48.898Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 05:58:49 np0005475493 python3.9[201310]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  8 05:58:49 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:49 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5b00016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:58:49 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:49 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5b8004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:58:49 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:58:49 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:58:49 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:58:49.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:58:50 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:58:50 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000034s ======
Oct  8 05:58:50 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:58:50.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct  8 05:58:50 np0005475493 python3.9[201466]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  8 05:58:50 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:50 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5c4003730 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:58:50 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v417: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct  8 05:58:50 np0005475493 python3.9[201622]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  8 05:58:50 np0005475493 podman[201623]: 2025-10-08 09:58:50.973190426 +0000 UTC m=+0.121526391 container health_status 750e81e9592502f2fa35d047c8ae9bd75eb38ed415148db558454f5281397e7f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001)
Oct  8 05:58:51 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:51 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5d4002520 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:58:51 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:51 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5b00016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:58:51 np0005475493 python3.9[201804]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  8 05:58:51 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:58:51 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000034s ======
Oct  8 05:58:51 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:58:51.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct  8 05:58:52 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:58:52 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000035s ======
Oct  8 05:58:52 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:58:52.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000035s
Oct  8 05:58:52 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:52 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5b8004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:58:52 np0005475493 python3.9[201960]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  8 05:58:52 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v418: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct  8 05:58:53 np0005475493 python3.9[202116]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  8 05:58:53 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 05:58:53 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:53 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5c4003730 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:58:53 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:53 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5d4002520 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:58:53 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:58:53 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:58:53 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:58:53.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:58:54 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:58:54 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:58:54 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:58:54.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:58:54 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:54 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5b00016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:58:54 np0005475493 python3.9[202272]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  8 05:58:54 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:54 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  8 05:58:54 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v419: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 85 B/s wr, 0 op/s
Oct  8 05:58:55 np0005475493 python3.9[202427]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  8 05:58:55 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:55 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5b00016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:58:55 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:55 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5b00016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:58:55 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:58:55 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000034s ======
Oct  8 05:58:55 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:58:55.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct  8 05:58:55 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:09:58:55] "GET /metrics HTTP/1.1" 200 48341 "" "Prometheus/2.51.0"
Oct  8 05:58:55 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:09:58:55] "GET /metrics HTTP/1.1" 200 48341 "" "Prometheus/2.51.0"
Oct  8 05:58:55 np0005475493 python3.9[202583]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  8 05:58:56 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:58:56 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:58:56 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:58:56.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:58:56 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:56 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5c4003730 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:58:56 np0005475493 python3.9[202739]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  8 05:58:56 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v420: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Oct  8 05:58:57 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:58:57.025Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 05:58:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:58:57.397 163175 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  8 05:58:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:58:57.397 163175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  8 05:58:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:58:57.397 163175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  8 05:58:57 np0005475493 python3.9[202895]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  8 05:58:57 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:57 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5d4002520 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:58:57 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:57 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  8 05:58:57 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:57 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 05:58:57 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:57 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5b00016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:58:57 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:58:57 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000035s ======
Oct  8 05:58:57 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:58:57.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000035s
Oct  8 05:58:58 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:58:58 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:58:58 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:58:58.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:58:58 np0005475493 python3.9[203050]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  8 05:58:58 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:58 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5b00016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:58:58 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 05:58:58 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v421: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 938 B/s wr, 3 op/s
Oct  8 05:58:58 np0005475493 podman[203079]: 2025-10-08 09:58:58.898824626 +0000 UTC m=+0.055187271 container health_status 96c17e36c3e57b043a764fc4d64e20610b8683e4881cc1857643eca7aeb37784 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent)
Oct  8 05:58:58 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:58:58.900Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 05:58:59 np0005475493 python3.9[203227]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/etc/tmpfiles.d/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct  8 05:58:59 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:59 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5c4003730 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:58:59 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:58:59 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5d4002520 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:58:59 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:58:59 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000035s ======
Oct  8 05:58:59 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:58:59.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000035s
Oct  8 05:59:00 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:59:00 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:59:00 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:59:00.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:59:00 np0005475493 python3.9[203379]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct  8 05:59:00 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:59:00 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5b00016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:59:00 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:59:00 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct  8 05:59:00 np0005475493 python3.9[203532]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  8 05:59:00 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v422: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 938 B/s wr, 3 op/s
Oct  8 05:59:01 np0005475493 python3.9[203685]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt/private setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  8 05:59:01 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:59:01 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5b00016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:59:01 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:59:01 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5c4003730 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:59:01 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:59:01 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:59:01 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:59:01.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:59:02 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:59:02 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000034s ======
Oct  8 05:59:02 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:59:02.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct  8 05:59:02 np0005475493 python3.9[203839]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/CA setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  8 05:59:02 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:59:02 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5d4002520 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:59:02 np0005475493 python3.9[203992]: ansible-ansible.builtin.file Invoked with group=qemu owner=root path=/etc/pki/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct  8 05:59:02 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  8 05:59:02 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  8 05:59:02 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v423: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 937 B/s wr, 3 op/s
Oct  8 05:59:03 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 05:59:03 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:59:03 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5bc001090 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:59:03 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:59:03 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5b8004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:59:03 np0005475493 python3.9[204170]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtlogd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  8 05:59:03 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:59:03 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:59:03 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:59:03.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:59:04 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:59:04 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:59:04 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:59:04.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:59:04 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:59:04 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5c4003730 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:59:04 np0005475493 python3.9[204296]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtlogd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1759917543.0660093-1622-220423562716082/.source.conf follow=False _original_basename=virtlogd.conf checksum=d7a72ae92c2c205983b029473e05a6aa4c58ec24 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:59:04 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v424: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1022 B/s wr, 3 op/s
Oct  8 05:59:05 np0005475493 python3.9[204448]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtnodedevd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  8 05:59:05 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:59:05 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5d4002520 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:59:05 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:59:05 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5bc001dd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:59:05 np0005475493 python3.9[204574]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtnodedevd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1759917544.5575209-1622-71640988957292/.source.conf follow=False _original_basename=virtnodedevd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:59:05 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:09:59:05] "GET /metrics HTTP/1.1" 200 48345 "" "Prometheus/2.51.0"
Oct  8 05:59:05 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:09:59:05] "GET /metrics HTTP/1.1" 200 48345 "" "Prometheus/2.51.0"
Oct  8 05:59:05 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:59:05 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:59:05 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:59:05.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:59:06 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:59:06 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:59:06 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:59:06.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:59:06 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:59:06 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5b8004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:59:06 np0005475493 python3.9[204727]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtproxyd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  8 05:59:06 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-haproxy-nfs-cephfs-compute-0-cwhopp[96588]: [WARNING] 280/095906 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct  8 05:59:06 np0005475493 python3.9[204852]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtproxyd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1759917545.8827927-1622-81557711266192/.source.conf follow=False _original_basename=virtproxyd.conf checksum=28bc484b7c9988e03de49d4fcc0a088ea975f716 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:59:06 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v425: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 937 B/s wr, 2 op/s
Oct  8 05:59:07 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:59:07.026Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 05:59:07 np0005475493 python3.9[205005]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtqemud.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  8 05:59:07 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:59:07 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5c4003730 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:59:07 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:59:07 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5c4003730 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:59:07 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:59:07 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:59:07 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:59:07.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:59:08 np0005475493 python3.9[205130]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtqemud.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1759917547.0086787-1622-124114604733317/.source.conf follow=False _original_basename=virtqemud.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:59:08 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:59:08 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:59:08 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:59:08.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:59:08 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:59:08 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5bc001dd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:59:08 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 05:59:08 np0005475493 python3.9[205283]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/qemu.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  8 05:59:08 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v426: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 937 B/s wr, 2 op/s
Oct  8 05:59:08 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:59:08.901Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 05:59:09 np0005475493 python3.9[205409]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/qemu.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1759917548.1934218-1622-238163493429264/.source.conf follow=False _original_basename=qemu.conf.j2 checksum=c44de21af13c90603565570f09ff60c6a41ed8df backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:59:09 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:59:09 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5d4002520 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:59:09 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:59:09 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5d4002520 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:59:09 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:59:09 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:59:09 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:59:09.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:59:09 np0005475493 python3.9[205561]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtsecretd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  8 05:59:10 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:59:10 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:59:10 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:59:10.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:59:10 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:59:10 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5c4003730 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:59:10 np0005475493 python3.9[205687]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtsecretd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1759917549.479857-1622-261542011601556/.source.conf follow=False _original_basename=virtsecretd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:59:10 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v427: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Oct  8 05:59:11 np0005475493 python3.9[205839]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/auth.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  8 05:59:11 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:59:11 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5bc001dd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:59:11 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:59:11 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5bc001dd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:59:11 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:59:11 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000035s ======
Oct  8 05:59:11 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:59:11.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000035s
Oct  8 05:59:11 np0005475493 python3.9[205963]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/auth.conf group=libvirt mode=0600 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1759917550.6971312-1622-205710504951775/.source.conf follow=False _original_basename=auth.conf checksum=a94cd818c374cec2c8425b70d2e0e2f41b743ae4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:59:12 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:59:12 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:59:12 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:59:12.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:59:12 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:59:12 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5bc001dd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:59:12 np0005475493 python3.9[206116]: ansible-ansible.legacy.stat Invoked with path=/etc/sasl2/libvirt.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  8 05:59:12 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v428: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Oct  8 05:59:12 np0005475493 python3.9[206241]: ansible-ansible.legacy.copy Invoked with dest=/etc/sasl2/libvirt.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1759917551.9534554-1622-124437164045017/.source.conf follow=False _original_basename=sasl_libvirt.conf checksum=652e4d404bf79253d06956b8e9847c9364979d4a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:59:13 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 05:59:13 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:59:13 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5c4003730 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:59:13 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:59:13 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5bc001dd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:59:13 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:59:13 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:59:13 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:59:13.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:59:14 np0005475493 python3.9[206394]: ansible-ansible.legacy.command Invoked with cmd=saslpasswd2 -f /etc/libvirt/passwd.db -p -a libvirt -u openstack migration stdin=12345678 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None
Oct  8 05:59:14 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:59:14 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:59:14 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:59:14.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:59:14 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:59:14 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5bc001dd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:59:14 np0005475493 python3.9[206548]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:59:14 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v429: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Oct  8 05:59:15 np0005475493 python3.9[206701]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:59:15 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:59:15 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5bc001dd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:59:15 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:59:15 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5c4003730 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:59:15 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:09:59:15] "GET /metrics HTTP/1.1" 200 48345 "" "Prometheus/2.51.0"
Oct  8 05:59:15 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:09:59:15] "GET /metrics HTTP/1.1" 200 48345 "" "Prometheus/2.51.0"
Oct  8 05:59:15 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:59:15 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:59:15 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:59:15.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:59:16 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:59:16 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:59:16 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:59:16.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:59:16 np0005475493 python3.9[206854]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:59:16 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:59:16 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5ac000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:59:16 np0005475493 python3.9[207006]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:59:16 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v430: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct  8 05:59:17 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:59:17.026Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct  8 05:59:17 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:59:17.027Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct  8 05:59:17 np0005475493 python3.9[207159]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:59:17 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:59:17 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5bc001dd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:59:17 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:59:17 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5b8004180 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:59:17 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:59:17 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:59:17 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:59:17.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:59:17 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  8 05:59:17 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  8 05:59:17 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 05:59:17 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 05:59:18 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:59:18 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000034s ======
Oct  8 05:59:18 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:59:18.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct  8 05:59:18 np0005475493 python3.9[207311]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:59:18 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 05:59:18 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 05:59:18 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 05:59:18 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 05:59:18 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:59:18 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5c4003730 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:59:18 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 05:59:18 np0005475493 python3.9[207464]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:59:18 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v431: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct  8 05:59:18 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:59:18.902Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 05:59:19 np0005475493 python3.9[207617]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:59:19 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:59:19 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5ac0016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:59:19 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:59:19 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5bc0039b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:59:19 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:59:19 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:59:19 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:59:19.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:59:20 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:59:20 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:59:20 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:59:20.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:59:20 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:59:20 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5b8004180 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:59:20 np0005475493 python3.9[207770]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:59:20 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v432: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct  8 05:59:20 np0005475493 python3.9[207922]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:59:21 np0005475493 podman[208047]: 2025-10-08 09:59:21.474716554 +0000 UTC m=+0.165396801 container health_status 750e81e9592502f2fa35d047c8ae9bd75eb38ed415148db558454f5281397e7f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct  8 05:59:21 np0005475493 python3.9[208096]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:59:21 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:59:21 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5c4003730 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:59:21 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:59:21 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5ac0016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:59:21 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:59:21 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:59:21 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:59:21.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:59:22 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:59:22 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:59:22 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:59:22.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:59:22 np0005475493 python3.9[208256]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:59:22 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:59:22 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5bc0039b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:59:22 np0005475493 python3.9[208408]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:59:22 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v433: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct  8 05:59:23 np0005475493 python3.9[208561]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:59:23 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 05:59:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:59:23 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5b8004320 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:59:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:59:23 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5c4003730 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:59:23 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:59:23 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000035s ======
Oct  8 05:59:23 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:59:23.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000035s
Oct  8 05:59:24 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:59:24 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:59:24 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:59:24.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:59:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:59:24 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5ac0016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:59:24 np0005475493 python3.9[208739]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  8 05:59:24 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v434: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Oct  8 05:59:25 np0005475493 python3.9[208863]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759917564.4075418-2285-225649093465777/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:59:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:59:25 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5ac0016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:59:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:59:25 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5b8004340 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:59:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:09:59:25] "GET /metrics HTTP/1.1" 200 48344 "" "Prometheus/2.51.0"
Oct  8 05:59:25 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:09:59:25] "GET /metrics HTTP/1.1" 200 48344 "" "Prometheus/2.51.0"
Oct  8 05:59:25 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:59:25 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:59:25 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:59:25.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:59:26 np0005475493 python3.9[209015]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  8 05:59:26 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:59:26 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:59:26 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:59:26.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:59:26 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:59:26 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5c4003730 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:59:26 np0005475493 python3.9[209139]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759917565.5929344-2285-156920356207424/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:59:26 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v435: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct  8 05:59:27 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:59:27.028Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 05:59:27 np0005475493 python3.9[209291]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  8 05:59:27 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:59:27 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5ac0016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:59:27 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:59:27 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5ac0016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:59:27 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:59:27 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:59:27 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:59:27.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:59:27 np0005475493 python3.9[209415]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759917566.744582-2285-59249924568419/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:59:28 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:59:28 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:59:28 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:59:28.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:59:28 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:59:28 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5b8004360 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:59:28 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 05:59:28 np0005475493 python3.9[209568]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  8 05:59:28 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v436: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct  8 05:59:28 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:59:28.904Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct  8 05:59:28 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:59:28.904Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct  8 05:59:29 np0005475493 podman[209663]: 2025-10-08 09:59:29.104875211 +0000 UTC m=+0.065119330 container health_status 96c17e36c3e57b043a764fc4d64e20610b8683e4881cc1857643eca7aeb37784 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct  8 05:59:29 np0005475493 python3.9[209709]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759917568.1722934-2285-261852975168970/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:59:29 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:59:29 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5c4003730 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:59:29 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:59:29 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5c4003730 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:59:29 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:59:29 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:59:29 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:59:29.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:59:29 np0005475493 python3.9[209861]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  8 05:59:30 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:59:30 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:59:30 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:59:30.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:59:30 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:59:30 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5c4003730 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:59:30 np0005475493 python3.9[209985]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759917569.5138454-2285-250338820494429/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:59:30 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v437: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct  8 05:59:31 np0005475493 python3.9[210137]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  8 05:59:31 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:59:31 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5b8004380 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:59:31 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:59:31 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5bc003cf0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:59:31 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:59:31 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000035s ======
Oct  8 05:59:31 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:59:31.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000035s
Oct  8 05:59:31 np0005475493 python3.9[210261]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759917570.6729443-2285-278591622654645/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:59:32 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:59:32 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:59:32 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:59:32.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:59:32 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:59:32 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5bc003cf0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 05:59:32 np0005475493 python3.9[210414]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  8 05:59:32 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  8 05:59:32 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  8 05:59:32 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v438: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct  8 05:59:33 np0005475493 python3.9[210537]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759917572.085847-2285-221965350089793/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:59:33 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 05:59:33 np0005475493 kernel: ganesha.nfsd[188752]: segfault at 50 ip 00007ff68c83632e sp 00007ff640ff8210 error 4 in libntirpc.so.5.8[7ff68c81b000+2c000] likely on CPU 5 (core 0, socket 5)
Oct  8 05:59:33 np0005475493 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Oct  8 05:59:33 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[188619]: 08/10/2025 09:59:33 : epoch 68e635a8 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff5c4003730 fd 38 proxy ignored for local
Oct  8 05:59:33 np0005475493 systemd[1]: Started Process Core Dump (PID 210691/UID 0).
Oct  8 05:59:33 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:59:33 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:59:33 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:59:33.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:59:33 np0005475493 python3.9[210690]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  8 05:59:34 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:59:34 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:59:34 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:59:34.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:59:34 np0005475493 python3.9[210866]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759917573.3078132-2285-74431260311386/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:59:34 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  8 05:59:34 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  8 05:59:34 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct  8 05:59:34 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  8 05:59:34 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v439: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 519 B/s rd, 0 op/s
Oct  8 05:59:34 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct  8 05:59:34 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:59:34 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct  8 05:59:34 np0005475493 systemd-coredump[210692]: Process 188623 (ganesha.nfsd) of user 0 dumped core.#012#012Stack trace of thread 53:#012#0  0x00007ff68c83632e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)#012ELF object binary architecture: AMD x86-64
Oct  8 05:59:34 np0005475493 systemd[1]: systemd-coredump@5-210691-0.service: Deactivated successfully.
Oct  8 05:59:34 np0005475493 systemd[1]: systemd-coredump@5-210691-0.service: Consumed 1.116s CPU time.
Oct  8 05:59:34 np0005475493 podman[211054]: 2025-10-08 09:59:34.947707761 +0000 UTC m=+0.024526560 container died c7ddf9eb043b2f4319271f9f52568ea96e1aa7b542b5cb278857b11c92e1ddaa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  8 05:59:34 np0005475493 systemd[1]: var-lib-containers-storage-overlay-b305198b3d8efc9db631e903d993aee68b48b62f795ac868089a526f78c5ea29-merged.mount: Deactivated successfully.
Oct  8 05:59:35 np0005475493 podman[211054]: 2025-10-08 09:59:35.009754485 +0000 UTC m=+0.086573274 container remove c7ddf9eb043b2f4319271f9f52568ea96e1aa7b542b5cb278857b11c92e1ddaa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Oct  8 05:59:35 np0005475493 systemd[1]: ceph-787292cc-8154-50c4-9e00-e9be3e817149@nfs.cephfs.2.0.compute-0.uynkmx.service: Main process exited, code=exited, status=139/n/a
Oct  8 05:59:35 np0005475493 python3.9[211050]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  8 05:59:35 np0005475493 systemd[1]: ceph-787292cc-8154-50c4-9e00-e9be3e817149@nfs.cephfs.2.0.compute-0.uynkmx.service: Failed with result 'exit-code'.
Oct  8 05:59:35 np0005475493 systemd[1]: ceph-787292cc-8154-50c4-9e00-e9be3e817149@nfs.cephfs.2.0.compute-0.uynkmx.service: Consumed 1.494s CPU time.
Oct  8 05:59:35 np0005475493 python3.9[211220]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759917574.5159693-2285-247550885788487/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:59:35 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:59:35 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct  8 05:59:35 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  8 05:59:35 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct  8 05:59:35 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  8 05:59:35 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  8 05:59:35 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  8 05:59:35 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  8 05:59:35 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:59:35 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:09:59:35] "GET /metrics HTTP/1.1" 200 48343 "" "Prometheus/2.51.0"
Oct  8 05:59:35 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:09:59:35] "GET /metrics HTTP/1.1" 200 48343 "" "Prometheus/2.51.0"
Oct  8 05:59:35 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:59:35 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000035s ======
Oct  8 05:59:35 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:59:35.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000035s
Oct  8 05:59:36 np0005475493 podman[211360]: 2025-10-08 09:59:36.030923992 +0000 UTC m=+0.039120280 container create 3376d35931367c07541de03e6bfb839360a3655e43b2e3d21245dbe3c08b9645 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_kowalevski, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Oct  8 05:59:36 np0005475493 systemd[1]: Started libpod-conmon-3376d35931367c07541de03e6bfb839360a3655e43b2e3d21245dbe3c08b9645.scope.
Oct  8 05:59:36 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:59:36 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:59:36 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:59:36.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:59:36 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:59:36 np0005475493 podman[211360]: 2025-10-08 09:59:36.109411388 +0000 UTC m=+0.117607696 container init 3376d35931367c07541de03e6bfb839360a3655e43b2e3d21245dbe3c08b9645 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_kowalevski, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Oct  8 05:59:36 np0005475493 podman[211360]: 2025-10-08 09:59:36.015018388 +0000 UTC m=+0.023214696 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 05:59:36 np0005475493 podman[211360]: 2025-10-08 09:59:36.117457993 +0000 UTC m=+0.125654291 container start 3376d35931367c07541de03e6bfb839360a3655e43b2e3d21245dbe3c08b9645 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_kowalevski, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  8 05:59:36 np0005475493 podman[211360]: 2025-10-08 09:59:36.121110739 +0000 UTC m=+0.129307037 container attach 3376d35931367c07541de03e6bfb839360a3655e43b2e3d21245dbe3c08b9645 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_kowalevski, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  8 05:59:36 np0005475493 eager_kowalevski[211411]: 167 167
Oct  8 05:59:36 np0005475493 systemd[1]: libpod-3376d35931367c07541de03e6bfb839360a3655e43b2e3d21245dbe3c08b9645.scope: Deactivated successfully.
Oct  8 05:59:36 np0005475493 podman[211360]: 2025-10-08 09:59:36.122630231 +0000 UTC m=+0.130826599 container died 3376d35931367c07541de03e6bfb839360a3655e43b2e3d21245dbe3c08b9645 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_kowalevski, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct  8 05:59:36 np0005475493 systemd[1]: var-lib-containers-storage-overlay-df96504db490188a969b67ef22ef20d59eb63be801a233d856527b73ba945a5a-merged.mount: Deactivated successfully.
Oct  8 05:59:36 np0005475493 podman[211360]: 2025-10-08 09:59:36.158651404 +0000 UTC m=+0.166847702 container remove 3376d35931367c07541de03e6bfb839360a3655e43b2e3d21245dbe3c08b9645 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_kowalevski, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  8 05:59:36 np0005475493 systemd[1]: libpod-conmon-3376d35931367c07541de03e6bfb839360a3655e43b2e3d21245dbe3c08b9645.scope: Deactivated successfully.
Oct  8 05:59:36 np0005475493 podman[211501]: 2025-10-08 09:59:36.330914479 +0000 UTC m=+0.041052896 container create a1b7827e38bf457f89ba03bc58521486dcf436d77aeb8a5d6c3d39ba91bfa8d9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_maxwell, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  8 05:59:36 np0005475493 systemd[1]: Started libpod-conmon-a1b7827e38bf457f89ba03bc58521486dcf436d77aeb8a5d6c3d39ba91bfa8d9.scope.
Oct  8 05:59:36 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:59:36 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abd756db4e2d82f6fd502af173e5f4e0bdd72898fb66b556038947db4e5da9a3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  8 05:59:36 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abd756db4e2d82f6fd502af173e5f4e0bdd72898fb66b556038947db4e5da9a3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 05:59:36 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abd756db4e2d82f6fd502af173e5f4e0bdd72898fb66b556038947db4e5da9a3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 05:59:36 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abd756db4e2d82f6fd502af173e5f4e0bdd72898fb66b556038947db4e5da9a3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  8 05:59:36 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abd756db4e2d82f6fd502af173e5f4e0bdd72898fb66b556038947db4e5da9a3/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  8 05:59:36 np0005475493 podman[211501]: 2025-10-08 09:59:36.313826024 +0000 UTC m=+0.023964431 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 05:59:36 np0005475493 podman[211501]: 2025-10-08 09:59:36.414181698 +0000 UTC m=+0.124320135 container init a1b7827e38bf457f89ba03bc58521486dcf436d77aeb8a5d6c3d39ba91bfa8d9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_maxwell, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Oct  8 05:59:36 np0005475493 podman[211501]: 2025-10-08 09:59:36.427473514 +0000 UTC m=+0.137611921 container start a1b7827e38bf457f89ba03bc58521486dcf436d77aeb8a5d6c3d39ba91bfa8d9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_maxwell, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Oct  8 05:59:36 np0005475493 podman[211501]: 2025-10-08 09:59:36.447075155 +0000 UTC m=+0.157213592 container attach a1b7827e38bf457f89ba03bc58521486dcf436d77aeb8a5d6c3d39ba91bfa8d9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_maxwell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Oct  8 05:59:36 np0005475493 python3.9[211495]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  8 05:59:36 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v440: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 259 B/s rd, 0 op/s
Oct  8 05:59:36 np0005475493 recursing_maxwell[211518]: --> passed data devices: 0 physical, 1 LVM
Oct  8 05:59:36 np0005475493 recursing_maxwell[211518]: --> All data devices are unavailable
Oct  8 05:59:36 np0005475493 systemd[1]: libpod-a1b7827e38bf457f89ba03bc58521486dcf436d77aeb8a5d6c3d39ba91bfa8d9.scope: Deactivated successfully.
Oct  8 05:59:36 np0005475493 podman[211501]: 2025-10-08 09:59:36.77684905 +0000 UTC m=+0.486987457 container died a1b7827e38bf457f89ba03bc58521486dcf436d77aeb8a5d6c3d39ba91bfa8d9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_maxwell, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct  8 05:59:36 np0005475493 systemd[1]: var-lib-containers-storage-overlay-abd756db4e2d82f6fd502af173e5f4e0bdd72898fb66b556038947db4e5da9a3-merged.mount: Deactivated successfully.
Oct  8 05:59:36 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:59:36 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  8 05:59:36 np0005475493 podman[211501]: 2025-10-08 09:59:36.830460095 +0000 UTC m=+0.540598502 container remove a1b7827e38bf457f89ba03bc58521486dcf436d77aeb8a5d6c3d39ba91bfa8d9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_maxwell, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  8 05:59:36 np0005475493 systemd[1]: libpod-conmon-a1b7827e38bf457f89ba03bc58521486dcf436d77aeb8a5d6c3d39ba91bfa8d9.scope: Deactivated successfully.
Oct  8 05:59:36 np0005475493 python3.9[211666]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759917575.96791-2285-113090132573873/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:59:37 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:59:37.029Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct  8 05:59:37 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:59:37.033Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 05:59:37 np0005475493 podman[211842]: 2025-10-08 09:59:37.382962913 +0000 UTC m=+0.050634594 container create 7c35dfd7b61f9ce25f954ff181c007b0a2c62b0294a23f74b91357dbbaa13b08 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_golick, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  8 05:59:37 np0005475493 systemd[1]: Started libpod-conmon-7c35dfd7b61f9ce25f954ff181c007b0a2c62b0294a23f74b91357dbbaa13b08.scope.
Oct  8 05:59:37 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:59:37 np0005475493 podman[211842]: 2025-10-08 09:59:37.449404398 +0000 UTC m=+0.117076089 container init 7c35dfd7b61f9ce25f954ff181c007b0a2c62b0294a23f74b91357dbbaa13b08 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_golick, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct  8 05:59:37 np0005475493 podman[211842]: 2025-10-08 09:59:37.361167207 +0000 UTC m=+0.028838978 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 05:59:37 np0005475493 podman[211842]: 2025-10-08 09:59:37.455201046 +0000 UTC m=+0.122872717 container start 7c35dfd7b61f9ce25f954ff181c007b0a2c62b0294a23f74b91357dbbaa13b08 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_golick, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  8 05:59:37 np0005475493 podman[211842]: 2025-10-08 09:59:37.458136695 +0000 UTC m=+0.125808366 container attach 7c35dfd7b61f9ce25f954ff181c007b0a2c62b0294a23f74b91357dbbaa13b08 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_golick, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  8 05:59:37 np0005475493 condescending_golick[211899]: 167 167
Oct  8 05:59:37 np0005475493 systemd[1]: libpod-7c35dfd7b61f9ce25f954ff181c007b0a2c62b0294a23f74b91357dbbaa13b08.scope: Deactivated successfully.
Oct  8 05:59:37 np0005475493 podman[211842]: 2025-10-08 09:59:37.459749421 +0000 UTC m=+0.127421092 container died 7c35dfd7b61f9ce25f954ff181c007b0a2c62b0294a23f74b91357dbbaa13b08 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_golick, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct  8 05:59:37 np0005475493 systemd[1]: var-lib-containers-storage-overlay-2518039d2939ed4151eb4133fea793cc9536eae744575d594c88d8b1d50311c0-merged.mount: Deactivated successfully.
Oct  8 05:59:37 np0005475493 podman[211842]: 2025-10-08 09:59:37.500179704 +0000 UTC m=+0.167851375 container remove 7c35dfd7b61f9ce25f954ff181c007b0a2c62b0294a23f74b91357dbbaa13b08 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_golick, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  8 05:59:37 np0005475493 systemd[1]: libpod-conmon-7c35dfd7b61f9ce25f954ff181c007b0a2c62b0294a23f74b91357dbbaa13b08.scope: Deactivated successfully.
Oct  8 05:59:37 np0005475493 podman[211952]: 2025-10-08 09:59:37.657584472 +0000 UTC m=+0.044816715 container create 748f49c47fe67cdac74267bc3d64067d33db18c77dd26f920e78514ba9cb5715 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_babbage, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  8 05:59:37 np0005475493 systemd[1]: Started libpod-conmon-748f49c47fe67cdac74267bc3d64067d33db18c77dd26f920e78514ba9cb5715.scope.
Oct  8 05:59:37 np0005475493 python3.9[211939]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  8 05:59:37 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:59:37 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5689a59894d6119bd944b9ecb295b34ad9a6745d3d9d017fcb8199da0e5605c1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  8 05:59:37 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5689a59894d6119bd944b9ecb295b34ad9a6745d3d9d017fcb8199da0e5605c1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 05:59:37 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5689a59894d6119bd944b9ecb295b34ad9a6745d3d9d017fcb8199da0e5605c1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 05:59:37 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5689a59894d6119bd944b9ecb295b34ad9a6745d3d9d017fcb8199da0e5605c1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  8 05:59:37 np0005475493 podman[211952]: 2025-10-08 09:59:37.721481828 +0000 UTC m=+0.108714101 container init 748f49c47fe67cdac74267bc3d64067d33db18c77dd26f920e78514ba9cb5715 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_babbage, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  8 05:59:37 np0005475493 podman[211952]: 2025-10-08 09:59:37.728686815 +0000 UTC m=+0.115919058 container start 748f49c47fe67cdac74267bc3d64067d33db18c77dd26f920e78514ba9cb5715 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_babbage, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid)
Oct  8 05:59:37 np0005475493 podman[211952]: 2025-10-08 09:59:37.642112092 +0000 UTC m=+0.029344365 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 05:59:37 np0005475493 podman[211952]: 2025-10-08 09:59:37.731790722 +0000 UTC m=+0.119022965 container attach 748f49c47fe67cdac74267bc3d64067d33db18c77dd26f920e78514ba9cb5715 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_babbage, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Oct  8 05:59:37 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:59:37 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:59:37 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:59:37.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:59:38 np0005475493 hungry_babbage[211969]: {
Oct  8 05:59:38 np0005475493 hungry_babbage[211969]:    "1": [
Oct  8 05:59:38 np0005475493 hungry_babbage[211969]:        {
Oct  8 05:59:38 np0005475493 hungry_babbage[211969]:            "devices": [
Oct  8 05:59:38 np0005475493 hungry_babbage[211969]:                "/dev/loop3"
Oct  8 05:59:38 np0005475493 hungry_babbage[211969]:            ],
Oct  8 05:59:38 np0005475493 hungry_babbage[211969]:            "lv_name": "ceph_lv0",
Oct  8 05:59:38 np0005475493 hungry_babbage[211969]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  8 05:59:38 np0005475493 hungry_babbage[211969]:            "lv_size": "21470642176",
Oct  8 05:59:38 np0005475493 hungry_babbage[211969]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=787292cc-8154-50c4-9e00-e9be3e817149,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=85fe3e7b-5e0f-4a19-934c-310215b2e933,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct  8 05:59:38 np0005475493 hungry_babbage[211969]:            "lv_uuid": "znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj",
Oct  8 05:59:38 np0005475493 hungry_babbage[211969]:            "name": "ceph_lv0",
Oct  8 05:59:38 np0005475493 hungry_babbage[211969]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  8 05:59:38 np0005475493 hungry_babbage[211969]:            "tags": {
Oct  8 05:59:38 np0005475493 hungry_babbage[211969]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  8 05:59:38 np0005475493 hungry_babbage[211969]:                "ceph.block_uuid": "znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj",
Oct  8 05:59:38 np0005475493 hungry_babbage[211969]:                "ceph.cephx_lockbox_secret": "",
Oct  8 05:59:38 np0005475493 hungry_babbage[211969]:                "ceph.cluster_fsid": "787292cc-8154-50c4-9e00-e9be3e817149",
Oct  8 05:59:38 np0005475493 hungry_babbage[211969]:                "ceph.cluster_name": "ceph",
Oct  8 05:59:38 np0005475493 hungry_babbage[211969]:                "ceph.crush_device_class": "",
Oct  8 05:59:38 np0005475493 hungry_babbage[211969]:                "ceph.encrypted": "0",
Oct  8 05:59:38 np0005475493 hungry_babbage[211969]:                "ceph.osd_fsid": "85fe3e7b-5e0f-4a19-934c-310215b2e933",
Oct  8 05:59:38 np0005475493 hungry_babbage[211969]:                "ceph.osd_id": "1",
Oct  8 05:59:38 np0005475493 hungry_babbage[211969]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  8 05:59:38 np0005475493 hungry_babbage[211969]:                "ceph.type": "block",
Oct  8 05:59:38 np0005475493 hungry_babbage[211969]:                "ceph.vdo": "0",
Oct  8 05:59:38 np0005475493 hungry_babbage[211969]:                "ceph.with_tpm": "0"
Oct  8 05:59:38 np0005475493 hungry_babbage[211969]:            },
Oct  8 05:59:38 np0005475493 hungry_babbage[211969]:            "type": "block",
Oct  8 05:59:38 np0005475493 hungry_babbage[211969]:            "vg_name": "ceph_vg0"
Oct  8 05:59:38 np0005475493 hungry_babbage[211969]:        }
Oct  8 05:59:38 np0005475493 hungry_babbage[211969]:    ]
Oct  8 05:59:38 np0005475493 hungry_babbage[211969]: }
Oct  8 05:59:38 np0005475493 systemd[1]: libpod-748f49c47fe67cdac74267bc3d64067d33db18c77dd26f920e78514ba9cb5715.scope: Deactivated successfully.
Oct  8 05:59:38 np0005475493 podman[211952]: 2025-10-08 09:59:38.050515939 +0000 UTC m=+0.437748182 container died 748f49c47fe67cdac74267bc3d64067d33db18c77dd26f920e78514ba9cb5715 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_babbage, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  8 05:59:38 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:59:38 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:59:38 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:59:38.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:59:38 np0005475493 systemd[1]: var-lib-containers-storage-overlay-5689a59894d6119bd944b9ecb295b34ad9a6745d3d9d017fcb8199da0e5605c1-merged.mount: Deactivated successfully.
Oct  8 05:59:38 np0005475493 podman[211952]: 2025-10-08 09:59:38.097391043 +0000 UTC m=+0.484623276 container remove 748f49c47fe67cdac74267bc3d64067d33db18c77dd26f920e78514ba9cb5715 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_babbage, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  8 05:59:38 np0005475493 systemd[1]: libpod-conmon-748f49c47fe67cdac74267bc3d64067d33db18c77dd26f920e78514ba9cb5715.scope: Deactivated successfully.
Oct  8 05:59:38 np0005475493 python3.9[212101]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759917577.2056708-2285-211746489354310/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:59:38 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 05:59:38 np0005475493 podman[212352]: 2025-10-08 09:59:38.618892971 +0000 UTC m=+0.043198580 container create 439f3487353009913c924ba15cceea273ebdff5a23225adc02c0862f1c03229a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_cerf, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  8 05:59:38 np0005475493 systemd[1]: Started libpod-conmon-439f3487353009913c924ba15cceea273ebdff5a23225adc02c0862f1c03229a.scope.
Oct  8 05:59:38 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:59:38 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v441: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 259 B/s rd, 0 op/s
Oct  8 05:59:38 np0005475493 podman[212352]: 2025-10-08 09:59:38.60134948 +0000 UTC m=+0.025655109 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 05:59:38 np0005475493 podman[212352]: 2025-10-08 09:59:38.694517279 +0000 UTC m=+0.118822908 container init 439f3487353009913c924ba15cceea273ebdff5a23225adc02c0862f1c03229a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_cerf, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  8 05:59:38 np0005475493 podman[212352]: 2025-10-08 09:59:38.701215138 +0000 UTC m=+0.125520747 container start 439f3487353009913c924ba15cceea273ebdff5a23225adc02c0862f1c03229a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_cerf, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Oct  8 05:59:38 np0005475493 podman[212352]: 2025-10-08 09:59:38.703800727 +0000 UTC m=+0.128106336 container attach 439f3487353009913c924ba15cceea273ebdff5a23225adc02c0862f1c03229a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_cerf, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  8 05:59:38 np0005475493 focused_cerf[212374]: 167 167
Oct  8 05:59:38 np0005475493 systemd[1]: libpod-439f3487353009913c924ba15cceea273ebdff5a23225adc02c0862f1c03229a.scope: Deactivated successfully.
Oct  8 05:59:38 np0005475493 podman[212352]: 2025-10-08 09:59:38.705963761 +0000 UTC m=+0.130269370 container died 439f3487353009913c924ba15cceea273ebdff5a23225adc02c0862f1c03229a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_cerf, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  8 05:59:38 np0005475493 systemd[1]: var-lib-containers-storage-overlay-2af6956d2899801a6cfb17b663ac6f7531ed0893761dab4ff6e1c4f72a0dc90d-merged.mount: Deactivated successfully.
Oct  8 05:59:38 np0005475493 podman[212352]: 2025-10-08 09:59:38.738599297 +0000 UTC m=+0.162904906 container remove 439f3487353009913c924ba15cceea273ebdff5a23225adc02c0862f1c03229a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_cerf, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct  8 05:59:38 np0005475493 systemd[1]: libpod-conmon-439f3487353009913c924ba15cceea273ebdff5a23225adc02c0862f1c03229a.scope: Deactivated successfully.
Oct  8 05:59:38 np0005475493 python3.9[212367]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  8 05:59:38 np0005475493 podman[212401]: 2025-10-08 09:59:38.885599729 +0000 UTC m=+0.039160271 container create 020d9885466ceebb0d7a41be8f46bbd19f537f0bf1b35aa9ad60ef60705779ea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_varahamihira, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  8 05:59:38 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:59:38.905Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 05:59:38 np0005475493 systemd[1]: Started libpod-conmon-020d9885466ceebb0d7a41be8f46bbd19f537f0bf1b35aa9ad60ef60705779ea.scope.
Oct  8 05:59:38 np0005475493 systemd[1]: Started libcrun container.
Oct  8 05:59:38 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37dba5f71e4da9032834e67fdae6d0870602d49cc48678f48c4b98082c7d0c02/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  8 05:59:38 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37dba5f71e4da9032834e67fdae6d0870602d49cc48678f48c4b98082c7d0c02/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 05:59:38 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37dba5f71e4da9032834e67fdae6d0870602d49cc48678f48c4b98082c7d0c02/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 05:59:38 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37dba5f71e4da9032834e67fdae6d0870602d49cc48678f48c4b98082c7d0c02/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  8 05:59:38 np0005475493 podman[212401]: 2025-10-08 09:59:38.963022308 +0000 UTC m=+0.116582870 container init 020d9885466ceebb0d7a41be8f46bbd19f537f0bf1b35aa9ad60ef60705779ea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_varahamihira, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  8 05:59:38 np0005475493 podman[212401]: 2025-10-08 09:59:38.867640104 +0000 UTC m=+0.021200666 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 05:59:38 np0005475493 podman[212401]: 2025-10-08 09:59:38.969993666 +0000 UTC m=+0.123554208 container start 020d9885466ceebb0d7a41be8f46bbd19f537f0bf1b35aa9ad60ef60705779ea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_varahamihira, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True)
Oct  8 05:59:38 np0005475493 podman[212401]: 2025-10-08 09:59:38.972563145 +0000 UTC m=+0.126123697 container attach 020d9885466ceebb0d7a41be8f46bbd19f537f0bf1b35aa9ad60ef60705779ea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_varahamihira, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  8 05:59:39 np0005475493 python3.9[212542]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759917578.336809-2285-188981391456574/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:59:39 np0005475493 lvm[212688]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct  8 05:59:39 np0005475493 lvm[212688]: VG ceph_vg0 finished
Oct  8 05:59:39 np0005475493 reverent_varahamihira[212449]: {}
Oct  8 05:59:39 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-haproxy-nfs-cephfs-compute-0-cwhopp[96588]: [WARNING] 280/095939 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct  8 05:59:39 np0005475493 systemd[1]: libpod-020d9885466ceebb0d7a41be8f46bbd19f537f0bf1b35aa9ad60ef60705779ea.scope: Deactivated successfully.
Oct  8 05:59:39 np0005475493 podman[212401]: 2025-10-08 09:59:39.67928154 +0000 UTC m=+0.832842092 container died 020d9885466ceebb0d7a41be8f46bbd19f537f0bf1b35aa9ad60ef60705779ea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_varahamihira, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Oct  8 05:59:39 np0005475493 systemd[1]: libpod-020d9885466ceebb0d7a41be8f46bbd19f537f0bf1b35aa9ad60ef60705779ea.scope: Consumed 1.051s CPU time.
Oct  8 05:59:39 np0005475493 systemd[1]: var-lib-containers-storage-overlay-37dba5f71e4da9032834e67fdae6d0870602d49cc48678f48c4b98082c7d0c02-merged.mount: Deactivated successfully.
Oct  8 05:59:39 np0005475493 podman[212401]: 2025-10-08 09:59:39.721833147 +0000 UTC m=+0.875393709 container remove 020d9885466ceebb0d7a41be8f46bbd19f537f0bf1b35aa9ad60ef60705779ea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_varahamihira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1)
Oct  8 05:59:39 np0005475493 systemd[1]: libpod-conmon-020d9885466ceebb0d7a41be8f46bbd19f537f0bf1b35aa9ad60ef60705779ea.scope: Deactivated successfully.
Oct  8 05:59:39 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  8 05:59:39 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:59:39 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:59:39 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:59:39.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:59:39 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:59:39 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  8 05:59:39 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:59:39 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:59:39 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 05:59:39 np0005475493 python3.9[212776]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  8 05:59:40 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:59:40 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:59:40 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:59:40.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:59:40 np0005475493 python3.9[212925]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759917579.50294-2285-683954650607/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:59:40 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v442: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 259 B/s rd, 0 op/s
Oct  8 05:59:41 np0005475493 python3.9[213077]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  8 05:59:41 np0005475493 python3.9[213201]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759917580.7190804-2285-120139854877645/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:59:41 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:59:41 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:59:41 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:59:41.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:59:42 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:59:42 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:59:42 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:59:42.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:59:42 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v443: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 259 B/s rd, 0 op/s
Oct  8 05:59:43 np0005475493 python3.9[213352]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail#012ls -lRZ /run/libvirt | grep -E ':container_\S+_t'#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  8 05:59:43 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 05:59:43 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:59:43 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:59:43 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:59:43.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:59:44 np0005475493 python3.9[213534]: ansible-ansible.posix.seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False
Oct  8 05:59:44 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:59:44 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:59:44 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:59:44.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:59:44 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v444: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 433 B/s rd, 0 op/s
Oct  8 05:59:45 np0005475493 systemd[1]: ceph-787292cc-8154-50c4-9e00-e9be3e817149@nfs.cephfs.2.0.compute-0.uynkmx.service: Scheduled restart job, restart counter is at 6.
Oct  8 05:59:45 np0005475493 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.uynkmx for 787292cc-8154-50c4-9e00-e9be3e817149.
Oct  8 05:59:45 np0005475493 dbus-broker-launch[775]: avc:  op=load_policy lsm=selinux seqno=15 res=1
Oct  8 05:59:45 np0005475493 systemd[1]: ceph-787292cc-8154-50c4-9e00-e9be3e817149@nfs.cephfs.2.0.compute-0.uynkmx.service: Consumed 1.494s CPU time.
Oct  8 05:59:45 np0005475493 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.uynkmx for 787292cc-8154-50c4-9e00-e9be3e817149...
Oct  8 05:59:45 np0005475493 podman[213616]: 2025-10-08 09:59:45.689149787 +0000 UTC m=+0.075511035 container create 2a68b9f1bcb66211021ed9b4fd46add9bf3082d3ff8f1593df68d96a304a7aa2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid)
Oct  8 05:59:45 np0005475493 podman[213616]: 2025-10-08 09:59:45.642487681 +0000 UTC m=+0.028848959 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 05:59:45 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:09:59:45] "GET /metrics HTTP/1.1" 200 48343 "" "Prometheus/2.51.0"
Oct  8 05:59:45 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:09:59:45] "GET /metrics HTTP/1.1" 200 48343 "" "Prometheus/2.51.0"
Oct  8 05:59:45 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d95dca391282c1ee9ead62ef4a6924429d82eaa2356c084f9e1be43d78b2b69/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Oct  8 05:59:45 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d95dca391282c1ee9ead62ef4a6924429d82eaa2356c084f9e1be43d78b2b69/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 05:59:45 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d95dca391282c1ee9ead62ef4a6924429d82eaa2356c084f9e1be43d78b2b69/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Oct  8 05:59:45 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d95dca391282c1ee9ead62ef4a6924429d82eaa2356c084f9e1be43d78b2b69/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.uynkmx-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Oct  8 05:59:45 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:59:45 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:59:45 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:59:45.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:59:45 np0005475493 podman[213616]: 2025-10-08 09:59:45.791281423 +0000 UTC m=+0.177642701 container init 2a68b9f1bcb66211021ed9b4fd46add9bf3082d3ff8f1593df68d96a304a7aa2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True)
Oct  8 05:59:45 np0005475493 podman[213616]: 2025-10-08 09:59:45.798141377 +0000 UTC m=+0.184502625 container start 2a68b9f1bcb66211021ed9b4fd46add9bf3082d3ff8f1593df68d96a304a7aa2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  8 05:59:45 np0005475493 bash[213616]: 2a68b9f1bcb66211021ed9b4fd46add9bf3082d3ff8f1593df68d96a304a7aa2
Oct  8 05:59:45 np0005475493 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.uynkmx for 787292cc-8154-50c4-9e00-e9be3e817149.
Oct  8 05:59:45 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 09:59:45 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Oct  8 05:59:45 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 09:59:45 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Oct  8 05:59:45 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 09:59:45 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Oct  8 05:59:45 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 09:59:45 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Oct  8 05:59:45 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 09:59:45 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Oct  8 05:59:45 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 09:59:45 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Oct  8 05:59:45 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 09:59:45 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Oct  8 05:59:45 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 09:59:45 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  8 05:59:46 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:59:46 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:59:46 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:59:46.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:59:46 np0005475493 python3.9[213801]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/servercert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:59:46 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v445: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct  8 05:59:46 np0005475493 python3.9[213953]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/serverkey.pem group=root mode=0600 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:59:47 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:59:47.034Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 05:59:47 np0005475493 python3.9[214106]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/clientcert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:59:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] Optimize plan auto_2025-10-08_09:59:47
Oct  8 05:59:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  8 05:59:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] do_upmap
Oct  8 05:59:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] pools ['volumes', 'backups', '.nfs', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'images', '.rgw.root', '.mgr', 'vms', 'default.rgw.control', 'default.rgw.log', 'default.rgw.meta']
Oct  8 05:59:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] prepared 0/10 upmap changes
Oct  8 05:59:47 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:59:47 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:59:47 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:59:47.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:59:47 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  8 05:59:47 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  8 05:59:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] _maybe_adjust
Oct  8 05:59:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 05:59:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  8 05:59:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 05:59:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 05:59:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 05:59:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 05:59:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 05:59:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 05:59:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 05:59:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 05:59:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 05:59:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  8 05:59:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 05:59:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 05:59:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 05:59:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct  8 05:59:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 05:59:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  8 05:59:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 05:59:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 05:59:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 05:59:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  8 05:59:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 05:59:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  8 05:59:47 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 05:59:47 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 05:59:47 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  8 05:59:47 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  8 05:59:47 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  8 05:59:47 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  8 05:59:47 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  8 05:59:48 np0005475493 python3.9[214258]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/clientkey.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:59:48 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:59:48 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:59:48 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:59:48.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:59:48 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 05:59:48 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 05:59:48 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 05:59:48 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 05:59:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  8 05:59:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  8 05:59:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  8 05:59:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  8 05:59:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  8 05:59:48 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 05:59:48 np0005475493 python3.9[214411]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/CA/cacert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:59:48 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v446: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct  8 05:59:48 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:59:48.908Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 05:59:49 np0005475493 python3.9[214564]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:59:49 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:59:49 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:59:49 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:59:49.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:59:50 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:59:50 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:59:50 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:59:50.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:59:50 np0005475493 python3.9[214717]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:59:50 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v447: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Oct  8 05:59:50 np0005475493 python3.9[214869]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:59:51 np0005475493 python3.9[215022]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:59:51 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:59:51 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:59:51 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:59:51.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:59:51 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 09:59:51 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  8 05:59:51 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 09:59:51 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 05:59:51 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 09:59:51 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  8 05:59:51 np0005475493 podman[215146]: 2025-10-08 09:59:51.97414355 +0000 UTC m=+0.125060381 container health_status 750e81e9592502f2fa35d047c8ae9bd75eb38ed415148db558454f5281397e7f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Oct  8 05:59:52 np0005475493 python3.9[215192]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/ca-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:59:52 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:59:52 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000035s ======
Oct  8 05:59:52 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:59:52.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000035s
Oct  8 05:59:52 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-haproxy-nfs-cephfs-compute-0-cwhopp[96588]: [WARNING] 280/095952 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct  8 05:59:52 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v448: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Oct  8 05:59:53 np0005475493 python3.9[215354]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtlogd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  8 05:59:53 np0005475493 systemd[1]: Reloading.
Oct  8 05:59:53 np0005475493 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  8 05:59:53 np0005475493 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  8 05:59:53 np0005475493 systemd[1]: Starting libvirt logging daemon socket...
Oct  8 05:59:53 np0005475493 systemd[1]: Listening on libvirt logging daemon socket.
Oct  8 05:59:53 np0005475493 systemd[1]: Starting libvirt logging daemon admin socket...
Oct  8 05:59:53 np0005475493 systemd[1]: Listening on libvirt logging daemon admin socket.
Oct  8 05:59:53 np0005475493 systemd[1]: Starting libvirt logging daemon...
Oct  8 05:59:53 np0005475493 systemd[1]: Started libvirt logging daemon.
Oct  8 05:59:53 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 05:59:53 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:59:53 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:59:53 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:59:53.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:59:54 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:59:54 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:59:54 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:59:54.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:59:54 np0005475493 python3.9[215550]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtnodedevd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  8 05:59:54 np0005475493 systemd[1]: Reloading.
Oct  8 05:59:54 np0005475493 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  8 05:59:54 np0005475493 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  8 05:59:54 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v449: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 1 op/s
Oct  8 05:59:54 np0005475493 systemd[1]: Starting libvirt nodedev daemon socket...
Oct  8 05:59:54 np0005475493 systemd[1]: Listening on libvirt nodedev daemon socket.
Oct  8 05:59:54 np0005475493 systemd[1]: Starting libvirt nodedev daemon admin socket...
Oct  8 05:59:54 np0005475493 systemd[1]: Starting libvirt nodedev daemon read-only socket...
Oct  8 05:59:54 np0005475493 systemd[1]: Listening on libvirt nodedev daemon admin socket.
Oct  8 05:59:54 np0005475493 systemd[1]: Listening on libvirt nodedev daemon read-only socket.
Oct  8 05:59:54 np0005475493 systemd[1]: Starting libvirt nodedev daemon...
Oct  8 05:59:54 np0005475493 systemd[1]: Started libvirt nodedev daemon.
Oct  8 05:59:55 np0005475493 systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs...
Oct  8 05:59:55 np0005475493 systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs.
Oct  8 05:59:55 np0005475493 systemd[1]: Created slice Slice /system/dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged.
Oct  8 05:59:55 np0005475493 systemd[1]: Started dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service.
Oct  8 05:59:55 np0005475493 python3.9[215767]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtproxyd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  8 05:59:55 np0005475493 systemd[1]: Reloading.
Oct  8 05:59:55 np0005475493 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  8 05:59:55 np0005475493 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  8 05:59:55 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:09:59:55] "GET /metrics HTTP/1.1" 200 48344 "" "Prometheus/2.51.0"
Oct  8 05:59:55 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:09:59:55] "GET /metrics HTTP/1.1" 200 48344 "" "Prometheus/2.51.0"
Oct  8 05:59:55 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:59:55 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:59:55 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:59:55.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:59:55 np0005475493 systemd[1]: Starting libvirt proxy daemon admin socket...
Oct  8 05:59:55 np0005475493 systemd[1]: Starting libvirt proxy daemon read-only socket...
Oct  8 05:59:55 np0005475493 systemd[1]: Listening on libvirt proxy daemon admin socket.
Oct  8 05:59:55 np0005475493 systemd[1]: Listening on libvirt proxy daemon read-only socket.
Oct  8 05:59:55 np0005475493 systemd[1]: Starting libvirt proxy daemon...
Oct  8 05:59:55 np0005475493 systemd[1]: Started libvirt proxy daemon.
Oct  8 05:59:56 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 09:59:55 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  8 05:59:56 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 09:59:55 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  8 05:59:56 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 09:59:55 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 05:59:56 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 09:59:56 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  8 05:59:56 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:59:56 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000035s ======
Oct  8 05:59:56 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:59:56.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000035s
Oct  8 05:59:56 np0005475493 setroubleshoot[215690]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l 6bfffd59-0a64-4f0c-b98b-fdb9bc299a30
Oct  8 05:59:56 np0005475493 setroubleshoot[215690]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.#012#012*****  Plugin dac_override (91.4 confidence) suggests   **********************#012#012If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system#012Then turn on full auditing to get path information about the offending file and generate the error again.#012Do#012#012Turn on full auditing#012# auditctl -w /etc/shadow -p w#012Try to recreate AVC. Then execute#012# ausearch -m avc -ts recent#012If you see PATH record check ownership/permissions on file, and fix it,#012otherwise report as a bugzilla.#012#012*****  Plugin catchall (9.59 confidence) suggests   **************************#012#012If you believe that virtlogd should have the dac_read_search capability by default.#012Then you should report this as a bug.#012You can generate a local policy module to allow this access.#012Do#012allow this access for now by executing:#012# ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd#012# semodule -X 300 -i my-virtlogd.pp#012
Oct  8 05:59:56 np0005475493 setroubleshoot[215690]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l 6bfffd59-0a64-4f0c-b98b-fdb9bc299a30
Oct  8 05:59:56 np0005475493 setroubleshoot[215690]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.#012#012*****  Plugin dac_override (91.4 confidence) suggests   **********************#012#012If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system#012Then turn on full auditing to get path information about the offending file and generate the error again.#012Do#012#012Turn on full auditing#012# auditctl -w /etc/shadow -p w#012Try to recreate AVC. Then execute#012# ausearch -m avc -ts recent#012If you see PATH record check ownership/permissions on file, and fix it,#012otherwise report as a bugzilla.#012#012*****  Plugin catchall (9.59 confidence) suggests   **************************#012#012If you believe that virtlogd should have the dac_read_search capability by default.#012Then you should report this as a bug.#012You can generate a local policy module to allow this access.#012Do#012allow this access for now by executing:#012# ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd#012# semodule -X 300 -i my-virtlogd.pp#012
Oct  8 05:59:56 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v450: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Oct  8 05:59:56 np0005475493 python3.9[215988]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtqemud.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  8 05:59:56 np0005475493 systemd[1]: Reloading.
Oct  8 05:59:56 np0005475493 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  8 05:59:56 np0005475493 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  8 05:59:57 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:59:57.034Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct  8 05:59:57 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:59:57.037Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct  8 05:59:57 np0005475493 systemd[1]: Listening on libvirt locking daemon socket.
Oct  8 05:59:57 np0005475493 systemd[1]: Starting libvirt QEMU daemon socket...
Oct  8 05:59:57 np0005475493 systemd[1]: Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Oct  8 05:59:57 np0005475493 systemd[1]: Starting Virtual Machine and Container Registration Service...
Oct  8 05:59:57 np0005475493 systemd[1]: Listening on libvirt QEMU daemon socket.
Oct  8 05:59:57 np0005475493 systemd[1]: Starting libvirt QEMU daemon admin socket...
Oct  8 05:59:57 np0005475493 systemd[1]: Starting libvirt QEMU daemon read-only socket...
Oct  8 05:59:57 np0005475493 systemd[1]: Listening on libvirt QEMU daemon admin socket.
Oct  8 05:59:57 np0005475493 systemd[1]: Listening on libvirt QEMU daemon read-only socket.
Oct  8 05:59:57 np0005475493 systemd[1]: Started Virtual Machine and Container Registration Service.
Oct  8 05:59:57 np0005475493 systemd[1]: Starting libvirt QEMU daemon...
Oct  8 05:59:57 np0005475493 systemd[1]: Started libvirt QEMU daemon.
Oct  8 05:59:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:59:57.398 163175 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  8 05:59:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:59:57.399 163175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  8 05:59:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 09:59:57.399 163175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  8 05:59:57 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:59:57 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:59:57 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:59:57.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:59:57 np0005475493 python3.9[216203]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtsecretd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  8 05:59:57 np0005475493 systemd[1]: Reloading.
Oct  8 05:59:58 np0005475493 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  8 05:59:58 np0005475493 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  8 05:59:58 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:59:58 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:59:58 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:09:59:58.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 05:59:58 np0005475493 systemd[1]: Starting libvirt secret daemon socket...
Oct  8 05:59:58 np0005475493 systemd[1]: Listening on libvirt secret daemon socket.
Oct  8 05:59:58 np0005475493 systemd[1]: Starting libvirt secret daemon admin socket...
Oct  8 05:59:58 np0005475493 systemd[1]: Starting libvirt secret daemon read-only socket...
Oct  8 05:59:58 np0005475493 systemd[1]: Listening on libvirt secret daemon admin socket.
Oct  8 05:59:58 np0005475493 systemd[1]: Listening on libvirt secret daemon read-only socket.
Oct  8 05:59:58 np0005475493 systemd[1]: Starting libvirt secret daemon...
Oct  8 05:59:58 np0005475493 systemd[1]: Started libvirt secret daemon.
Oct  8 05:59:58 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 05:59:58 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v451: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Oct  8 05:59:58 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 09:59:58 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  8 05:59:58 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 09:59:58 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  8 05:59:58 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 09:59:58 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 05:59:58 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T09:59:58.909Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 05:59:59 np0005475493 podman[216387]: 2025-10-08 09:59:59.35367864 +0000 UTC m=+0.053775400 container health_status 96c17e36c3e57b043a764fc4d64e20610b8683e4881cc1857643eca7aeb37784 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct  8 05:59:59 np0005475493 python3.9[216434]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 05:59:59 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 05:59:59 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 05:59:59 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:09:59:59.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:00:00 np0005475493 ceph-mon[73572]: log_channel(cluster) log [WRN] : Health detail: HEALTH_WARN 1 failed cephadm daemon(s)
Oct  8 06:00:00 np0005475493 ceph-mon[73572]: log_channel(cluster) log [WRN] : [WRN] CEPHADM_FAILED_DAEMON: 1 failed cephadm daemon(s)
Oct  8 06:00:00 np0005475493 ceph-mon[73572]: log_channel(cluster) log [WRN] :     daemon nfs.cephfs.2.0.compute-0.uynkmx on compute-0 is in unknown state
Oct  8 06:00:00 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:00:00 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:00:00 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:00:00.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:00:00 np0005475493 python3.9[216587]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.conf'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct  8 06:00:00 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v452: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 682 B/s wr, 2 op/s
Oct  8 06:00:00 np0005475493 ceph-mon[73572]: Health detail: HEALTH_WARN 1 failed cephadm daemon(s)
Oct  8 06:00:00 np0005475493 ceph-mon[73572]: [WRN] CEPHADM_FAILED_DAEMON: 1 failed cephadm daemon(s)
Oct  8 06:00:00 np0005475493 ceph-mon[73572]:    daemon nfs.cephfs.2.0.compute-0.uynkmx on compute-0 is in unknown state
Oct  8 06:00:01 np0005475493 python3.9[216739]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail;#012echo ceph#012awk -F '=' '/fsid/ {print $2}' /var/lib/openstack/config/ceph/ceph.conf | xargs#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  8 06:00:01 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:00:01 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:00:01 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:00:01.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:00:01 np0005475493 python3.9[216894]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.keyring'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct  8 06:00:02 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:00:02 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:00:02 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:00:02.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:00:02 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v453: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 597 B/s wr, 2 op/s
Oct  8 06:00:02 np0005475493 python3.9[217047]: ansible-ansible.legacy.stat Invoked with path=/tmp/secret.xml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  8 06:00:02 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  8 06:00:02 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  8 06:00:03 np0005475493 python3.9[217169]: ansible-ansible.legacy.copy Invoked with dest=/tmp/secret.xml mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1759917602.3965933-3359-76328881645627/.source.xml follow=False _original_basename=secret.xml.j2 checksum=d427a8b5e6de2d31449678af6b172a3fb9e01a89 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 06:00:03 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:00:03 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:00:03 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000035s ======
Oct  8 06:00:03 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:00:03.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000035s
Oct  8 06:00:04 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:00:04 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:00:04 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:00:04.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:00:04 np0005475493 python3.9[217347]: ansible-ansible.legacy.command Invoked with _raw_params=virsh secret-undefine 787292cc-8154-50c4-9e00-e9be3e817149#012virsh secret-define --file /tmp/secret.xml#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  8 06:00:04 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v454: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 938 B/s wr, 3 op/s
Oct  8 06:00:04 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:04 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct  8 06:00:04 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:04 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Oct  8 06:00:04 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:04 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Oct  8 06:00:04 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:04 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Oct  8 06:00:04 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:04 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Oct  8 06:00:04 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:04 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Oct  8 06:00:04 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:04 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Oct  8 06:00:04 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:04 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct  8 06:00:04 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:04 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct  8 06:00:04 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:04 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct  8 06:00:04 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:04 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Oct  8 06:00:04 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:04 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct  8 06:00:04 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:04 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Oct  8 06:00:04 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:04 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Oct  8 06:00:04 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:04 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Oct  8 06:00:04 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:04 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Oct  8 06:00:04 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:04 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Oct  8 06:00:04 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:04 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Oct  8 06:00:04 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:04 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Oct  8 06:00:04 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:04 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Oct  8 06:00:04 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:04 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Oct  8 06:00:04 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:04 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Oct  8 06:00:04 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:04 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Oct  8 06:00:04 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:04 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Oct  8 06:00:04 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:04 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Oct  8 06:00:04 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:04 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Oct  8 06:00:04 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:04 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Oct  8 06:00:04 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:04 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  8 06:00:05 np0005475493 python3.9[217521]: ansible-ansible.builtin.file Invoked with path=/tmp/secret.xml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 06:00:05 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:05 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3c04000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:00:05 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:05 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3bf8001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:00:05 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:00:05] "GET /metrics HTTP/1.1" 200 48348 "" "Prometheus/2.51.0"
Oct  8 06:00:05 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:00:05] "GET /metrics HTTP/1.1" 200 48348 "" "Prometheus/2.51.0"
Oct  8 06:00:05 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:00:05 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:00:05 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:00:05.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:00:06 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:00:06 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:00:06 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:00:06.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:00:06 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:06 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3be0000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:00:06 np0005475493 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Deactivated successfully.
Oct  8 06:00:06 np0005475493 systemd[1]: setroubleshootd.service: Deactivated successfully.
Oct  8 06:00:06 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v455: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 597 B/s wr, 2 op/s
Oct  8 06:00:07 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:00:07.039Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct  8 06:00:07 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:00:07.039Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct  8 06:00:07 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:00:07.040Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct  8 06:00:07 np0005475493 python3.9[217990]: ansible-ansible.legacy.copy Invoked with dest=/etc/ceph/ceph.conf group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/config/ceph/ceph.conf backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 06:00:07 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-haproxy-nfs-cephfs-compute-0-cwhopp[96588]: [WARNING] 280/100007 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct  8 06:00:07 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:07 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3bdc000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:00:07 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:07 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3be8000fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:00:07 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:00:07 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000034s ======
Oct  8 06:00:07 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:00:07.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct  8 06:00:07 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:07 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  8 06:00:07 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:07 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 06:00:08 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:00:08 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:00:08 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:00:08.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:00:08 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:08 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3bf8001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:00:08 np0005475493 python3.9[218143]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/libvirt.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  8 06:00:08 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:00:08 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v456: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 597 B/s wr, 2 op/s
Oct  8 06:00:08 np0005475493 python3.9[218266]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/libvirt.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1759917607.8698964-3524-111120123926551/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=5ca83b1310a74c5e48c4c3d4640e1cb8fdac1061 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 06:00:08 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:00:08.911Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:00:09 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:09 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3be00016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:00:09 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:09 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3bdc0016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:00:09 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:00:09 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:00:09 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:00:09.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:00:10 np0005475493 python3.9[218419]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 06:00:10 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:00:10 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:00:10 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:00:10.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:00:10 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:10 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3be8001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:00:10 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v457: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.5 KiB/s rd, 1.3 KiB/s wr, 5 op/s
Oct  8 06:00:10 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:10 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct  8 06:00:10 np0005475493 python3.9[218572]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  8 06:00:11 np0005475493 python3.9[218651]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 06:00:11 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:11 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3bf8001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:00:11 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:11 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3be00016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:00:11 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:00:11 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:00:11 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:00:11.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:00:12 np0005475493 python3.9[218804]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  8 06:00:12 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:00:12 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000034s ======
Oct  8 06:00:12 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:00:12.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct  8 06:00:12 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:12 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3bdc0016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:00:12 np0005475493 python3.9[218882]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.xwgqcnpk recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 06:00:12 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v458: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 1.1 KiB/s wr, 3 op/s
Oct  8 06:00:13 np0005475493 python3.9[219035]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  8 06:00:13 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:00:13 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:13 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3be8001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:00:13 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:13 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3bf8001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:00:13 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:00:13 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:00:13 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:00:13.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:00:13 np0005475493 python3.9[219113]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 06:00:14 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:00:14 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:00:14 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:00:14.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:00:14 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:14 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3be00016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:00:14 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-haproxy-nfs-cephfs-compute-0-cwhopp[96588]: [WARNING] 280/100014 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct  8 06:00:14 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v459: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 1.1 KiB/s wr, 4 op/s
Oct  8 06:00:14 np0005475493 python3.9[219266]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  8 06:00:15 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:15 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3bdc0016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:00:15 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:00:15] "GET /metrics HTTP/1.1" 200 48348 "" "Prometheus/2.51.0"
Oct  8 06:00:15 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:00:15] "GET /metrics HTTP/1.1" 200 48348 "" "Prometheus/2.51.0"
Oct  8 06:00:15 np0005475493 python3[219420]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Oct  8 06:00:15 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:15 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3be8001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:00:15 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:00:15 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:00:15 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:00:15.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:00:16 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:00:16 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:00:16 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:00:16.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:00:16 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:16 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3bf8001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:00:16 np0005475493 python3.9[219573]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  8 06:00:16 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v460: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 767 B/s wr, 3 op/s
Oct  8 06:00:16 np0005475493 python3.9[219651]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 06:00:17 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:00:17.040Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct  8 06:00:17 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:00:17.041Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct  8 06:00:17 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:00:17.041Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct  8 06:00:17 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:17 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3be0002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:00:17 np0005475493 python3.9[219804]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  8 06:00:17 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:17 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3bdc002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:00:17 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:00:17 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:00:17 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:00:17.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:00:17 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  8 06:00:17 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  8 06:00:17 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 06:00:17 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 06:00:18 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:00:18 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:00:18 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:00:18.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:00:18 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 06:00:18 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 06:00:18 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 06:00:18 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 06:00:18 np0005475493 python3.9[219883]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 06:00:18 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:18 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3be8002f50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:00:18 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:00:18 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v461: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 767 B/s wr, 3 op/s
Oct  8 06:00:18 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:00:18.911Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:00:18 np0005475493 python3.9[220035]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  8 06:00:19 np0005475493 python3.9[220114]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 06:00:19 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:19 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3bf8001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:00:19 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:19 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3be0002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:00:19 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:00:19 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000034s ======
Oct  8 06:00:19 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:00:19.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct  8 06:00:20 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:00:20 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000034s ======
Oct  8 06:00:20 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:00:20.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct  8 06:00:20 np0005475493 python3.9[220267]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  8 06:00:20 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:20 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3bdc002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:00:20 np0005475493 python3.9[220345]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 06:00:20 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v462: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 767 B/s wr, 3 op/s
Oct  8 06:00:21 np0005475493 python3.9[220498]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  8 06:00:21 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:21 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3be8002f50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:00:21 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:21 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3bf8001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:00:21 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:00:21 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000035s ======
Oct  8 06:00:21 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:00:21.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000035s
Oct  8 06:00:22 np0005475493 python3.9[220623]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759917620.9783435-3899-14575891026483/.source.nft follow=False _original_basename=ruleset.j2 checksum=ac3ce8ce2d33fa5fe0a79b0c811c97734ce43fa5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 06:00:22 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:00:22 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:00:22 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:00:22.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:00:22 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:22 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3be0002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:00:22 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v463: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 B/s wr, 0 op/s
Oct  8 06:00:22 np0005475493 podman[220748]: 2025-10-08 10:00:22.792969858 +0000 UTC m=+0.104592761 container health_status 750e81e9592502f2fa35d047c8ae9bd75eb38ed415148db558454f5281397e7f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  8 06:00:22 np0005475493 python3.9[220792]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 06:00:23 np0005475493 python3.9[220955]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  8 06:00:23 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:00:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:23 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3bdc002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:00:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:23 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3be8002f50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:00:23 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:00:23 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:00:23 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:00:23.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:00:24 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:00:24 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:00:24 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:00:24.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:00:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:24 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3bf8001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:00:24 np0005475493 python3.9[221136]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 06:00:24 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v464: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Oct  8 06:00:25 np0005475493 python3.9[221288]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  8 06:00:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:25 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3be0003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:00:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:00:25] "GET /metrics HTTP/1.1" 200 48347 "" "Prometheus/2.51.0"
Oct  8 06:00:25 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:00:25] "GET /metrics HTTP/1.1" 200 48347 "" "Prometheus/2.51.0"
Oct  8 06:00:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:25 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3bdc003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:00:25 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:00:25 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:00:25 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:00:25.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:00:26 np0005475493 python3.9[221442]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  8 06:00:26 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:00:26 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000035s ======
Oct  8 06:00:26 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:00:26.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000035s
Oct  8 06:00:26 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:26 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3be8004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:00:26 np0005475493 python3.9[221597]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  8 06:00:26 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v465: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct  8 06:00:26 np0005475493 ceph-mon[73572]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #33. Immutable memtables: 0.
Oct  8 06:00:26 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:00:26.799775) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  8 06:00:26 np0005475493 ceph-mon[73572]: rocksdb: [db/flush_job.cc:856] [default] [JOB 13] Flushing memtable with next log file: 33
Oct  8 06:00:26 np0005475493 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759917626799851, "job": 13, "event": "flush_started", "num_memtables": 1, "num_entries": 4199, "num_deletes": 502, "total_data_size": 8625206, "memory_usage": 8754368, "flush_reason": "Manual Compaction"}
Oct  8 06:00:26 np0005475493 ceph-mon[73572]: rocksdb: [db/flush_job.cc:885] [default] [JOB 13] Level-0 flush table #34: started
Oct  8 06:00:26 np0005475493 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759917626845670, "cf_name": "default", "job": 13, "event": "table_file_creation", "file_number": 34, "file_size": 8359867, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13198, "largest_seqno": 17396, "table_properties": {"data_size": 8342098, "index_size": 12023, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 4677, "raw_key_size": 36519, "raw_average_key_size": 19, "raw_value_size": 8305374, "raw_average_value_size": 4477, "num_data_blocks": 525, "num_entries": 1855, "num_filter_entries": 1855, "num_deletions": 502, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759917182, "oldest_key_time": 1759917182, "file_creation_time": 1759917626, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fe81d9b-468a-4413-adf1-4e4bd83383d4", "db_session_id": "KN4HYS7VUCE6V85JIQOU", "orig_file_number": 34, "seqno_to_time_mapping": "N/A"}}
Oct  8 06:00:26 np0005475493 ceph-mon[73572]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 13] Flush lasted 45893 microseconds, and 24175 cpu microseconds.
Oct  8 06:00:26 np0005475493 ceph-mon[73572]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  8 06:00:26 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:00:26.845706) [db/flush_job.cc:967] [default] [JOB 13] Level-0 flush table #34: 8359867 bytes OK
Oct  8 06:00:26 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:00:26.845722) [db/memtable_list.cc:519] [default] Level-0 commit table #34 started
Oct  8 06:00:26 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:00:26.847189) [db/memtable_list.cc:722] [default] Level-0 commit table #34: memtable #1 done
Oct  8 06:00:26 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:00:26.847201) EVENT_LOG_v1 {"time_micros": 1759917626847197, "job": 13, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  8 06:00:26 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:00:26.847215) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  8 06:00:26 np0005475493 ceph-mon[73572]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 13] Try to delete WAL files size 8608427, prev total WAL file size 8608427, number of live WAL files 2.
Oct  8 06:00:26 np0005475493 ceph-mon[73572]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000030.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  8 06:00:26 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:00:26.849112) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031303034' seq:72057594037927935, type:22 .. '7061786F730031323536' seq:0, type:0; will stop at (end)
Oct  8 06:00:26 np0005475493 ceph-mon[73572]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 14] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  8 06:00:26 np0005475493 ceph-mon[73572]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 13 Base level 0, inputs: [34(8163KB)], [32(12MB)]
Oct  8 06:00:26 np0005475493 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759917626849201, "job": 14, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [34], "files_L6": [32], "score": -1, "input_data_size": 20989986, "oldest_snapshot_seqno": -1}
Oct  8 06:00:26 np0005475493 ceph-mon[73572]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 14] Generated table #35: 5061 keys, 15487052 bytes, temperature: kUnknown
Oct  8 06:00:26 np0005475493 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759917626918324, "cf_name": "default", "job": 14, "event": "table_file_creation", "file_number": 35, "file_size": 15487052, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 15448451, "index_size": 24859, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12677, "raw_key_size": 126580, "raw_average_key_size": 25, "raw_value_size": 15351800, "raw_average_value_size": 3033, "num_data_blocks": 1043, "num_entries": 5061, "num_filter_entries": 5061, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759916581, "oldest_key_time": 0, "file_creation_time": 1759917626, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fe81d9b-468a-4413-adf1-4e4bd83383d4", "db_session_id": "KN4HYS7VUCE6V85JIQOU", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Oct  8 06:00:26 np0005475493 ceph-mon[73572]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  8 06:00:26 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:00:26.918689) [db/compaction/compaction_job.cc:1663] [default] [JOB 14] Compacted 1@0 + 1@6 files to L6 => 15487052 bytes
Oct  8 06:00:26 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:00:26.920302) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 303.2 rd, 223.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(8.0, 12.0 +0.0 blob) out(14.8 +0.0 blob), read-write-amplify(4.4) write-amplify(1.9) OK, records in: 6083, records dropped: 1022 output_compression: NoCompression
Oct  8 06:00:26 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:00:26.920339) EVENT_LOG_v1 {"time_micros": 1759917626920322, "job": 14, "event": "compaction_finished", "compaction_time_micros": 69230, "compaction_time_cpu_micros": 34526, "output_level": 6, "num_output_files": 1, "total_output_size": 15487052, "num_input_records": 6083, "num_output_records": 5061, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  8 06:00:26 np0005475493 ceph-mon[73572]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000034.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  8 06:00:26 np0005475493 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759917626923428, "job": 14, "event": "table_file_deletion", "file_number": 34}
Oct  8 06:00:26 np0005475493 ceph-mon[73572]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000032.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  8 06:00:26 np0005475493 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759917626928415, "job": 14, "event": "table_file_deletion", "file_number": 32}
Oct  8 06:00:26 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:00:26.848968) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  8 06:00:26 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:00:26.928506) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  8 06:00:26 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:00:26.928512) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  8 06:00:26 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:00:26.928514) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  8 06:00:26 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:00:26.928515) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  8 06:00:26 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:00:26.928517) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  8 06:00:27 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:00:27.042Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct  8 06:00:27 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:00:27.042Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:00:27 np0005475493 python3.9[221753]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 06:00:27 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:27 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3bf8001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:00:27 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:27 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3be0003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:00:27 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:00:27 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:00:27 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:00:27.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:00:28 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:00:28 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000035s ======
Oct  8 06:00:28 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:00:28.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000035s
Oct  8 06:00:28 np0005475493 python3.9[221906]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  8 06:00:28 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:28 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3bdc003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:00:28 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:00:28 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v466: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct  8 06:00:28 np0005475493 python3.9[222029]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759917627.6770072-4115-171306775214405/.source.target follow=False _original_basename=edpm_libvirt.target checksum=13035a1aa0f414c677b14be9a5a363b6623d393c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 06:00:28 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:00:28.912Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:00:29 np0005475493 python3.9[222182]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt_guests.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  8 06:00:29 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:29 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3be8004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:00:29 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:29 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3bf8001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:00:29 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:00:29 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:00:29 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:00:29.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:00:29 np0005475493 podman[222277]: 2025-10-08 10:00:29.835206896 +0000 UTC m=+0.049634331 container health_status 96c17e36c3e57b043a764fc4d64e20610b8683e4881cc1857643eca7aeb37784 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct  8 06:00:30 np0005475493 python3.9[222324]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt_guests.service mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759917628.9816735-4160-102771871899239/.source.service follow=False _original_basename=edpm_libvirt_guests.service checksum=db83430a42fc2ccfd6ed8b56ebf04f3dff9cd0cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 06:00:30 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:00:30 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:00:30 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:00:30.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:00:30 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:30 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3be0003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:00:30 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v467: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct  8 06:00:30 np0005475493 python3.9[222478]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virt-guest-shutdown.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  8 06:00:31 np0005475493 python3.9[222602]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virt-guest-shutdown.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759917630.3207877-4205-107230438104499/.source.target follow=False _original_basename=virt-guest-shutdown.target checksum=49ca149619c596cbba877418629d2cf8f7b0f5cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 06:00:31 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:31 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3bdc003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:00:31 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:31 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3bf8003cc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:00:31 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:00:31 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:00:31 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:00:31.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:00:32 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:00:32 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:00:32 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:00:32.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:00:32 np0005475493 python3.9[222755]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt.target state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  8 06:00:32 np0005475493 systemd[1]: Reloading.
Oct  8 06:00:32 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:32 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3be8004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:00:32 np0005475493 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  8 06:00:32 np0005475493 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  8 06:00:32 np0005475493 systemd[1]: Reached target edpm_libvirt.target.
Oct  8 06:00:32 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v468: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct  8 06:00:32 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  8 06:00:32 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  8 06:00:33 np0005475493 python3.9[222948]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt_guests daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Oct  8 06:00:33 np0005475493 systemd[1]: Reloading.
Oct  8 06:00:33 np0005475493 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  8 06:00:33 np0005475493 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  8 06:00:33 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:00:33 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:33 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3be0003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:00:33 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:33 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3bdc003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:00:33 np0005475493 systemd[1]: Reloading.
Oct  8 06:00:33 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:00:33 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:00:33 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:00:33.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:00:33 np0005475493 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  8 06:00:33 np0005475493 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  8 06:00:34 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:00:34 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:00:34 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:00:34.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:00:34 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:34 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3bf8003cc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:00:34 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v469: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Oct  8 06:00:34 np0005475493 systemd[1]: session-54.scope: Deactivated successfully.
Oct  8 06:00:34 np0005475493 systemd[1]: session-54.scope: Consumed 3min 26.799s CPU time.
Oct  8 06:00:34 np0005475493 systemd-logind[798]: Session 54 logged out. Waiting for processes to exit.
Oct  8 06:00:34 np0005475493 systemd-logind[798]: Removed session 54.
Oct  8 06:00:35 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:35 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3be8004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:00:35 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:00:35] "GET /metrics HTTP/1.1" 200 48342 "" "Prometheus/2.51.0"
Oct  8 06:00:35 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:00:35] "GET /metrics HTTP/1.1" 200 48342 "" "Prometheus/2.51.0"
Oct  8 06:00:35 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:35 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3be8004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:00:35 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:00:35 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:00:35 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:00:35.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:00:36 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:00:36 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:00:36 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:00:36.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:00:36 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:36 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3bdc003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:00:36 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v470: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct  8 06:00:37 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:00:37.043Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:00:37 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:37 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3bf8003cc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:00:37 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:37 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3be0003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:00:37 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:00:37 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:00:37 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:00:37.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:00:38 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:00:38 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:00:38 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:00:38.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:00:38 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:38 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3bd4000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:00:38 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:00:38 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v471: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct  8 06:00:38 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:00:38.913Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct  8 06:00:38 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:00:38.914Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct  8 06:00:38 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:00:38.914Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:00:39 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-haproxy-nfs-cephfs-compute-0-cwhopp[96588]: [WARNING] 280/100039 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 1ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct  8 06:00:39 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:39 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3bdc003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:00:39 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:39 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3c04000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:00:39 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:00:39 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:00:39 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:00:39.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:00:40 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:00:40 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:00:40 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:00:40.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:00:40 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:40 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3be0003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:00:40 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v472: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct  8 06:00:40 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  8 06:00:40 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  8 06:00:40 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct  8 06:00:40 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  8 06:00:40 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v473: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 303 B/s rd, 0 op/s
Oct  8 06:00:40 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v474: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 377 B/s rd, 0 op/s
Oct  8 06:00:40 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct  8 06:00:40 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:00:40 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct  8 06:00:40 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:00:40 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct  8 06:00:40 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  8 06:00:40 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct  8 06:00:40 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  8 06:00:40 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  8 06:00:40 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  8 06:00:40 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  8 06:00:40 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:00:40 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:00:40 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  8 06:00:41 np0005475493 ceph-osd[81751]: bluestore.MempoolThread fragmentation_score=0.000031 took=0.000080s
Oct  8 06:00:41 np0005475493 podman[223223]: 2025-10-08 10:00:41.370177462 +0000 UTC m=+0.042812228 container create a672a82230da747907f40af503ad408f09f09425acb60239ce162640586ea14d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_proskuriakova, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Oct  8 06:00:41 np0005475493 systemd[1]: Started libpod-conmon-a672a82230da747907f40af503ad408f09f09425acb60239ce162640586ea14d.scope.
Oct  8 06:00:41 np0005475493 systemd[1]: Started libcrun container.
Oct  8 06:00:41 np0005475493 podman[223223]: 2025-10-08 10:00:41.352744511 +0000 UTC m=+0.025379307 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 06:00:41 np0005475493 podman[223223]: 2025-10-08 10:00:41.45619673 +0000 UTC m=+0.128831546 container init a672a82230da747907f40af503ad408f09f09425acb60239ce162640586ea14d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_proskuriakova, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  8 06:00:41 np0005475493 podman[223223]: 2025-10-08 10:00:41.462863644 +0000 UTC m=+0.135498410 container start a672a82230da747907f40af503ad408f09f09425acb60239ce162640586ea14d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_proskuriakova, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Oct  8 06:00:41 np0005475493 podman[223223]: 2025-10-08 10:00:41.466861133 +0000 UTC m=+0.139495949 container attach a672a82230da747907f40af503ad408f09f09425acb60239ce162640586ea14d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_proskuriakova, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default)
Oct  8 06:00:41 np0005475493 vigilant_proskuriakova[223239]: 167 167
Oct  8 06:00:41 np0005475493 systemd[1]: libpod-a672a82230da747907f40af503ad408f09f09425acb60239ce162640586ea14d.scope: Deactivated successfully.
Oct  8 06:00:41 np0005475493 podman[223223]: 2025-10-08 10:00:41.469780567 +0000 UTC m=+0.142415333 container died a672a82230da747907f40af503ad408f09f09425acb60239ce162640586ea14d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_proskuriakova, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  8 06:00:41 np0005475493 systemd[1]: var-lib-containers-storage-overlay-d27befae0dfbc73295ff6ee4e0bfad8b447e73385369f5b3e9e3b7bc6b886b7f-merged.mount: Deactivated successfully.
Oct  8 06:00:41 np0005475493 podman[223223]: 2025-10-08 10:00:41.53794742 +0000 UTC m=+0.210582196 container remove a672a82230da747907f40af503ad408f09f09425acb60239ce162640586ea14d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_proskuriakova, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Oct  8 06:00:41 np0005475493 systemd[1]: libpod-conmon-a672a82230da747907f40af503ad408f09f09425acb60239ce162640586ea14d.scope: Deactivated successfully.
Oct  8 06:00:41 np0005475493 systemd-logind[798]: New session 55 of user zuul.
Oct  8 06:00:41 np0005475493 systemd[1]: Started Session 55 of User zuul.
Oct  8 06:00:41 np0005475493 podman[223265]: 2025-10-08 10:00:41.703966682 +0000 UTC m=+0.041412903 container create d7468a467564cbca601981e73a309d104046aad1506dbeb4a82dcd8e9d79a799 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_hertz, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  8 06:00:41 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:41 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3bd40016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:00:41 np0005475493 systemd[1]: Started libpod-conmon-d7468a467564cbca601981e73a309d104046aad1506dbeb4a82dcd8e9d79a799.scope.
Oct  8 06:00:41 np0005475493 podman[223265]: 2025-10-08 10:00:41.687361318 +0000 UTC m=+0.024807569 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 06:00:41 np0005475493 systemd[1]: Started libcrun container.
Oct  8 06:00:41 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a92e72c2c5dd55512e683116adf994aa1dc567632d68c118449e6276a06aaca/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  8 06:00:41 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a92e72c2c5dd55512e683116adf994aa1dc567632d68c118449e6276a06aaca/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 06:00:41 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a92e72c2c5dd55512e683116adf994aa1dc567632d68c118449e6276a06aaca/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 06:00:41 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a92e72c2c5dd55512e683116adf994aa1dc567632d68c118449e6276a06aaca/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  8 06:00:41 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a92e72c2c5dd55512e683116adf994aa1dc567632d68c118449e6276a06aaca/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  8 06:00:41 np0005475493 podman[223265]: 2025-10-08 10:00:41.808453594 +0000 UTC m=+0.145899835 container init d7468a467564cbca601981e73a309d104046aad1506dbeb4a82dcd8e9d79a799 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_hertz, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Oct  8 06:00:41 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:41 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3bdc003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:00:41 np0005475493 podman[223265]: 2025-10-08 10:00:41.817553896 +0000 UTC m=+0.155000117 container start d7468a467564cbca601981e73a309d104046aad1506dbeb4a82dcd8e9d79a799 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_hertz, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct  8 06:00:41 np0005475493 podman[223265]: 2025-10-08 10:00:41.821841945 +0000 UTC m=+0.159288186 container attach d7468a467564cbca601981e73a309d104046aad1506dbeb4a82dcd8e9d79a799 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_hertz, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  8 06:00:41 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:00:41 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:00:41 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:00:41.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:00:42 np0005475493 awesome_hertz[223296]: --> passed data devices: 0 physical, 1 LVM
Oct  8 06:00:42 np0005475493 awesome_hertz[223296]: --> All data devices are unavailable
Oct  8 06:00:42 np0005475493 systemd[1]: libpod-d7468a467564cbca601981e73a309d104046aad1506dbeb4a82dcd8e9d79a799.scope: Deactivated successfully.
Oct  8 06:00:42 np0005475493 podman[223265]: 2025-10-08 10:00:42.167824647 +0000 UTC m=+0.505270878 container died d7468a467564cbca601981e73a309d104046aad1506dbeb4a82dcd8e9d79a799 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_hertz, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  8 06:00:42 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:00:42 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:00:42 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:00:42.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:00:42 np0005475493 systemd[1]: var-lib-containers-storage-overlay-0a92e72c2c5dd55512e683116adf994aa1dc567632d68c118449e6276a06aaca-merged.mount: Deactivated successfully.
Oct  8 06:00:42 np0005475493 podman[223265]: 2025-10-08 10:00:42.218302281 +0000 UTC m=+0.555748502 container remove d7468a467564cbca601981e73a309d104046aad1506dbeb4a82dcd8e9d79a799 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_hertz, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Oct  8 06:00:42 np0005475493 systemd[1]: libpod-conmon-d7468a467564cbca601981e73a309d104046aad1506dbeb4a82dcd8e9d79a799.scope: Deactivated successfully.
Oct  8 06:00:42 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:42 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3c04001e00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:00:42 np0005475493 python3.9[223507]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  8 06:00:42 np0005475493 podman[223553]: 2025-10-08 10:00:42.790884805 +0000 UTC m=+0.042203929 container create 12181e9c4f8d14b44f2e591efa32b7340712d8bbea02408bf87f34dfbda2aafe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_mclean, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Oct  8 06:00:42 np0005475493 systemd[1]: Started libpod-conmon-12181e9c4f8d14b44f2e591efa32b7340712d8bbea02408bf87f34dfbda2aafe.scope.
Oct  8 06:00:42 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v475: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail
Oct  8 06:00:42 np0005475493 systemd[1]: Started libcrun container.
Oct  8 06:00:42 np0005475493 podman[223553]: 2025-10-08 10:00:42.774782337 +0000 UTC m=+0.026101481 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 06:00:42 np0005475493 podman[223553]: 2025-10-08 10:00:42.883594518 +0000 UTC m=+0.134913652 container init 12181e9c4f8d14b44f2e591efa32b7340712d8bbea02408bf87f34dfbda2aafe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_mclean, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Oct  8 06:00:42 np0005475493 podman[223553]: 2025-10-08 10:00:42.890643784 +0000 UTC m=+0.141962898 container start 12181e9c4f8d14b44f2e591efa32b7340712d8bbea02408bf87f34dfbda2aafe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_mclean, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  8 06:00:42 np0005475493 podman[223553]: 2025-10-08 10:00:42.894412116 +0000 UTC m=+0.145731270 container attach 12181e9c4f8d14b44f2e591efa32b7340712d8bbea02408bf87f34dfbda2aafe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_mclean, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct  8 06:00:42 np0005475493 tender_mclean[223569]: 167 167
Oct  8 06:00:42 np0005475493 systemd[1]: libpod-12181e9c4f8d14b44f2e591efa32b7340712d8bbea02408bf87f34dfbda2aafe.scope: Deactivated successfully.
Oct  8 06:00:42 np0005475493 podman[223553]: 2025-10-08 10:00:42.898560269 +0000 UTC m=+0.149879433 container died 12181e9c4f8d14b44f2e591efa32b7340712d8bbea02408bf87f34dfbda2aafe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_mclean, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  8 06:00:42 np0005475493 systemd[1]: var-lib-containers-storage-overlay-a4560307df61da1601e80d5e32b22bf5f08e3458bbafee3ffa00c5d822a9370b-merged.mount: Deactivated successfully.
Oct  8 06:00:42 np0005475493 podman[223553]: 2025-10-08 10:00:42.952891268 +0000 UTC m=+0.204210392 container remove 12181e9c4f8d14b44f2e591efa32b7340712d8bbea02408bf87f34dfbda2aafe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_mclean, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1)
Oct  8 06:00:42 np0005475493 systemd[1]: libpod-conmon-12181e9c4f8d14b44f2e591efa32b7340712d8bbea02408bf87f34dfbda2aafe.scope: Deactivated successfully.
Oct  8 06:00:43 np0005475493 podman[223619]: 2025-10-08 10:00:43.150890778 +0000 UTC m=+0.043463839 container create 25c3774c58d7e06aba9d16d8c1a4350f3d09db5248ab2f1b1123b94abadfc0a0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_khayyam, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  8 06:00:43 np0005475493 systemd[1]: Started libpod-conmon-25c3774c58d7e06aba9d16d8c1a4350f3d09db5248ab2f1b1123b94abadfc0a0.scope.
Oct  8 06:00:43 np0005475493 podman[223619]: 2025-10-08 10:00:43.129150189 +0000 UTC m=+0.021723250 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 06:00:43 np0005475493 systemd[1]: Started libcrun container.
Oct  8 06:00:43 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a8bb4e973894ed17f2424bbdb41c5462cb6ce9aaccd468b2c0d4b4d64bd9795/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  8 06:00:43 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a8bb4e973894ed17f2424bbdb41c5462cb6ce9aaccd468b2c0d4b4d64bd9795/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 06:00:43 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a8bb4e973894ed17f2424bbdb41c5462cb6ce9aaccd468b2c0d4b4d64bd9795/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 06:00:43 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a8bb4e973894ed17f2424bbdb41c5462cb6ce9aaccd468b2c0d4b4d64bd9795/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  8 06:00:43 np0005475493 podman[223619]: 2025-10-08 10:00:43.256180647 +0000 UTC m=+0.148753768 container init 25c3774c58d7e06aba9d16d8c1a4350f3d09db5248ab2f1b1123b94abadfc0a0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_khayyam, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  8 06:00:43 np0005475493 podman[223619]: 2025-10-08 10:00:43.268760211 +0000 UTC m=+0.161333282 container start 25c3774c58d7e06aba9d16d8c1a4350f3d09db5248ab2f1b1123b94abadfc0a0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_khayyam, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct  8 06:00:43 np0005475493 podman[223619]: 2025-10-08 10:00:43.273598226 +0000 UTC m=+0.166171307 container attach 25c3774c58d7e06aba9d16d8c1a4350f3d09db5248ab2f1b1123b94abadfc0a0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_khayyam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct  8 06:00:43 np0005475493 strange_khayyam[223635]: {
Oct  8 06:00:43 np0005475493 strange_khayyam[223635]:    "1": [
Oct  8 06:00:43 np0005475493 strange_khayyam[223635]:        {
Oct  8 06:00:43 np0005475493 strange_khayyam[223635]:            "devices": [
Oct  8 06:00:43 np0005475493 strange_khayyam[223635]:                "/dev/loop3"
Oct  8 06:00:43 np0005475493 strange_khayyam[223635]:            ],
Oct  8 06:00:43 np0005475493 strange_khayyam[223635]:            "lv_name": "ceph_lv0",
Oct  8 06:00:43 np0005475493 strange_khayyam[223635]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  8 06:00:43 np0005475493 strange_khayyam[223635]:            "lv_size": "21470642176",
Oct  8 06:00:43 np0005475493 strange_khayyam[223635]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=787292cc-8154-50c4-9e00-e9be3e817149,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=85fe3e7b-5e0f-4a19-934c-310215b2e933,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct  8 06:00:43 np0005475493 strange_khayyam[223635]:            "lv_uuid": "znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj",
Oct  8 06:00:43 np0005475493 strange_khayyam[223635]:            "name": "ceph_lv0",
Oct  8 06:00:43 np0005475493 strange_khayyam[223635]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  8 06:00:43 np0005475493 strange_khayyam[223635]:            "tags": {
Oct  8 06:00:43 np0005475493 strange_khayyam[223635]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  8 06:00:43 np0005475493 strange_khayyam[223635]:                "ceph.block_uuid": "znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj",
Oct  8 06:00:43 np0005475493 strange_khayyam[223635]:                "ceph.cephx_lockbox_secret": "",
Oct  8 06:00:43 np0005475493 strange_khayyam[223635]:                "ceph.cluster_fsid": "787292cc-8154-50c4-9e00-e9be3e817149",
Oct  8 06:00:43 np0005475493 strange_khayyam[223635]:                "ceph.cluster_name": "ceph",
Oct  8 06:00:43 np0005475493 strange_khayyam[223635]:                "ceph.crush_device_class": "",
Oct  8 06:00:43 np0005475493 strange_khayyam[223635]:                "ceph.encrypted": "0",
Oct  8 06:00:43 np0005475493 strange_khayyam[223635]:                "ceph.osd_fsid": "85fe3e7b-5e0f-4a19-934c-310215b2e933",
Oct  8 06:00:43 np0005475493 strange_khayyam[223635]:                "ceph.osd_id": "1",
Oct  8 06:00:43 np0005475493 strange_khayyam[223635]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  8 06:00:43 np0005475493 strange_khayyam[223635]:                "ceph.type": "block",
Oct  8 06:00:43 np0005475493 strange_khayyam[223635]:                "ceph.vdo": "0",
Oct  8 06:00:43 np0005475493 strange_khayyam[223635]:                "ceph.with_tpm": "0"
Oct  8 06:00:43 np0005475493 strange_khayyam[223635]:            },
Oct  8 06:00:43 np0005475493 strange_khayyam[223635]:            "type": "block",
Oct  8 06:00:43 np0005475493 strange_khayyam[223635]:            "vg_name": "ceph_vg0"
Oct  8 06:00:43 np0005475493 strange_khayyam[223635]:        }
Oct  8 06:00:43 np0005475493 strange_khayyam[223635]:    ]
Oct  8 06:00:43 np0005475493 strange_khayyam[223635]: }
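The JSON block emitted by the transient container above has the shape of `ceph-volume lvm list --format json` output: a map of OSD id to a list of LV records, each carrying the backing devices and `ceph.*` tags. A minimal parsing sketch — the helper name and the abbreviated inline data are illustrative, not from the log:

```python
import json

# Abbreviated copy of the JSON logged above: OSD id -> list of LV records.
raw = '''
{
  "1": [
    {
      "devices": ["/dev/loop3"],
      "lv_path": "/dev/ceph_vg0/ceph_lv0",
      "lv_size": "21470642176",
      "tags": {
        "ceph.osd_id": "1",
        "ceph.osd_fsid": "85fe3e7b-5e0f-4a19-934c-310215b2e933",
        "ceph.type": "block"
      },
      "vg_name": "ceph_vg0"
    }
  ]
}
'''

def osd_devices(report: str) -> dict:
    """Map each OSD id to the physical devices backing its LVs."""
    out = {}
    for osd_id, lvs in json.loads(report).items():
        devs = []
        for lv in lvs:
            devs.extend(lv.get("devices", []))
        out[osd_id] = devs
    return out

print(osd_devices(raw))  # {'1': ['/dev/loop3']}
```

This is the inventory cephadm gathers per host; the `config-key set mgr/cephadm/host.compute-0.devices.0` mon command a few seconds later in the log is the mgr persisting exactly this kind of report.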
Oct  8 06:00:43 np0005475493 systemd[1]: libpod-25c3774c58d7e06aba9d16d8c1a4350f3d09db5248ab2f1b1123b94abadfc0a0.scope: Deactivated successfully.
Oct  8 06:00:43 np0005475493 podman[223619]: 2025-10-08 10:00:43.615942832 +0000 UTC m=+0.508515883 container died 25c3774c58d7e06aba9d16d8c1a4350f3d09db5248ab2f1b1123b94abadfc0a0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_khayyam, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  8 06:00:43 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:00:43 np0005475493 systemd[1]: var-lib-containers-storage-overlay-6a8bb4e973894ed17f2424bbdb41c5462cb6ce9aaccd468b2c0d4b4d64bd9795-merged.mount: Deactivated successfully.
Oct  8 06:00:43 np0005475493 podman[223619]: 2025-10-08 10:00:43.662582092 +0000 UTC m=+0.555155113 container remove 25c3774c58d7e06aba9d16d8c1a4350f3d09db5248ab2f1b1123b94abadfc0a0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_khayyam, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  8 06:00:43 np0005475493 systemd[1]: libpod-conmon-25c3774c58d7e06aba9d16d8c1a4350f3d09db5248ab2f1b1123b94abadfc0a0.scope: Deactivated successfully.
Oct  8 06:00:43 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:43 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3be0003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:00:43 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:43 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3bd40016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:00:43 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:00:43 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:00:43 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:00:43.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:00:44 np0005475493 python3.9[223858]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  8 06:00:44 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:00:44 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:00:44 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:00:44.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:00:44 np0005475493 podman[223924]: 2025-10-08 10:00:44.201871575 +0000 UTC m=+0.042657754 container create 191de1e0210538fa7caa9427ce9a6d47154464afcd70bd850f92835c8da49cc8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_shamir, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Oct  8 06:00:44 np0005475493 systemd[1]: Started libpod-conmon-191de1e0210538fa7caa9427ce9a6d47154464afcd70bd850f92835c8da49cc8.scope.
Oct  8 06:00:44 np0005475493 systemd[1]: Started libcrun container.
Oct  8 06:00:44 np0005475493 podman[223924]: 2025-10-08 10:00:44.180834138 +0000 UTC m=+0.021620347 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 06:00:44 np0005475493 podman[223924]: 2025-10-08 10:00:44.291571912 +0000 UTC m=+0.132358101 container init 191de1e0210538fa7caa9427ce9a6d47154464afcd70bd850f92835c8da49cc8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_shamir, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  8 06:00:44 np0005475493 podman[223924]: 2025-10-08 10:00:44.303429613 +0000 UTC m=+0.144215782 container start 191de1e0210538fa7caa9427ce9a6d47154464afcd70bd850f92835c8da49cc8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_shamir, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  8 06:00:44 np0005475493 podman[223924]: 2025-10-08 10:00:44.306723608 +0000 UTC m=+0.147509797 container attach 191de1e0210538fa7caa9427ce9a6d47154464afcd70bd850f92835c8da49cc8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_shamir, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  8 06:00:44 np0005475493 quirky_shamir[223969]: 167 167
Oct  8 06:00:44 np0005475493 systemd[1]: libpod-191de1e0210538fa7caa9427ce9a6d47154464afcd70bd850f92835c8da49cc8.scope: Deactivated successfully.
Oct  8 06:00:44 np0005475493 podman[223924]: 2025-10-08 10:00:44.311793392 +0000 UTC m=+0.152579591 container died 191de1e0210538fa7caa9427ce9a6d47154464afcd70bd850f92835c8da49cc8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_shamir, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Oct  8 06:00:44 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:44 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3bdc003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:00:44 np0005475493 systemd[1]: var-lib-containers-storage-overlay-85816907de03f36e3888a37cc2ac484b8be21399f188a9c340c068fe0432098a-merged.mount: Deactivated successfully.
Oct  8 06:00:44 np0005475493 podman[223924]: 2025-10-08 10:00:44.363167425 +0000 UTC m=+0.203953594 container remove 191de1e0210538fa7caa9427ce9a6d47154464afcd70bd850f92835c8da49cc8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_shamir, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  8 06:00:44 np0005475493 systemd[1]: libpod-conmon-191de1e0210538fa7caa9427ce9a6d47154464afcd70bd850f92835c8da49cc8.scope: Deactivated successfully.
Oct  8 06:00:44 np0005475493 podman[224086]: 2025-10-08 10:00:44.526989316 +0000 UTC m=+0.046258550 container create de0f96888bafec752d6060b41466b164606ce99a9899e328f201bf3587ef1e5c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_chatterjee, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Oct  8 06:00:44 np0005475493 systemd[1]: Started libpod-conmon-de0f96888bafec752d6060b41466b164606ce99a9899e328f201bf3587ef1e5c.scope.
Oct  8 06:00:44 np0005475493 podman[224086]: 2025-10-08 10:00:44.508621525 +0000 UTC m=+0.027890779 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 06:00:44 np0005475493 systemd[1]: Started libcrun container.
Oct  8 06:00:44 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e08030753090018ec54c0e00238dc3a27abf120e59b41776574df2d1f8f72c4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  8 06:00:44 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e08030753090018ec54c0e00238dc3a27abf120e59b41776574df2d1f8f72c4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 06:00:44 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e08030753090018ec54c0e00238dc3a27abf120e59b41776574df2d1f8f72c4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 06:00:44 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e08030753090018ec54c0e00238dc3a27abf120e59b41776574df2d1f8f72c4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  8 06:00:44 np0005475493 podman[224086]: 2025-10-08 10:00:44.619183253 +0000 UTC m=+0.138452517 container init de0f96888bafec752d6060b41466b164606ce99a9899e328f201bf3587ef1e5c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_chatterjee, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  8 06:00:44 np0005475493 podman[224086]: 2025-10-08 10:00:44.626220369 +0000 UTC m=+0.145489613 container start de0f96888bafec752d6060b41466b164606ce99a9899e328f201bf3587ef1e5c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_chatterjee, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  8 06:00:44 np0005475493 podman[224086]: 2025-10-08 10:00:44.642072059 +0000 UTC m=+0.161341323 container attach de0f96888bafec752d6060b41466b164606ce99a9899e328f201bf3587ef1e5c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_chatterjee, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct  8 06:00:44 np0005475493 python3.9[224102]: ansible-ansible.builtin.file Invoked with path=/etc/target setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  8 06:00:44 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v476: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 252 B/s rd, 0 op/s
Oct  8 06:00:45 np0005475493 lvm[224335]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct  8 06:00:45 np0005475493 lvm[224335]: VG ceph_vg0 finished
Oct  8 06:00:45 np0005475493 hardcore_chatterjee[224108]: {}
Oct  8 06:00:45 np0005475493 systemd[1]: libpod-de0f96888bafec752d6060b41466b164606ce99a9899e328f201bf3587ef1e5c.scope: Deactivated successfully.
Oct  8 06:00:45 np0005475493 systemd[1]: libpod-de0f96888bafec752d6060b41466b164606ce99a9899e328f201bf3587ef1e5c.scope: Consumed 1.185s CPU time.
Oct  8 06:00:45 np0005475493 podman[224086]: 2025-10-08 10:00:45.369100462 +0000 UTC m=+0.888369736 container died de0f96888bafec752d6060b41466b164606ce99a9899e328f201bf3587ef1e5c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_chatterjee, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  8 06:00:45 np0005475493 python3.9[224329]: ansible-ansible.builtin.file Invoked with path=/var/lib/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  8 06:00:45 np0005475493 systemd[1]: var-lib-containers-storage-overlay-3e08030753090018ec54c0e00238dc3a27abf120e59b41776574df2d1f8f72c4-merged.mount: Deactivated successfully.
Oct  8 06:00:45 np0005475493 podman[224086]: 2025-10-08 10:00:45.420242988 +0000 UTC m=+0.939512232 container remove de0f96888bafec752d6060b41466b164606ce99a9899e328f201bf3587ef1e5c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_chatterjee, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  8 06:00:45 np0005475493 systemd[1]: libpod-conmon-de0f96888bafec752d6060b41466b164606ce99a9899e328f201bf3587ef1e5c.scope: Deactivated successfully.
Oct  8 06:00:45 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  8 06:00:45 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:00:45 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  8 06:00:45 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:00:45 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:45 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3c04001e00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:00:45 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:00:45] "GET /metrics HTTP/1.1" 200 48342 "" "Prometheus/2.51.0"
Oct  8 06:00:45 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:00:45] "GET /metrics HTTP/1.1" 200 48342 "" "Prometheus/2.51.0"
Oct  8 06:00:45 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:45 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3be0003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:00:45 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:00:45 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:00:45 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:00:45.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:00:46 np0005475493 python3.9[224527]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/config-data selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Oct  8 06:00:46 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:00:46 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:00:46 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:00:46 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:00:46 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:00:46.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:00:46 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:46 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3be0003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:00:46 np0005475493 python3.9[224680]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/config-data/ansible-generated/iscsid setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  8 06:00:46 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v477: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 251 B/s rd, 0 op/s
Oct  8 06:00:47 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:00:47.044Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:00:47 np0005475493 python3.9[224833]: ansible-ansible.builtin.stat Invoked with path=/lib/systemd/system/iscsid.socket follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  8 06:00:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] Optimize plan auto_2025-10-08_10:00:47
Oct  8 06:00:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  8 06:00:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] do_upmap
Oct  8 06:00:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] pools ['.mgr', 'cephfs.cephfs.data', 'default.rgw.control', 'backups', 'images', '.nfs', 'cephfs.cephfs.meta', 'vms', 'default.rgw.log', 'default.rgw.meta', '.rgw.root', 'volumes']
Oct  8 06:00:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] prepared 0/10 upmap changes
Oct  8 06:00:47 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:47 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3bdc003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:00:47 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:47 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3bd40016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:00:47 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  8 06:00:47 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  8 06:00:47 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:00:47 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:00:47 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:00:47.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:00:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] _maybe_adjust
Oct  8 06:00:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:00:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  8 06:00:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:00:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 06:00:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:00:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 06:00:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:00:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 06:00:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:00:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 06:00:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:00:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  8 06:00:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:00:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 06:00:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:00:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct  8 06:00:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:00:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  8 06:00:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:00:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 06:00:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:00:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  8 06:00:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:00:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  8 06:00:47 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 06:00:47 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 06:00:47 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  8 06:00:47 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  8 06:00:47 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  8 06:00:47 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  8 06:00:47 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  8 06:00:48 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 06:00:48 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 06:00:48 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 06:00:48 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 06:00:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  8 06:00:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  8 06:00:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  8 06:00:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  8 06:00:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  8 06:00:48 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:00:48 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:00:48 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:00:48.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:00:48 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:48 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3bd40016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:00:48 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:00:48 np0005475493 python3.9[224988]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iscsid.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  8 06:00:48 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:48 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  8 06:00:48 np0005475493 systemd[1]: Reloading.
Oct  8 06:00:48 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v478: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 503 B/s wr, 2 op/s
Oct  8 06:00:48 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:00:48.915Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:00:48 np0005475493 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  8 06:00:48 np0005475493 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  8 06:00:49 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:49 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3be0003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:00:49 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:49 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3bdc003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:00:49 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:00:49 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:00:49 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:00:49.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:00:50 np0005475493 python3.9[225180]: ansible-ansible.builtin.service_facts Invoked
Oct  8 06:00:50 np0005475493 network[225198]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct  8 06:00:50 np0005475493 network[225199]: 'network-scripts' will be removed from distribution in near future.
Oct  8 06:00:50 np0005475493 network[225200]: It is advised to switch to 'NetworkManager' instead for network management.
Oct  8 06:00:50 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:00:50 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:00:50 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:00:50.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:00:50 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:50 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3bd40016a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:00:50 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v479: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 409 B/s wr, 1 op/s
Oct  8 06:00:51 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:51 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3bd40016a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:00:51 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:51 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  8 06:00:51 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:51 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 06:00:51 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:51 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3be0003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:00:51 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:00:51 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:00:51 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:00:51.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:00:52 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:00:52 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 06:00:52 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:00:52.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 06:00:52 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:52 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3bdc003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:00:52 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v480: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 341 B/s wr, 1 op/s
Oct  8 06:00:52 np0005475493 podman[225267]: 2025-10-08 10:00:52.924802066 +0000 UTC m=+0.085461101 container health_status 750e81e9592502f2fa35d047c8ae9bd75eb38ed415148db558454f5281397e7f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  8 06:00:53 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:00:53 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:53 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3bd40016a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:00:53 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:53 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3bd40016a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:00:53 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:00:53 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:00:53 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:00:53.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:00:54 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:00:54 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:00:54 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:00:54.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:00:54 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:54 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3be0003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:00:54 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:54 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct  8 06:00:54 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v481: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Oct  8 06:00:55 np0005475493 python3.9[225506]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iscsi-starter.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  8 06:00:55 np0005475493 systemd[1]: Reloading.
Oct  8 06:00:55 np0005475493 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  8 06:00:55 np0005475493 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  8 06:00:55 np0005475493 kernel: ganesha.nfsd[217647]: segfault at 50 ip 00007f3cb57d932e sp 00007f3c6e7fb210 error 4 in libntirpc.so.5.8[7f3cb57be000+2c000] likely on CPU 4 (core 0, socket 4)
Oct  8 06:00:55 np0005475493 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Oct  8 06:00:55 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[213683]: 08/10/2025 10:00:55 : epoch 68e63611 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3bdc003c10 fd 48 proxy ignored for local
Oct  8 06:00:55 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:00:55] "GET /metrics HTTP/1.1" 200 48342 "" "Prometheus/2.51.0"
Oct  8 06:00:55 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:00:55] "GET /metrics HTTP/1.1" 200 48342 "" "Prometheus/2.51.0"
Oct  8 06:00:55 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:00:55 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:00:55 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:00:55.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:00:55 np0005475493 systemd[1]: Started Process Core Dump (PID 225546/UID 0).
Oct  8 06:00:56 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:00:56 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:00:56 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:00:56.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:00:56 np0005475493 python3.9[225698]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.initiator_reset follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  8 06:00:56 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v482: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 2 op/s
Oct  8 06:00:56 np0005475493 systemd-coredump[225547]: Process 213708 (ganesha.nfsd) of user 0 dumped core.#012#012Stack trace of thread 53:#012#0  0x00007f3cb57d932e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)#012ELF object binary architecture: AMD x86-64
Oct  8 06:00:57 np0005475493 systemd[1]: systemd-coredump@6-225546-0.service: Deactivated successfully.
Oct  8 06:00:57 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:00:57.045Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:00:57 np0005475493 systemd[1]: systemd-coredump@6-225546-0.service: Consumed 1.116s CPU time.
Oct  8 06:00:57 np0005475493 podman[225780]: 2025-10-08 10:00:57.096887769 +0000 UTC m=+0.026036216 container died 2a68b9f1bcb66211021ed9b4fd46add9bf3082d3ff8f1593df68d96a304a7aa2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Oct  8 06:00:57 np0005475493 systemd[1]: var-lib-containers-storage-overlay-4d95dca391282c1ee9ead62ef4a6924429d82eaa2356c084f9e1be43d78b2b69-merged.mount: Deactivated successfully.
Oct  8 06:00:57 np0005475493 podman[225780]: 2025-10-08 10:00:57.137296513 +0000 UTC m=+0.066444950 container remove 2a68b9f1bcb66211021ed9b4fd46add9bf3082d3ff8f1593df68d96a304a7aa2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct  8 06:00:57 np0005475493 systemd[1]: ceph-787292cc-8154-50c4-9e00-e9be3e817149@nfs.cephfs.2.0.compute-0.uynkmx.service: Main process exited, code=exited, status=139/n/a
Oct  8 06:00:57 np0005475493 systemd[1]: ceph-787292cc-8154-50c4-9e00-e9be3e817149@nfs.cephfs.2.0.compute-0.uynkmx.service: Failed with result 'exit-code'.
Oct  8 06:00:57 np0005475493 systemd[1]: ceph-787292cc-8154-50c4-9e00-e9be3e817149@nfs.cephfs.2.0.compute-0.uynkmx.service: Consumed 1.531s CPU time.
Oct  8 06:00:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:00:57.399 163175 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  8 06:00:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:00:57.399 163175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  8 06:00:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:00:57.399 163175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  8 06:00:57 np0005475493 python3.9[225899]: ansible-containers.podman.podman_container Invoked with command=/usr/sbin/iscsi-iname detach=False image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f name=iscsid_config rm=True tty=True executable=podman state=started debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Oct  8 06:00:57 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:00:57 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:00:57 np0005475493 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  8 06:00:57 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:00:57.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:00:57 np0005475493 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  8 06:00:58 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:00:58 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:00:58 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:00:58.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:00:58 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:00:58 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v483: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Oct  8 06:00:58 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:00:58.916Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct  8 06:00:58 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:00:58.917Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct  8 06:00:59 np0005475493 podman[225913]: 2025-10-08 10:00:59.073392546 +0000 UTC m=+1.274837840 image pull 74877095db294c27659f24e7f86074178a6f28eee68561c30e3ce4d18519e09c quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f
Oct  8 06:00:59 np0005475493 podman[225973]: 2025-10-08 10:00:59.192167012 +0000 UTC m=+0.037764031 container create 2b1fe15183e147954b465d2ebf518218f6d9e3f0fb841efb85368d2acc8b9ad1 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid_config, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001)
Oct  8 06:00:59 np0005475493 NetworkManager[44872]: <info>  [1759917659.2182] manager: (podman0): new Bridge device (/org/freedesktop/NetworkManager/Devices/23)
Oct  8 06:00:59 np0005475493 kernel: podman0: port 1(veth0) entered blocking state
Oct  8 06:00:59 np0005475493 kernel: podman0: port 1(veth0) entered disabled state
Oct  8 06:00:59 np0005475493 kernel: veth0: entered allmulticast mode
Oct  8 06:00:59 np0005475493 kernel: veth0: entered promiscuous mode
Oct  8 06:00:59 np0005475493 NetworkManager[44872]: <info>  [1759917659.2343] manager: (veth0): new Veth device (/org/freedesktop/NetworkManager/Devices/24)
Oct  8 06:00:59 np0005475493 kernel: podman0: port 1(veth0) entered blocking state
Oct  8 06:00:59 np0005475493 kernel: podman0: port 1(veth0) entered forwarding state
Oct  8 06:00:59 np0005475493 NetworkManager[44872]: <info>  [1759917659.2363] device (veth0): carrier: link connected
Oct  8 06:00:59 np0005475493 NetworkManager[44872]: <info>  [1759917659.2365] device (podman0): carrier: link connected
Oct  8 06:00:59 np0005475493 systemd-udevd[226001]: Network interface NamePolicy= disabled on kernel command line.
Oct  8 06:00:59 np0005475493 systemd-udevd[225998]: Network interface NamePolicy= disabled on kernel command line.
Oct  8 06:00:59 np0005475493 NetworkManager[44872]: <info>  [1759917659.2615] device (podman0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  8 06:00:59 np0005475493 NetworkManager[44872]: <info>  [1759917659.2624] device (podman0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Oct  8 06:00:59 np0005475493 NetworkManager[44872]: <info>  [1759917659.2631] device (podman0): Activation: starting connection 'podman0' (28122bf2-158a-44ac-8889-0997062b69a1)
Oct  8 06:00:59 np0005475493 NetworkManager[44872]: <info>  [1759917659.2632] device (podman0): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Oct  8 06:00:59 np0005475493 NetworkManager[44872]: <info>  [1759917659.2635] device (podman0): state change: prepare -> config (reason 'none', managed-type: 'external')
Oct  8 06:00:59 np0005475493 NetworkManager[44872]: <info>  [1759917659.2636] device (podman0): state change: config -> ip-config (reason 'none', managed-type: 'external')
Oct  8 06:00:59 np0005475493 NetworkManager[44872]: <info>  [1759917659.2638] device (podman0): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Oct  8 06:00:59 np0005475493 podman[225973]: 2025-10-08 10:00:59.175503058 +0000 UTC m=+0.021100097 image pull 74877095db294c27659f24e7f86074178a6f28eee68561c30e3ce4d18519e09c quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f
Oct  8 06:00:59 np0005475493 systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct  8 06:00:59 np0005475493 systemd[1]: Started Network Manager Script Dispatcher Service.
Oct  8 06:00:59 np0005475493 NetworkManager[44872]: <info>  [1759917659.2935] device (podman0): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Oct  8 06:00:59 np0005475493 NetworkManager[44872]: <info>  [1759917659.2937] device (podman0): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Oct  8 06:00:59 np0005475493 NetworkManager[44872]: <info>  [1759917659.2945] device (podman0): Activation: successful, device activated.
Oct  8 06:00:59 np0005475493 systemd[1]: iscsi.service: Unit cannot be reloaded because it is inactive.
Oct  8 06:00:59 np0005475493 systemd[1]: Started libpod-conmon-2b1fe15183e147954b465d2ebf518218f6d9e3f0fb841efb85368d2acc8b9ad1.scope.
Oct  8 06:00:59 np0005475493 systemd[1]: Started libcrun container.
Oct  8 06:00:59 np0005475493 podman[225973]: 2025-10-08 10:00:59.515202936 +0000 UTC m=+0.360799995 container init 2b1fe15183e147954b465d2ebf518218f6d9e3f0fb841efb85368d2acc8b9ad1 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid_config, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  8 06:00:59 np0005475493 podman[225973]: 2025-10-08 10:00:59.530172166 +0000 UTC m=+0.375769185 container start 2b1fe15183e147954b465d2ebf518218f6d9e3f0fb841efb85368d2acc8b9ad1 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid_config, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct  8 06:00:59 np0005475493 podman[225973]: 2025-10-08 10:00:59.533939896 +0000 UTC m=+0.379536915 container attach 2b1fe15183e147954b465d2ebf518218f6d9e3f0fb841efb85368d2acc8b9ad1 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid_config, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Oct  8 06:00:59 np0005475493 iscsid_config[226130]: iqn.1994-05.com.redhat:6efeb5c8d262#015
Oct  8 06:00:59 np0005475493 systemd[1]: libpod-2b1fe15183e147954b465d2ebf518218f6d9e3f0fb841efb85368d2acc8b9ad1.scope: Deactivated successfully.
Oct  8 06:00:59 np0005475493 podman[225973]: 2025-10-08 10:00:59.536421115 +0000 UTC m=+0.382018134 container died 2b1fe15183e147954b465d2ebf518218f6d9e3f0fb841efb85368d2acc8b9ad1 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid_config, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3)
Oct  8 06:00:59 np0005475493 kernel: podman0: port 1(veth0) entered disabled state
Oct  8 06:00:59 np0005475493 kernel: veth0 (unregistering): left allmulticast mode
Oct  8 06:00:59 np0005475493 kernel: veth0 (unregistering): left promiscuous mode
Oct  8 06:00:59 np0005475493 kernel: podman0: port 1(veth0) entered disabled state
Oct  8 06:00:59 np0005475493 NetworkManager[44872]: <info>  [1759917659.6191] device (podman0): state change: activated -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  8 06:00:59 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:00:59 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 06:00:59 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:00:59.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 06:00:59 np0005475493 systemd[1]: run-netns-netns\x2d625650a8\x2da513\x2d3820\x2d47cc\x2dab8617c44ed3.mount: Deactivated successfully.
Oct  8 06:00:59 np0005475493 systemd[1]: var-lib-containers-storage-overlay-bfe4243f301f9575f9bc2270fd17d79ef570034ed85d9e4f1414879539b9ce58-merged.mount: Deactivated successfully.
Oct  8 06:00:59 np0005475493 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2b1fe15183e147954b465d2ebf518218f6d9e3f0fb841efb85368d2acc8b9ad1-userdata-shm.mount: Deactivated successfully.
Oct  8 06:01:00 np0005475493 podman[225973]: 2025-10-08 10:01:00.006231074 +0000 UTC m=+0.851828093 container remove 2b1fe15183e147954b465d2ebf518218f6d9e3f0fb841efb85368d2acc8b9ad1 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid_config, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001)
Oct  8 06:01:00 np0005475493 python3.9[225899]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman run --name iscsid_config --detach=False --rm --tty=True quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f /usr/sbin/iscsi-iname
Oct  8 06:01:00 np0005475493 systemd[1]: libpod-conmon-2b1fe15183e147954b465d2ebf518218f6d9e3f0fb841efb85368d2acc8b9ad1.scope: Deactivated successfully.
Oct  8 06:01:00 np0005475493 podman[226199]: 2025-10-08 10:01:00.071295348 +0000 UTC m=+0.073222968 container health_status 96c17e36c3e57b043a764fc4d64e20610b8683e4881cc1857643eca7aeb37784 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Oct  8 06:01:00 np0005475493 python3.9[225899]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: Error generating systemd: #012DEPRECATED command:#012It is recommended to use Quadlets for running containers and pods under systemd.#012#012Please refer to podman-systemd.unit(5) for details.#012Error: iscsid_config does not refer to a container or pod: no pod with name or ID iscsid_config found: no such pod: no container with name or ID "iscsid_config" found: no such container
Oct  8 06:01:00 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:01:00 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:01:00 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:01:00.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:01:00 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v484: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 682 B/s wr, 2 op/s
Oct  8 06:01:01 np0005475493 python3.9[226396]: ansible-ansible.legacy.stat Invoked with path=/etc/iscsi/initiatorname.iscsi follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  8 06:01:01 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-haproxy-nfs-cephfs-compute-0-cwhopp[96588]: [WARNING] 280/100101 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 1ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct  8 06:01:01 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-haproxy-nfs-cephfs-compute-0-cwhopp[96588]: [WARNING] 280/100101 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct  8 06:01:01 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:01:01 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 06:01:01 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:01:01.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 06:01:01 np0005475493 python3.9[226531]: ansible-ansible.legacy.copy Invoked with dest=/etc/iscsi/initiatorname.iscsi mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759917660.6810737-317-192478424364046/.source.iscsi _original_basename=._l4d_gi_ follow=False checksum=a8411254db0e7ec3d4d3b5a96191404390dc787f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 06:01:02 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:01:02 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:01:02 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:01:02.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:01:02 np0005475493 python3.9[226684]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.initiator_reset state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 06:01:02 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  8 06:01:02 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  8 06:01:02 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v485: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 682 B/s wr, 2 op/s
Oct  8 06:01:03 np0005475493 python3.9[226835]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/iscsid.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  8 06:01:03 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:01:03 np0005475493 ceph-mon[73572]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #36. Immutable memtables: 0.
Oct  8 06:01:03 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:01:03.635158) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  8 06:01:03 np0005475493 ceph-mon[73572]: rocksdb: [db/flush_job.cc:856] [default] [JOB 15] Flushing memtable with next log file: 36
Oct  8 06:01:03 np0005475493 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759917663635240, "job": 15, "event": "flush_started", "num_memtables": 1, "num_entries": 575, "num_deletes": 251, "total_data_size": 726235, "memory_usage": 736352, "flush_reason": "Manual Compaction"}
Oct  8 06:01:03 np0005475493 ceph-mon[73572]: rocksdb: [db/flush_job.cc:885] [default] [JOB 15] Level-0 flush table #37: started
Oct  8 06:01:03 np0005475493 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759917663639935, "cf_name": "default", "job": 15, "event": "table_file_creation", "file_number": 37, "file_size": 521773, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 17397, "largest_seqno": 17971, "table_properties": {"data_size": 518959, "index_size": 786, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 965, "raw_key_size": 7425, "raw_average_key_size": 20, "raw_value_size": 513078, "raw_average_value_size": 1382, "num_data_blocks": 34, "num_entries": 371, "num_filter_entries": 371, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759917627, "oldest_key_time": 1759917627, "file_creation_time": 1759917663, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fe81d9b-468a-4413-adf1-4e4bd83383d4", "db_session_id": "KN4HYS7VUCE6V85JIQOU", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Oct  8 06:01:03 np0005475493 ceph-mon[73572]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 15] Flush lasted 4882 microseconds, and 2077 cpu microseconds.
Oct  8 06:01:03 np0005475493 ceph-mon[73572]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  8 06:01:03 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:01:03.640027) [db/flush_job.cc:967] [default] [JOB 15] Level-0 flush table #37: 521773 bytes OK
Oct  8 06:01:03 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:01:03.640071) [db/memtable_list.cc:519] [default] Level-0 commit table #37 started
Oct  8 06:01:03 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:01:03.641411) [db/memtable_list.cc:722] [default] Level-0 commit table #37: memtable #1 done
Oct  8 06:01:03 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:01:03.641426) EVENT_LOG_v1 {"time_micros": 1759917663641421, "job": 15, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  8 06:01:03 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:01:03.641446) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  8 06:01:03 np0005475493 ceph-mon[73572]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 15] Try to delete WAL files size 723099, prev total WAL file size 723099, number of live WAL files 2.
Oct  8 06:01:03 np0005475493 ceph-mon[73572]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000033.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  8 06:01:03 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:01:03.642095) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400323531' seq:72057594037927935, type:22 .. '6D67727374617400353033' seq:0, type:0; will stop at (end)
Oct  8 06:01:03 np0005475493 ceph-mon[73572]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 16] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  8 06:01:03 np0005475493 ceph-mon[73572]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 15 Base level 0, inputs: [37(509KB)], [35(14MB)]
Oct  8 06:01:03 np0005475493 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759917663642150, "job": 16, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [37], "files_L6": [35], "score": -1, "input_data_size": 16008825, "oldest_snapshot_seqno": -1}
Oct  8 06:01:03 np0005475493 ceph-mon[73572]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 16] Generated table #38: 4929 keys, 12107805 bytes, temperature: kUnknown
Oct  8 06:01:03 np0005475493 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759917663710403, "cf_name": "default", "job": 16, "event": "table_file_creation", "file_number": 38, "file_size": 12107805, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12074279, "index_size": 20104, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12357, "raw_key_size": 124241, "raw_average_key_size": 25, "raw_value_size": 11984073, "raw_average_value_size": 2431, "num_data_blocks": 835, "num_entries": 4929, "num_filter_entries": 4929, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759916581, "oldest_key_time": 0, "file_creation_time": 1759917663, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fe81d9b-468a-4413-adf1-4e4bd83383d4", "db_session_id": "KN4HYS7VUCE6V85JIQOU", "orig_file_number": 38, "seqno_to_time_mapping": "N/A"}}
Oct  8 06:01:03 np0005475493 ceph-mon[73572]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  8 06:01:03 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:01:03.710695) [db/compaction/compaction_job.cc:1663] [default] [JOB 16] Compacted 1@0 + 1@6 files to L6 => 12107805 bytes
Oct  8 06:01:03 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:01:03.712105) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 234.3 rd, 177.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.5, 14.8 +0.0 blob) out(11.5 +0.0 blob), read-write-amplify(53.9) write-amplify(23.2) OK, records in: 5432, records dropped: 503 output_compression: NoCompression
Oct  8 06:01:03 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:01:03.712136) EVENT_LOG_v1 {"time_micros": 1759917663712122, "job": 16, "event": "compaction_finished", "compaction_time_micros": 68337, "compaction_time_cpu_micros": 25149, "output_level": 6, "num_output_files": 1, "total_output_size": 12107805, "num_input_records": 5432, "num_output_records": 4929, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  8 06:01:03 np0005475493 ceph-mon[73572]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000037.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  8 06:01:03 np0005475493 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759917663712433, "job": 16, "event": "table_file_deletion", "file_number": 37}
Oct  8 06:01:03 np0005475493 ceph-mon[73572]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000035.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  8 06:01:03 np0005475493 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759917663717005, "job": 16, "event": "table_file_deletion", "file_number": 35}
Oct  8 06:01:03 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:01:03.641982) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  8 06:01:03 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:01:03.717075) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  8 06:01:03 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:01:03.717081) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  8 06:01:03 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:01:03.717083) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  8 06:01:03 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:01:03.717085) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  8 06:01:03 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:01:03.717087) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  8 06:01:03 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:01:03 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:01:03 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:01:03.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:01:04 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:01:04 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:01:04 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:01:04.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:01:04 np0005475493 python3.9[227015]: ansible-ansible.builtin.lineinfile Invoked with insertafter=^#node.session.auth.chap.algs line=node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 path=/etc/iscsi/iscsid.conf regexp=^node.session.auth.chap_algs state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 06:01:04 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v486: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 682 B/s wr, 2 op/s
Oct  8 06:01:05 np0005475493 python3.9[227167]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  8 06:01:05 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:01:05] "GET /metrics HTTP/1.1" 200 48343 "" "Prometheus/2.51.0"
Oct  8 06:01:05 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:01:05] "GET /metrics HTTP/1.1" 200 48343 "" "Prometheus/2.51.0"
Oct  8 06:01:05 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:01:05 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:01:05 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:01:05.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:01:06 np0005475493 python3.9[227320]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  8 06:01:06 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:01:06 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:01:06 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:01:06.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:01:06 np0005475493 python3.9[227399]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  8 06:01:06 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v487: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Oct  8 06:01:07 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:01:07.050Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct  8 06:01:07 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:01:07.052Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:01:07 np0005475493 python3.9[227551]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  8 06:01:07 np0005475493 systemd[1]: ceph-787292cc-8154-50c4-9e00-e9be3e817149@nfs.cephfs.2.0.compute-0.uynkmx.service: Scheduled restart job, restart counter is at 7.
Oct  8 06:01:07 np0005475493 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.uynkmx for 787292cc-8154-50c4-9e00-e9be3e817149.
Oct  8 06:01:07 np0005475493 systemd[1]: ceph-787292cc-8154-50c4-9e00-e9be3e817149@nfs.cephfs.2.0.compute-0.uynkmx.service: Consumed 1.531s CPU time.
Oct  8 06:01:07 np0005475493 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.uynkmx for 787292cc-8154-50c4-9e00-e9be3e817149...
Oct  8 06:01:07 np0005475493 podman[227678]: 2025-10-08 10:01:07.54059113 +0000 UTC m=+0.046049486 container create 197b32251d649da9ca1a6e7d1df5a7492995554a85c77530709f1630450feb18 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True)
Oct  8 06:01:07 np0005475493 python3.9[227633]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  8 06:01:07 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e7cca84133784ef1b75d79f448773b70403e9a746a9cccf658a15d1c5e16e5a/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Oct  8 06:01:07 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e7cca84133784ef1b75d79f448773b70403e9a746a9cccf658a15d1c5e16e5a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 06:01:07 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e7cca84133784ef1b75d79f448773b70403e9a746a9cccf658a15d1c5e16e5a/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Oct  8 06:01:07 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e7cca84133784ef1b75d79f448773b70403e9a746a9cccf658a15d1c5e16e5a/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.uynkmx-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Oct  8 06:01:07 np0005475493 podman[227678]: 2025-10-08 10:01:07.516955313 +0000 UTC m=+0.022413679 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 06:01:07 np0005475493 podman[227678]: 2025-10-08 10:01:07.614747597 +0000 UTC m=+0.120205963 container init 197b32251d649da9ca1a6e7d1df5a7492995554a85c77530709f1630450feb18 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Oct  8 06:01:07 np0005475493 podman[227678]: 2025-10-08 10:01:07.61920649 +0000 UTC m=+0.124664836 container start 197b32251d649da9ca1a6e7d1df5a7492995554a85c77530709f1630450feb18 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  8 06:01:07 np0005475493 bash[227678]: 197b32251d649da9ca1a6e7d1df5a7492995554a85c77530709f1630450feb18
Oct  8 06:01:07 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:01:07 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Oct  8 06:01:07 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:01:07 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Oct  8 06:01:07 np0005475493 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.uynkmx for 787292cc-8154-50c4-9e00-e9be3e817149.
Oct  8 06:01:07 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:01:07 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Oct  8 06:01:07 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:01:07 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Oct  8 06:01:07 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:01:07 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Oct  8 06:01:07 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:01:07 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Oct  8 06:01:07 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:01:07 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Oct  8 06:01:07 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:01:07 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  8 06:01:07 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:01:07 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:01:07 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:01:07.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:01:08 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:01:08 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:01:08 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:01:08.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:01:08 np0005475493 python3.9[227887]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 06:01:08 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:01:08 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v488: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 170 B/s wr, 1 op/s
Oct  8 06:01:08 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:01:08.918Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct  8 06:01:08 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:01:08.919Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:01:09 np0005475493 python3.9[228039]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  8 06:01:09 np0005475493 python3.9[228118]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 06:01:09 np0005475493 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct  8 06:01:09 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:01:09 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:01:09 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:01:09.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:01:10 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:01:10 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:01:10 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:01:10.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:01:10 np0005475493 python3.9[228271]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  8 06:01:10 np0005475493 python3.9[228349]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 06:01:10 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v489: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Oct  8 06:01:11 np0005475493 python3.9[228502]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  8 06:01:11 np0005475493 systemd[1]: Reloading.
Oct  8 06:01:11 np0005475493 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  8 06:01:11 np0005475493 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  8 06:01:11 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:01:11 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:01:11 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:01:11.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:01:12 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:01:12 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:01:12 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:01:12.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:01:12 np0005475493 python3.9[228691]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  8 06:01:12 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v490: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Oct  8 06:01:13 np0005475493 python3.9[228770]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 06:01:13 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:01:13 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:01:13 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  8 06:01:13 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:01:13 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 06:01:13 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:01:13 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:01:13 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:01:13.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:01:14 np0005475493 python3.9[228923]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  8 06:01:14 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:01:14 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:01:14 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:01:14.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:01:14 np0005475493 python3.9[229001]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 06:01:14 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v491: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 2 op/s
Oct  8 06:01:15 np0005475493 python3.9[229154]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  8 06:01:15 np0005475493 systemd[1]: Reloading.
Oct  8 06:01:15 np0005475493 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  8 06:01:15 np0005475493 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  8 06:01:15 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:01:15] "GET /metrics HTTP/1.1" 200 48343 "" "Prometheus/2.51.0"
Oct  8 06:01:15 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:01:15] "GET /metrics HTTP/1.1" 200 48343 "" "Prometheus/2.51.0"
Oct  8 06:01:15 np0005475493 systemd[1]: Starting Create netns directory...
Oct  8 06:01:15 np0005475493 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct  8 06:01:15 np0005475493 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct  8 06:01:15 np0005475493 systemd[1]: Finished Create netns directory.
Oct  8 06:01:15 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:01:15 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:01:15 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:01:15.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:01:16 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:01:16 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 06:01:16 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:01:16.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 06:01:16 np0005475493 python3.9[229348]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  8 06:01:16 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v492: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 2 op/s
Oct  8 06:01:17 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:01:17.053Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:01:17 np0005475493 python3.9[229501]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/iscsid/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  8 06:01:17 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  8 06:01:17 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  8 06:01:17 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:01:17 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:01:17 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:01:17.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:01:17 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 06:01:17 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 06:01:18 np0005475493 python3.9[229624]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/iscsid/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759917677.0979683-779-256450014500585/.source _original_basename=healthcheck follow=False checksum=2e1237e7fe015c809b173c52e24cfb87132f4344 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct  8 06:01:18 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 06:01:18 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 06:01:18 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 06:01:18 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 06:01:18 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:01:18 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:01:18 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:01:18.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:01:18 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:01:18 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v493: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Oct  8 06:01:18 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:01:18.919Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct  8 06:01:18 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:01:18.920Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct  8 06:01:19 np0005475493 python3.9[229777]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  8 06:01:19 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:01:19 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct  8 06:01:19 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:01:19 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Oct  8 06:01:19 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:01:19 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Oct  8 06:01:19 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:01:19 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Oct  8 06:01:19 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:01:19 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Oct  8 06:01:19 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:01:19 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Oct  8 06:01:19 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:01:19 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Oct  8 06:01:19 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:01:19 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct  8 06:01:19 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:01:19 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct  8 06:01:19 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:01:19 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct  8 06:01:19 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:01:19 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Oct  8 06:01:19 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:01:19 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct  8 06:01:19 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:01:19 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Oct  8 06:01:19 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:01:19 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Oct  8 06:01:19 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:01:19 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Oct  8 06:01:19 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:01:19 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Oct  8 06:01:19 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:01:19 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Oct  8 06:01:19 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:01:19 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Oct  8 06:01:19 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:01:19 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Oct  8 06:01:19 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:01:19 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Oct  8 06:01:19 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:01:19 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Oct  8 06:01:19 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:01:19 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Oct  8 06:01:19 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:01:19 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Oct  8 06:01:19 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:01:19 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Oct  8 06:01:19 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:01:19 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Oct  8 06:01:19 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:01:19 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Oct  8 06:01:19 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:01:19 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Oct  8 06:01:19 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:01:19 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab0c000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:01:19 np0005475493 python3.9[229930]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/iscsid.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  8 06:01:19 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:01:19 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:01:19 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:01:19.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:01:20 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:01:20 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:01:20 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:01:20.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:01:20 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:01:20 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab080013a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:01:20 np0005475493 python3.9[230069]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/iscsid.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1759917679.4397097-854-162111122523198/.source.json _original_basename=.kx3demn1 follow=False checksum=80e4f97460718c7e5c66b21ef8b846eba0e0dbc8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 06:01:20 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v494: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 938 B/s wr, 2 op/s
Oct  8 06:01:21 np0005475493 python3.9[230221]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/iscsid state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 06:01:21 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:01:21 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae8000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:01:21 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:01:21 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae4000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:01:21 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:01:21 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:01:21 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:01:21.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:01:22 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:01:22 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 06:01:22 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:01:22.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 06:01:22 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:01:22 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faaf0000fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:01:22 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v495: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 938 B/s wr, 2 op/s
Oct  8 06:01:23 np0005475493 podman[230623]: 2025-10-08 10:01:23.29591727 +0000 UTC m=+0.089705046 container health_status 750e81e9592502f2fa35d047c8ae9bd75eb38ed415148db558454f5281397e7f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct  8 06:01:23 np0005475493 python3.9[230669]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/iscsid config_pattern=*.json debug=False
Oct  8 06:01:23 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:01:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-haproxy-nfs-cephfs-compute-0-cwhopp[96588]: [WARNING] 280/100123 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct  8 06:01:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:01:23 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab08002090 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:01:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:01:23 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae80016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:01:23 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:01:23 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 06:01:23 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:01:23.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 06:01:24 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:01:24 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 06:01:24 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:01:24.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 06:01:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:01:24 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae40016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:01:24 np0005475493 python3.9[230854]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct  8 06:01:24 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v496: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 938 B/s wr, 2 op/s
Oct  8 06:01:25 np0005475493 python3.9[231007]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Oct  8 06:01:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:01:25 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faaf0001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:01:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:01:25] "GET /metrics HTTP/1.1" 200 48342 "" "Prometheus/2.51.0"
Oct  8 06:01:25 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:01:25] "GET /metrics HTTP/1.1" 200 48342 "" "Prometheus/2.51.0"
Oct  8 06:01:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:01:25 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab08002090 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:01:25 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:01:25 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:01:25 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:01:25.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:01:26 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:01:26 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:01:26 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:01:26.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:01:26 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:01:26 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae80016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:01:26 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v497: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 85 B/s wr, 0 op/s
Oct  8 06:01:27 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:01:27.054Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:01:27 np0005475493 python3[231188]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/iscsid config_id=iscsid config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Oct  8 06:01:27 np0005475493 podman[231222]: 2025-10-08 10:01:27.662655464 +0000 UTC m=+0.051688228 container create 2ac58bf9d2c7421f24478941b0f23af12cc30298ac84467db16bf52ecb1157c3 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=iscsid, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team)
Oct  8 06:01:27 np0005475493 podman[231222]: 2025-10-08 10:01:27.634374877 +0000 UTC m=+0.023407651 image pull 74877095db294c27659f24e7f86074178a6f28eee68561c30e3ce4d18519e09c quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f
Oct  8 06:01:27 np0005475493 python3[231188]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name iscsid --conmon-pidfile /run/iscsid.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=iscsid --label container_name=iscsid --label managed_by=edpm_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log 
--volume /var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro --volume /dev:/dev --volume /run:/run --volume /sys:/sys --volume /lib/modules:/lib/modules:ro --volume /etc/iscsi:/etc/iscsi:z --volume /etc/target:/etc/target:z --volume /var/lib/iscsi:/var/lib/iscsi:z --volume /var/lib/openstack/healthchecks/iscsid:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f
Oct  8 06:01:27 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:01:27 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae40016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:01:27 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:01:27 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faaf0001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:01:27 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:01:27 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:01:27 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:01:27.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:01:28 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:01:28 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:01:28 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:01:28.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:01:28 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:01:28 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faaf0001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:01:28 np0005475493 python3.9[231413]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  8 06:01:28 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:01:28 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v498: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Oct  8 06:01:28 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:01:28.920Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct  8 06:01:28 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:01:28.920Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:01:29 np0005475493 python3.9[231568]: ansible-file Invoked with path=/etc/systemd/system/edpm_iscsid.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 06:01:29 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-haproxy-nfs-cephfs-compute-0-cwhopp[96588]: [WARNING] 280/100129 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct  8 06:01:29 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:01:29 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae80016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:01:29 np0005475493 python3.9[231644]: ansible-stat Invoked with path=/etc/systemd/system/edpm_iscsid_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  8 06:01:29 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:01:29 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae40016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:01:29 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:01:29 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 06:01:29 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:01:29.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 06:01:30 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:01:30 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:01:30 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:01:30.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:01:30 np0005475493 podman[231768]: 2025-10-08 10:01:30.296460037 +0000 UTC m=+0.057549646 container health_status 96c17e36c3e57b043a764fc4d64e20610b8683e4881cc1857643eca7aeb37784 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible)
Oct  8 06:01:30 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:01:30 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab08003130 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:01:30 np0005475493 python3.9[231815]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759917689.8571365-1118-59160507736162/source dest=/etc/systemd/system/edpm_iscsid.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 06:01:30 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v499: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 B/s wr, 0 op/s
Oct  8 06:01:31 np0005475493 python3.9[231891]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct  8 06:01:31 np0005475493 systemd[1]: Reloading.
Oct  8 06:01:31 np0005475493 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  8 06:01:31 np0005475493 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  8 06:01:31 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:01:31 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faaf0001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:01:31 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:01:31 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae8002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:01:31 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:01:31 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:01:31 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:01:31.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:01:31 np0005475493 python3.9[232002]: ansible-systemd Invoked with state=restarted name=edpm_iscsid.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  8 06:01:31 np0005475493 systemd[1]: Reloading.
Oct  8 06:01:32 np0005475493 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  8 06:01:32 np0005475493 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  8 06:01:32 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:01:32 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:01:32 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:01:32.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:01:32 np0005475493 systemd[1]: Starting iscsid container...
Oct  8 06:01:32 np0005475493 systemd[1]: Started libcrun container.
Oct  8 06:01:32 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6105164ba49c7ca6d62d445483671ab45469dd2e81be8bb63cfa0d1309aeea3/merged/etc/iscsi supports timestamps until 2038 (0x7fffffff)
Oct  8 06:01:32 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6105164ba49c7ca6d62d445483671ab45469dd2e81be8bb63cfa0d1309aeea3/merged/etc/target supports timestamps until 2038 (0x7fffffff)
Oct  8 06:01:32 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6105164ba49c7ca6d62d445483671ab45469dd2e81be8bb63cfa0d1309aeea3/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Oct  8 06:01:32 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:01:32 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae4002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:01:32 np0005475493 systemd[1]: Started /usr/bin/podman healthcheck run 2ac58bf9d2c7421f24478941b0f23af12cc30298ac84467db16bf52ecb1157c3.
Oct  8 06:01:32 np0005475493 podman[232042]: 2025-10-08 10:01:32.419584704 +0000 UTC m=+0.126289818 container init 2ac58bf9d2c7421f24478941b0f23af12cc30298ac84467db16bf52ecb1157c3 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  8 06:01:32 np0005475493 iscsid[232058]: + sudo -E kolla_set_configs
Oct  8 06:01:32 np0005475493 podman[232042]: 2025-10-08 10:01:32.446901969 +0000 UTC m=+0.153607073 container start 2ac58bf9d2c7421f24478941b0f23af12cc30298ac84467db16bf52ecb1157c3 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  8 06:01:32 np0005475493 podman[232042]: iscsid
Oct  8 06:01:32 np0005475493 systemd[1]: Created slice User Slice of UID 0.
Oct  8 06:01:32 np0005475493 systemd[1]: Starting User Runtime Directory /run/user/0...
Oct  8 06:01:32 np0005475493 systemd[1]: Started iscsid container.
Oct  8 06:01:32 np0005475493 systemd[1]: Finished User Runtime Directory /run/user/0.
Oct  8 06:01:32 np0005475493 systemd[1]: Starting User Manager for UID 0...
Oct  8 06:01:32 np0005475493 podman[232065]: 2025-10-08 10:01:32.54270085 +0000 UTC m=+0.076523384 container health_status 2ac58bf9d2c7421f24478941b0f23af12cc30298ac84467db16bf52ecb1157c3 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3)
Oct  8 06:01:32 np0005475493 systemd[1]: 2ac58bf9d2c7421f24478941b0f23af12cc30298ac84467db16bf52ecb1157c3-5d80cdf7f9c7286c.service: Main process exited, code=exited, status=1/FAILURE
Oct  8 06:01:32 np0005475493 systemd[1]: 2ac58bf9d2c7421f24478941b0f23af12cc30298ac84467db16bf52ecb1157c3-5d80cdf7f9c7286c.service: Failed with result 'exit-code'.
Oct  8 06:01:32 np0005475493 systemd[232080]: Queued start job for default target Main User Target.
Oct  8 06:01:32 np0005475493 systemd[232080]: Created slice User Application Slice.
Oct  8 06:01:32 np0005475493 systemd[232080]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Oct  8 06:01:32 np0005475493 systemd[232080]: Started Daily Cleanup of User's Temporary Directories.
Oct  8 06:01:32 np0005475493 systemd[232080]: Reached target Paths.
Oct  8 06:01:32 np0005475493 systemd[232080]: Reached target Timers.
Oct  8 06:01:32 np0005475493 systemd[232080]: Starting D-Bus User Message Bus Socket...
Oct  8 06:01:32 np0005475493 systemd[232080]: Starting Create User's Volatile Files and Directories...
Oct  8 06:01:32 np0005475493 systemd[232080]: Listening on D-Bus User Message Bus Socket.
Oct  8 06:01:32 np0005475493 systemd[232080]: Reached target Sockets.
Oct  8 06:01:32 np0005475493 systemd[232080]: Finished Create User's Volatile Files and Directories.
Oct  8 06:01:32 np0005475493 systemd[232080]: Reached target Basic System.
Oct  8 06:01:32 np0005475493 systemd[232080]: Reached target Main User Target.
Oct  8 06:01:32 np0005475493 systemd[232080]: Startup finished in 165ms.
Oct  8 06:01:32 np0005475493 systemd[1]: Started User Manager for UID 0.
Oct  8 06:01:32 np0005475493 systemd[1]: Started Session c3 of User root.
Oct  8 06:01:32 np0005475493 iscsid[232058]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct  8 06:01:32 np0005475493 iscsid[232058]: INFO:__main__:Validating config file
Oct  8 06:01:32 np0005475493 iscsid[232058]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct  8 06:01:32 np0005475493 iscsid[232058]: INFO:__main__:Writing out command to execute
Oct  8 06:01:32 np0005475493 systemd[1]: session-c3.scope: Deactivated successfully.
Oct  8 06:01:32 np0005475493 iscsid[232058]: ++ cat /run_command
Oct  8 06:01:32 np0005475493 iscsid[232058]: + CMD='/usr/sbin/iscsid -f'
Oct  8 06:01:32 np0005475493 iscsid[232058]: + ARGS=
Oct  8 06:01:32 np0005475493 iscsid[232058]: + sudo kolla_copy_cacerts
Oct  8 06:01:32 np0005475493 systemd[1]: Started Session c4 of User root.
Oct  8 06:01:32 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  8 06:01:32 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  8 06:01:32 np0005475493 systemd[1]: session-c4.scope: Deactivated successfully.
Oct  8 06:01:32 np0005475493 iscsid[232058]: + [[ ! -n '' ]]
Oct  8 06:01:32 np0005475493 iscsid[232058]: + . kolla_extend_start
Oct  8 06:01:32 np0005475493 iscsid[232058]: ++ [[ ! -f /etc/iscsi/initiatorname.iscsi ]]
Oct  8 06:01:32 np0005475493 iscsid[232058]: + echo 'Running command: '\''/usr/sbin/iscsid -f'\'''
Oct  8 06:01:32 np0005475493 iscsid[232058]: Running command: '/usr/sbin/iscsid -f'
Oct  8 06:01:32 np0005475493 iscsid[232058]: + umask 0022
Oct  8 06:01:32 np0005475493 iscsid[232058]: + exec /usr/sbin/iscsid -f
Oct  8 06:01:32 np0005475493 kernel: Loading iSCSI transport class v2.0-870.
Oct  8 06:01:32 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v500: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 B/s wr, 0 op/s
Oct  8 06:01:33 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:01:33 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:01:33 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab08003130 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:01:33 np0005475493 python3.9[232264]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.iscsid_restart_required follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  8 06:01:33 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:01:33 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab08003130 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:01:33 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:01:33 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:01:33 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:01:33.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:01:34 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:01:34 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:01:34 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:01:34.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:01:34 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:01:34 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae8002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:01:34 np0005475493 python3.9[232417]: ansible-ansible.builtin.file Invoked with path=/etc/iscsi/.iscsid_restart_required state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 06:01:34 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v501: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Oct  8 06:01:35 np0005475493 python3.9[232570]: ansible-ansible.builtin.service_facts Invoked
Oct  8 06:01:35 np0005475493 network[232587]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct  8 06:01:35 np0005475493 network[232588]: 'network-scripts' will be removed from distribution in near future.
Oct  8 06:01:35 np0005475493 network[232589]: It is advised to switch to 'NetworkManager' instead for network management.
Oct  8 06:01:35 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:01:35 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae4002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:01:35 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:01:35] "GET /metrics HTTP/1.1" 200 48342 "" "Prometheus/2.51.0"
Oct  8 06:01:35 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:01:35] "GET /metrics HTTP/1.1" 200 48342 "" "Prometheus/2.51.0"
Oct  8 06:01:35 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:01:35 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faaf0003340 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:01:35 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:01:35 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:01:35 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:01:35.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:01:36 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:01:36 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 06:01:36 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:01:36.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 06:01:36 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:01:36 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab08003130 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:01:36 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v502: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct  8 06:01:37 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:01:37.056Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:01:37 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:01:37 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae8002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:01:37 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:01:37 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae4002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:01:37 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:01:37 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:01:37 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:01:37.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:01:38 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:01:38 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:01:38 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:01:38.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:01:38 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:01:38 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faaf0003340 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:01:38 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:01:38 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  8 06:01:38 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:01:38 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v503: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 596 B/s wr, 1 op/s
Oct  8 06:01:38 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:01:38.922Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:01:39 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:01:39 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab08004620 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:01:39 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:01:39 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae8003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:01:39 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:01:39 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:01:39 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:01:39.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:01:40 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:01:40 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:01:40 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:01:40.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:01:40 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:01:40 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae4003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:01:40 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v504: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 596 B/s wr, 1 op/s
Oct  8 06:01:41 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:01:41 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  8 06:01:41 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:01:41 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 06:01:41 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:01:41 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faaf0004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:01:41 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:01:41 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab08004620 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:01:41 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:01:41 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 06:01:41 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:01:41.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 06:01:42 np0005475493 python3.9[232870]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Oct  8 06:01:42 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:01:42 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 06:01:42 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:01:42.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 06:01:42 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:01:42 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae8003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:01:42 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v505: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 596 B/s wr, 1 op/s
Oct  8 06:01:42 np0005475493 python3.9[233023]: ansible-community.general.modprobe Invoked with name=dm-multipath state=present params= persistent=disabled
Oct  8 06:01:43 np0005475493 systemd[1]: Stopping User Manager for UID 0...
Oct  8 06:01:43 np0005475493 systemd[232080]: Activating special unit Exit the Session...
Oct  8 06:01:43 np0005475493 systemd[232080]: Stopped target Main User Target.
Oct  8 06:01:43 np0005475493 systemd[232080]: Stopped target Basic System.
Oct  8 06:01:43 np0005475493 systemd[232080]: Stopped target Paths.
Oct  8 06:01:43 np0005475493 systemd[232080]: Stopped target Sockets.
Oct  8 06:01:43 np0005475493 systemd[232080]: Stopped target Timers.
Oct  8 06:01:43 np0005475493 systemd[232080]: Stopped Daily Cleanup of User's Temporary Directories.
Oct  8 06:01:43 np0005475493 systemd[232080]: Closed D-Bus User Message Bus Socket.
Oct  8 06:01:43 np0005475493 systemd[232080]: Stopped Create User's Volatile Files and Directories.
Oct  8 06:01:43 np0005475493 systemd[232080]: Removed slice User Application Slice.
Oct  8 06:01:43 np0005475493 systemd[232080]: Reached target Shutdown.
Oct  8 06:01:43 np0005475493 systemd[232080]: Finished Exit the Session.
Oct  8 06:01:43 np0005475493 systemd[232080]: Reached target Exit the Session.
Oct  8 06:01:43 np0005475493 systemd[1]: user@0.service: Deactivated successfully.
Oct  8 06:01:43 np0005475493 systemd[1]: Stopped User Manager for UID 0.
Oct  8 06:01:43 np0005475493 systemd[1]: Stopping User Runtime Directory /run/user/0...
Oct  8 06:01:43 np0005475493 systemd[1]: run-user-0.mount: Deactivated successfully.
Oct  8 06:01:43 np0005475493 systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Oct  8 06:01:43 np0005475493 systemd[1]: Stopped User Runtime Directory /run/user/0.
Oct  8 06:01:43 np0005475493 systemd[1]: Removed slice User Slice of UID 0.
Oct  8 06:01:43 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:01:43 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:01:43 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae4003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:01:43 np0005475493 python3.9[233182]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/dm-multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  8 06:01:43 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:01:43 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faaf0004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:01:43 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:01:43 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:01:43 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:01:43.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:01:44 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:01:44 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 06:01:44 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:01:44.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 06:01:44 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:01:44 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab08004620 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:01:44 np0005475493 python3.9[233331]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/dm-multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759917703.2489626-1340-80584199922098/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=065061c60917e4f67cecc70d12ce55e42f9d0b3f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 06:01:44 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:01:44 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct  8 06:01:44 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v506: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Oct  8 06:01:45 np0005475493 python3.9[233484]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=dm-multipath  mode=0644 state=present path=/etc/modules backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 06:01:45 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:01:45] "GET /metrics HTTP/1.1" 200 48342 "" "Prometheus/2.51.0"
Oct  8 06:01:45 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:01:45] "GET /metrics HTTP/1.1" 200 48342 "" "Prometheus/2.51.0"
Oct  8 06:01:45 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:01:45 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae8003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:01:45 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:01:45 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae4003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:01:45 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:01:45 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:01:45 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:01:45.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:01:46 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:01:46 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:01:46 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:01:46.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:01:46 np0005475493 python3.9[233662]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  8 06:01:46 np0005475493 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Oct  8 06:01:46 np0005475493 systemd[1]: Stopped Load Kernel Modules.
Oct  8 06:01:46 np0005475493 systemd[1]: Stopping Load Kernel Modules...
Oct  8 06:01:46 np0005475493 systemd[1]: Starting Load Kernel Modules...
Oct  8 06:01:46 np0005475493 systemd[1]: Finished Load Kernel Modules.
Oct  8 06:01:46 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:01:46 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faaf0004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:01:46 np0005475493 podman[233787]: 2025-10-08 10:01:46.602302542 +0000 UTC m=+0.073512887 container exec 01c666addd8584f02eb7d29040d97e45d8cb7f834fd134029c1f7cca550aeb9d (image=quay.io/ceph/ceph:v19, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-mon-compute-0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  8 06:01:46 np0005475493 podman[233787]: 2025-10-08 10:01:46.694058793 +0000 UTC m=+0.165269148 container exec_died 01c666addd8584f02eb7d29040d97e45d8cb7f834fd134029c1f7cca550aeb9d (image=quay.io/ceph/ceph:v19, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-mon-compute-0, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  8 06:01:46 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v507: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Oct  8 06:01:47 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:01:47.058Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:01:47 np0005475493 python3.9[233983]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  8 06:01:47 np0005475493 podman[234077]: 2025-10-08 10:01:47.282188773 +0000 UTC m=+0.058980872 container exec 4eb6b712d09e2820826e6f961f0aba1bff09f13eee1664dbb03edca7615a708b (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  8 06:01:47 np0005475493 podman[234077]: 2025-10-08 10:01:47.317567126 +0000 UTC m=+0.094359235 container exec_died 4eb6b712d09e2820826e6f961f0aba1bff09f13eee1664dbb03edca7615a708b (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  8 06:01:47 np0005475493 podman[234220]: 2025-10-08 10:01:47.60720869 +0000 UTC m=+0.070727098 container exec 197b32251d649da9ca1a6e7d1df5a7492995554a85c77530709f1630450feb18 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid)
Oct  8 06:01:47 np0005475493 podman[234220]: 2025-10-08 10:01:47.64280356 +0000 UTC m=+0.106322038 container exec_died 197b32251d649da9ca1a6e7d1df5a7492995554a85c77530709f1630450feb18 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Oct  8 06:01:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] Optimize plan auto_2025-10-08_10:01:47
Oct  8 06:01:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  8 06:01:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] do_upmap
Oct  8 06:01:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] pools ['default.rgw.log', '.nfs', 'default.rgw.control', '.mgr', 'cephfs.cephfs.data', 'default.rgw.meta', 'images', 'volumes', 'backups', 'cephfs.cephfs.meta', 'vms', '.rgw.root']
Oct  8 06:01:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] prepared 0/10 upmap changes
Oct  8 06:01:47 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:01:47 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab08004620 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:01:47 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  8 06:01:47 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  8 06:01:47 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:01:47 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae8003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:01:47 np0005475493 podman[234341]: 2025-10-08 10:01:47.888074761 +0000 UTC m=+0.072373821 container exec 1c113ffcb0d41c6337c256a4892f13e82bdea6ca8062b10324516aa631301ea5 (image=quay.io/ceph/haproxy:2.3, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-haproxy-nfs-cephfs-compute-0-cwhopp)
Oct  8 06:01:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] _maybe_adjust
Oct  8 06:01:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:01:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  8 06:01:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:01:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 06:01:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:01:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 06:01:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:01:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 06:01:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:01:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 06:01:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:01:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  8 06:01:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:01:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 06:01:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:01:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct  8 06:01:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:01:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  8 06:01:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:01:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 06:01:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:01:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  8 06:01:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:01:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  8 06:01:47 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 06:01:47 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 06:01:47 np0005475493 podman[234341]: 2025-10-08 10:01:47.900434027 +0000 UTC m=+0.084733067 container exec_died 1c113ffcb0d41c6337c256a4892f13e82bdea6ca8062b10324516aa631301ea5 (image=quay.io/ceph/haproxy:2.3, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-haproxy-nfs-cephfs-compute-0-cwhopp)
Oct  8 06:01:47 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:01:47 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 06:01:47 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:01:47.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 06:01:47 np0005475493 python3.9[234319]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  8 06:01:47 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  8 06:01:47 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  8 06:01:47 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  8 06:01:47 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  8 06:01:47 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  8 06:01:48 np0005475493 podman[234432]: 2025-10-08 10:01:48.09425556 +0000 UTC m=+0.046098659 container exec 5814788fb4b63196d097e0c60082efdca22825a3ba337ddec5c99170a1bfd89d (image=quay.io/ceph/keepalived:2.2.4, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-keepalived-nfs-cephfs-compute-0-ekerbw, distribution-scope=public, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=keepalived for Ceph, summary=Provides keepalived on RHEL 9 for Ceph., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.k8s.display-name=Keepalived on RHEL 9, version=2.2.4, io.openshift.tags=Ceph keepalived, name=keepalived, vcs-type=git, release=1793, build-date=2023-02-22T09:23:20, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=keepalived-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, io.buildah.version=1.28.2, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9)
Oct  8 06:01:48 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 06:01:48 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 06:01:48 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 06:01:48 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 06:01:48 np0005475493 podman[234432]: 2025-10-08 10:01:48.172671813 +0000 UTC m=+0.124514882 container exec_died 5814788fb4b63196d097e0c60082efdca22825a3ba337ddec5c99170a1bfd89d (image=quay.io/ceph/keepalived:2.2.4, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-keepalived-nfs-cephfs-compute-0-ekerbw, io.openshift.tags=Ceph keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, description=keepalived for Ceph, io.buildah.version=1.28.2, summary=Provides keepalived on RHEL 9 for Ceph., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.k8s.display-name=Keepalived on RHEL 9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, version=2.2.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=keepalived-container, release=1793, build-date=2023-02-22T09:23:20, distribution-scope=public, io.openshift.expose-services=, name=keepalived, vcs-type=git, architecture=x86_64, vendor=Red Hat, Inc.)
Oct  8 06:01:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  8 06:01:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  8 06:01:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  8 06:01:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  8 06:01:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  8 06:01:48 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:01:48 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:01:48 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:01:48.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:01:48 np0005475493 podman[234560]: 2025-10-08 10:01:48.380254965 +0000 UTC m=+0.062493393 container exec feb968c21a5fda3cd2e7b86379d5576dbe3dadfd4cb62b92318e18ddbaaafb75 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  8 06:01:48 np0005475493 podman[234560]: 2025-10-08 10:01:48.410503965 +0000 UTC m=+0.092742383 container exec_died feb968c21a5fda3cd2e7b86379d5576dbe3dadfd4cb62b92318e18ddbaaafb75 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  8 06:01:48 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:01:48 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae4003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:01:48 np0005475493 podman[234698]: 2025-10-08 10:01:48.623418089 +0000 UTC m=+0.051012136 container exec 73efcf403aa6fcb4b04e1ef714b7bf4169b576e4a4c1b34cdc0b458f62db65fd (image=quay.io/ceph/grafana:10.4.0, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Oct  8 06:01:48 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:01:48 np0005475493 python3.9[234677]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  8 06:01:48 np0005475493 podman[234698]: 2025-10-08 10:01:48.781151054 +0000 UTC m=+0.208745111 container exec_died 73efcf403aa6fcb4b04e1ef714b7bf4169b576e4a4c1b34cdc0b458f62db65fd (image=quay.io/ceph/grafana:10.4.0, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Oct  8 06:01:48 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v508: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Oct  8 06:01:48 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:01:48.923Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:01:49 np0005475493 podman[234912]: 2025-10-08 10:01:49.172043943 +0000 UTC m=+0.056975598 container exec 50d7285a7766fc915688c3cc737e186f9ad2230d7b0b7e759880375ad996bb6c (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  8 06:01:49 np0005475493 podman[234912]: 2025-10-08 10:01:49.203722857 +0000 UTC m=+0.088654502 container exec_died 50d7285a7766fc915688c3cc737e186f9ad2230d7b0b7e759880375ad996bb6c (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  8 06:01:49 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  8 06:01:49 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:01:49 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  8 06:01:49 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:01:49 np0005475493 python3.9[235006]: ansible-ansible.legacy.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  8 06:01:49 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:01:49 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faaf0004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:01:49 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:01:49 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab08004620 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:01:49 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  8 06:01:49 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  8 06:01:49 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct  8 06:01:49 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  8 06:01:49 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v509: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 465 B/s wr, 2 op/s
Oct  8 06:01:49 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct  8 06:01:49 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:01:49 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct  8 06:01:49 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:01:49 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:01:49 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:01:49 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:01:49.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:01:49 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct  8 06:01:49 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  8 06:01:49 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct  8 06:01:49 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  8 06:01:49 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  8 06:01:49 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  8 06:01:49 np0005475493 python3.9[235198]: ansible-ansible.legacy.copy Invoked with dest=/etc/multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759917708.963324-1514-262657878425471/.source.conf _original_basename=multipath.conf follow=False checksum=bf02ab264d3d648048a81f3bacec8bc58db93162 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 06:01:50 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:01:50 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:01:50 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:01:50.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:01:50 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:01:50 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:01:50 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  8 06:01:50 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:01:50 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:01:50 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  8 06:01:50 np0005475493 ceph-mon[73572]: log_channel(cluster) log [INF] : Health check cleared: CEPHADM_FAILED_DAEMON (was: 1 failed cephadm daemon(s))
Oct  8 06:01:50 np0005475493 ceph-mon[73572]: log_channel(cluster) log [INF] : Cluster is now healthy
Oct  8 06:01:50 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:01:50 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae8003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:01:50 np0005475493 podman[235329]: 2025-10-08 10:01:50.518564669 +0000 UTC m=+0.062477564 container create f735babec46bed41943a0dd079c7a30f0f8c523846c91206226d1e99db3160b7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_galois, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  8 06:01:50 np0005475493 systemd[1]: Started libpod-conmon-f735babec46bed41943a0dd079c7a30f0f8c523846c91206226d1e99db3160b7.scope.
Oct  8 06:01:50 np0005475493 systemd[1]: Started libcrun container.
Oct  8 06:01:50 np0005475493 podman[235329]: 2025-10-08 10:01:50.495098566 +0000 UTC m=+0.039011551 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 06:01:50 np0005475493 podman[235329]: 2025-10-08 10:01:50.596229078 +0000 UTC m=+0.140141993 container init f735babec46bed41943a0dd079c7a30f0f8c523846c91206226d1e99db3160b7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_galois, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct  8 06:01:50 np0005475493 podman[235329]: 2025-10-08 10:01:50.603251902 +0000 UTC m=+0.147164797 container start f735babec46bed41943a0dd079c7a30f0f8c523846c91206226d1e99db3160b7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_galois, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct  8 06:01:50 np0005475493 adoring_galois[235368]: 167 167
Oct  8 06:01:50 np0005475493 systemd[1]: libpod-f735babec46bed41943a0dd079c7a30f0f8c523846c91206226d1e99db3160b7.scope: Deactivated successfully.
Oct  8 06:01:50 np0005475493 podman[235329]: 2025-10-08 10:01:50.609907606 +0000 UTC m=+0.153820581 container attach f735babec46bed41943a0dd079c7a30f0f8c523846c91206226d1e99db3160b7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_galois, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  8 06:01:50 np0005475493 podman[235329]: 2025-10-08 10:01:50.610412922 +0000 UTC m=+0.154325817 container died f735babec46bed41943a0dd079c7a30f0f8c523846c91206226d1e99db3160b7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_galois, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  8 06:01:50 np0005475493 systemd[1]: var-lib-containers-storage-overlay-a871b92c64683626e6ecc0ab96aff298af7fd416fbe9f540d6df6c652922bcd3-merged.mount: Deactivated successfully.
Oct  8 06:01:50 np0005475493 podman[235329]: 2025-10-08 10:01:50.656656144 +0000 UTC m=+0.200569039 container remove f735babec46bed41943a0dd079c7a30f0f8c523846c91206226d1e99db3160b7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_galois, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  8 06:01:50 np0005475493 systemd[1]: libpod-conmon-f735babec46bed41943a0dd079c7a30f0f8c523846c91206226d1e99db3160b7.scope: Deactivated successfully.
Oct  8 06:01:50 np0005475493 podman[235423]: 2025-10-08 10:01:50.814747181 +0000 UTC m=+0.051094829 container create 369632d29074bc16f3363972d8b5d71a8f119a6af4ad83e2dff2d953a41232ae (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_montalcini, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  8 06:01:50 np0005475493 systemd[1]: Started libpod-conmon-369632d29074bc16f3363972d8b5d71a8f119a6af4ad83e2dff2d953a41232ae.scope.
Oct  8 06:01:50 np0005475493 systemd[1]: Started libcrun container.
Oct  8 06:01:50 np0005475493 podman[235423]: 2025-10-08 10:01:50.791447234 +0000 UTC m=+0.027794902 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 06:01:50 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cff25b82e7313c43ac903bd987a355cdde1c9d26358f86bbc04251a957e13d13/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  8 06:01:50 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cff25b82e7313c43ac903bd987a355cdde1c9d26358f86bbc04251a957e13d13/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 06:01:50 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cff25b82e7313c43ac903bd987a355cdde1c9d26358f86bbc04251a957e13d13/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 06:01:50 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cff25b82e7313c43ac903bd987a355cdde1c9d26358f86bbc04251a957e13d13/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  8 06:01:50 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cff25b82e7313c43ac903bd987a355cdde1c9d26358f86bbc04251a957e13d13/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  8 06:01:50 np0005475493 podman[235423]: 2025-10-08 10:01:50.915017775 +0000 UTC m=+0.151365433 container init 369632d29074bc16f3363972d8b5d71a8f119a6af4ad83e2dff2d953a41232ae (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_montalcini, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True)
Oct  8 06:01:50 np0005475493 podman[235423]: 2025-10-08 10:01:50.922493844 +0000 UTC m=+0.158841482 container start 369632d29074bc16f3363972d8b5d71a8f119a6af4ad83e2dff2d953a41232ae (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_montalcini, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct  8 06:01:50 np0005475493 podman[235423]: 2025-10-08 10:01:50.925678176 +0000 UTC m=+0.162025844 container attach 369632d29074bc16f3363972d8b5d71a8f119a6af4ad83e2dff2d953a41232ae (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_montalcini, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  8 06:01:51 np0005475493 python3.9[235519]: ansible-ansible.legacy.command Invoked with _raw_params=grep -q '^blacklist\s*{' /etc/multipath.conf _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  8 06:01:51 np0005475493 gifted_montalcini[235467]: --> passed data devices: 0 physical, 1 LVM
Oct  8 06:01:51 np0005475493 gifted_montalcini[235467]: --> All data devices are unavailable
Oct  8 06:01:51 np0005475493 systemd[1]: libpod-369632d29074bc16f3363972d8b5d71a8f119a6af4ad83e2dff2d953a41232ae.scope: Deactivated successfully.
Oct  8 06:01:51 np0005475493 podman[235423]: 2025-10-08 10:01:51.272846943 +0000 UTC m=+0.509194591 container died 369632d29074bc16f3363972d8b5d71a8f119a6af4ad83e2dff2d953a41232ae (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_montalcini, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  8 06:01:51 np0005475493 systemd[1]: var-lib-containers-storage-overlay-cff25b82e7313c43ac903bd987a355cdde1c9d26358f86bbc04251a957e13d13-merged.mount: Deactivated successfully.
Oct  8 06:01:51 np0005475493 ceph-mon[73572]: Health check cleared: CEPHADM_FAILED_DAEMON (was: 1 failed cephadm daemon(s))
Oct  8 06:01:51 np0005475493 ceph-mon[73572]: Cluster is now healthy
Oct  8 06:01:51 np0005475493 podman[235423]: 2025-10-08 10:01:51.321227204 +0000 UTC m=+0.557574842 container remove 369632d29074bc16f3363972d8b5d71a8f119a6af4ad83e2dff2d953a41232ae (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_montalcini, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True)
Oct  8 06:01:51 np0005475493 systemd[1]: libpod-conmon-369632d29074bc16f3363972d8b5d71a8f119a6af4ad83e2dff2d953a41232ae.scope: Deactivated successfully.
Oct  8 06:01:51 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-haproxy-nfs-cephfs-compute-0-cwhopp[96588]: [WARNING] 280/100151 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct  8 06:01:51 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:01:51 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae4003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:01:51 np0005475493 python3.9[235744]: ansible-ansible.builtin.lineinfile Invoked with line=blacklist { path=/etc/multipath.conf state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 06:01:51 np0005475493 podman[235789]: 2025-10-08 10:01:51.83653036 +0000 UTC m=+0.038208456 container create f2cf00980d208870b0ffa9d4357b0ad542e82c6eee1b941d1d289cb798100f65 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_ride, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Oct  8 06:01:51 np0005475493 systemd[1]: Started libpod-conmon-f2cf00980d208870b0ffa9d4357b0ad542e82c6eee1b941d1d289cb798100f65.scope.
Oct  8 06:01:51 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:01:51 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faaf0004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:01:51 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v510: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 465 B/s wr, 2 op/s
Oct  8 06:01:51 np0005475493 systemd[1]: Started libcrun container.
Oct  8 06:01:51 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:01:51 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:01:51 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:01:51.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:01:51 np0005475493 podman[235789]: 2025-10-08 10:01:51.820723083 +0000 UTC m=+0.022401189 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 06:01:51 np0005475493 podman[235789]: 2025-10-08 10:01:51.916789512 +0000 UTC m=+0.118467628 container init f2cf00980d208870b0ffa9d4357b0ad542e82c6eee1b941d1d289cb798100f65 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_ride, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Oct  8 06:01:51 np0005475493 podman[235789]: 2025-10-08 10:01:51.924053045 +0000 UTC m=+0.125731151 container start f2cf00980d208870b0ffa9d4357b0ad542e82c6eee1b941d1d289cb798100f65 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_ride, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Oct  8 06:01:51 np0005475493 relaxed_ride[235818]: 167 167
Oct  8 06:01:51 np0005475493 podman[235789]: 2025-10-08 10:01:51.928968512 +0000 UTC m=+0.130646628 container attach f2cf00980d208870b0ffa9d4357b0ad542e82c6eee1b941d1d289cb798100f65 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_ride, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  8 06:01:51 np0005475493 systemd[1]: libpod-f2cf00980d208870b0ffa9d4357b0ad542e82c6eee1b941d1d289cb798100f65.scope: Deactivated successfully.
Oct  8 06:01:51 np0005475493 podman[235789]: 2025-10-08 10:01:51.930715578 +0000 UTC m=+0.132393674 container died f2cf00980d208870b0ffa9d4357b0ad542e82c6eee1b941d1d289cb798100f65 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_ride, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  8 06:01:51 np0005475493 systemd[1]: var-lib-containers-storage-overlay-b52cac6e23e54a15ef15e27f8bdfcd08abb90a89af9cf4868ab4c25f7d03e306-merged.mount: Deactivated successfully.
Oct  8 06:01:51 np0005475493 podman[235789]: 2025-10-08 10:01:51.973132847 +0000 UTC m=+0.174810933 container remove f2cf00980d208870b0ffa9d4357b0ad542e82c6eee1b941d1d289cb798100f65 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_ride, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  8 06:01:51 np0005475493 systemd[1]: libpod-conmon-f2cf00980d208870b0ffa9d4357b0ad542e82c6eee1b941d1d289cb798100f65.scope: Deactivated successfully.
Oct  8 06:01:52 np0005475493 podman[235876]: 2025-10-08 10:01:52.119967714 +0000 UTC m=+0.039408124 container create b988ed17cacd25a74b3e08ece301942943506a007b35e010d2854d3f78f2325c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_villani, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Oct  8 06:01:52 np0005475493 systemd[1]: Started libpod-conmon-b988ed17cacd25a74b3e08ece301942943506a007b35e010d2854d3f78f2325c.scope.
Oct  8 06:01:52 np0005475493 systemd[1]: Started libcrun container.
Oct  8 06:01:52 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4045e595090949fbc0d044e65b9b49b6d965bafa5fc8a3931ca3765a2f295e4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  8 06:01:52 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4045e595090949fbc0d044e65b9b49b6d965bafa5fc8a3931ca3765a2f295e4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 06:01:52 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4045e595090949fbc0d044e65b9b49b6d965bafa5fc8a3931ca3765a2f295e4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 06:01:52 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4045e595090949fbc0d044e65b9b49b6d965bafa5fc8a3931ca3765a2f295e4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  8 06:01:52 np0005475493 podman[235876]: 2025-10-08 10:01:52.195763423 +0000 UTC m=+0.115203853 container init b988ed17cacd25a74b3e08ece301942943506a007b35e010d2854d3f78f2325c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_villani, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Oct  8 06:01:52 np0005475493 podman[235876]: 2025-10-08 10:01:52.104010402 +0000 UTC m=+0.023450832 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 06:01:52 np0005475493 podman[235876]: 2025-10-08 10:01:52.202160368 +0000 UTC m=+0.121600788 container start b988ed17cacd25a74b3e08ece301942943506a007b35e010d2854d3f78f2325c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_villani, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct  8 06:01:52 np0005475493 podman[235876]: 2025-10-08 10:01:52.20598567 +0000 UTC m=+0.125426100 container attach b988ed17cacd25a74b3e08ece301942943506a007b35e010d2854d3f78f2325c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_villani, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325)
Oct  8 06:01:52 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:01:52 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:01:52 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:01:52.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:01:52 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:01:52 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab0c000df0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:01:52 np0005475493 determined_villani[235922]: {
Oct  8 06:01:52 np0005475493 determined_villani[235922]:    "1": [
Oct  8 06:01:52 np0005475493 determined_villani[235922]:        {
Oct  8 06:01:52 np0005475493 determined_villani[235922]:            "devices": [
Oct  8 06:01:52 np0005475493 determined_villani[235922]:                "/dev/loop3"
Oct  8 06:01:52 np0005475493 determined_villani[235922]:            ],
Oct  8 06:01:52 np0005475493 determined_villani[235922]:            "lv_name": "ceph_lv0",
Oct  8 06:01:52 np0005475493 determined_villani[235922]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  8 06:01:52 np0005475493 determined_villani[235922]:            "lv_size": "21470642176",
Oct  8 06:01:52 np0005475493 determined_villani[235922]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=787292cc-8154-50c4-9e00-e9be3e817149,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=85fe3e7b-5e0f-4a19-934c-310215b2e933,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct  8 06:01:52 np0005475493 determined_villani[235922]:            "lv_uuid": "znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj",
Oct  8 06:01:52 np0005475493 determined_villani[235922]:            "name": "ceph_lv0",
Oct  8 06:01:52 np0005475493 determined_villani[235922]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  8 06:01:52 np0005475493 determined_villani[235922]:            "tags": {
Oct  8 06:01:52 np0005475493 determined_villani[235922]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  8 06:01:52 np0005475493 determined_villani[235922]:                "ceph.block_uuid": "znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj",
Oct  8 06:01:52 np0005475493 determined_villani[235922]:                "ceph.cephx_lockbox_secret": "",
Oct  8 06:01:52 np0005475493 determined_villani[235922]:                "ceph.cluster_fsid": "787292cc-8154-50c4-9e00-e9be3e817149",
Oct  8 06:01:52 np0005475493 determined_villani[235922]:                "ceph.cluster_name": "ceph",
Oct  8 06:01:52 np0005475493 determined_villani[235922]:                "ceph.crush_device_class": "",
Oct  8 06:01:52 np0005475493 determined_villani[235922]:                "ceph.encrypted": "0",
Oct  8 06:01:52 np0005475493 determined_villani[235922]:                "ceph.osd_fsid": "85fe3e7b-5e0f-4a19-934c-310215b2e933",
Oct  8 06:01:52 np0005475493 determined_villani[235922]:                "ceph.osd_id": "1",
Oct  8 06:01:52 np0005475493 determined_villani[235922]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  8 06:01:52 np0005475493 determined_villani[235922]:                "ceph.type": "block",
Oct  8 06:01:52 np0005475493 determined_villani[235922]:                "ceph.vdo": "0",
Oct  8 06:01:52 np0005475493 determined_villani[235922]:                "ceph.with_tpm": "0"
Oct  8 06:01:52 np0005475493 determined_villani[235922]:            },
Oct  8 06:01:52 np0005475493 determined_villani[235922]:            "type": "block",
Oct  8 06:01:52 np0005475493 determined_villani[235922]:            "vg_name": "ceph_vg0"
Oct  8 06:01:52 np0005475493 determined_villani[235922]:        }
Oct  8 06:01:52 np0005475493 determined_villani[235922]:    ]
Oct  8 06:01:52 np0005475493 determined_villani[235922]: }
Oct  8 06:01:52 np0005475493 systemd[1]: libpod-b988ed17cacd25a74b3e08ece301942943506a007b35e010d2854d3f78f2325c.scope: Deactivated successfully.
Oct  8 06:01:52 np0005475493 conmon[235922]: conmon b988ed17cacd25a74b3e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b988ed17cacd25a74b3e08ece301942943506a007b35e010d2854d3f78f2325c.scope/container/memory.events
Oct  8 06:01:52 np0005475493 podman[236005]: 2025-10-08 10:01:52.55665972 +0000 UTC m=+0.024330852 container died b988ed17cacd25a74b3e08ece301942943506a007b35e010d2854d3f78f2325c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_villani, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct  8 06:01:52 np0005475493 systemd[1]: var-lib-containers-storage-overlay-f4045e595090949fbc0d044e65b9b49b6d965bafa5fc8a3931ca3765a2f295e4-merged.mount: Deactivated successfully.
Oct  8 06:01:52 np0005475493 podman[236005]: 2025-10-08 10:01:52.600161334 +0000 UTC m=+0.067832456 container remove b988ed17cacd25a74b3e08ece301942943506a007b35e010d2854d3f78f2325c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_villani, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  8 06:01:52 np0005475493 systemd[1]: libpod-conmon-b988ed17cacd25a74b3e08ece301942943506a007b35e010d2854d3f78f2325c.scope: Deactivated successfully.
Oct  8 06:01:52 np0005475493 python3.9[236012]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^(blacklist {) replace=\1\n} backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 06:01:53 np0005475493 podman[236232]: 2025-10-08 10:01:53.184679727 +0000 UTC m=+0.038930598 container create b07fd854611db22bad9e41d4fab8b1b12dd528b3f02554bace351819e41a02a0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_hermann, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct  8 06:01:53 np0005475493 systemd[1]: Started libpod-conmon-b07fd854611db22bad9e41d4fab8b1b12dd528b3f02554bace351819e41a02a0.scope.
Oct  8 06:01:53 np0005475493 systemd[1]: Started libcrun container.
Oct  8 06:01:53 np0005475493 podman[236232]: 2025-10-08 10:01:53.253980209 +0000 UTC m=+0.108231120 container init b07fd854611db22bad9e41d4fab8b1b12dd528b3f02554bace351819e41a02a0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_hermann, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  8 06:01:53 np0005475493 podman[236232]: 2025-10-08 10:01:53.262081499 +0000 UTC m=+0.116332360 container start b07fd854611db22bad9e41d4fab8b1b12dd528b3f02554bace351819e41a02a0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_hermann, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  8 06:01:53 np0005475493 podman[236232]: 2025-10-08 10:01:53.166784954 +0000 UTC m=+0.021035825 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 06:01:53 np0005475493 podman[236232]: 2025-10-08 10:01:53.266019964 +0000 UTC m=+0.120270885 container attach b07fd854611db22bad9e41d4fab8b1b12dd528b3f02554bace351819e41a02a0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_hermann, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  8 06:01:53 np0005475493 infallible_hermann[236279]: 167 167
Oct  8 06:01:53 np0005475493 systemd[1]: libpod-b07fd854611db22bad9e41d4fab8b1b12dd528b3f02554bace351819e41a02a0.scope: Deactivated successfully.
Oct  8 06:01:53 np0005475493 podman[236232]: 2025-10-08 10:01:53.268778703 +0000 UTC m=+0.123029564 container died b07fd854611db22bad9e41d4fab8b1b12dd528b3f02554bace351819e41a02a0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_hermann, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  8 06:01:53 np0005475493 systemd[1]: var-lib-containers-storage-overlay-ba729483357026e0165e847409d59cd109b38652e72144f1a7989a3a779f1e72-merged.mount: Deactivated successfully.
Oct  8 06:01:53 np0005475493 podman[236232]: 2025-10-08 10:01:53.304527309 +0000 UTC m=+0.158778190 container remove b07fd854611db22bad9e41d4fab8b1b12dd528b3f02554bace351819e41a02a0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_hermann, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  8 06:01:53 np0005475493 systemd[1]: libpod-conmon-b07fd854611db22bad9e41d4fab8b1b12dd528b3f02554bace351819e41a02a0.scope: Deactivated successfully.
Oct  8 06:01:53 np0005475493 podman[236299]: 2025-10-08 10:01:53.44249521 +0000 UTC m=+0.089385745 container health_status 750e81e9592502f2fa35d047c8ae9bd75eb38ed415148db558454f5281397e7f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct  8 06:01:53 np0005475493 python3.9[236283]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^blacklist\s*{\n[\s]+devnode \"\.\*\" replace=blacklist { backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 06:01:53 np0005475493 podman[236325]: 2025-10-08 10:01:53.473119412 +0000 UTC m=+0.050276222 container create 506db276724e2142e3cea81345a114918c4c84cff1eeef0232ced7f738acd03f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_booth, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct  8 06:01:53 np0005475493 systemd[1]: Started libpod-conmon-506db276724e2142e3cea81345a114918c4c84cff1eeef0232ced7f738acd03f.scope.
Oct  8 06:01:53 np0005475493 podman[236325]: 2025-10-08 10:01:53.453174352 +0000 UTC m=+0.030331172 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 06:01:53 np0005475493 systemd[1]: Started libcrun container.
Oct  8 06:01:53 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b51617eb92ec3e26aa6266b234a05bb270a23fd6eb47f62e7fddd05278a085c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  8 06:01:53 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b51617eb92ec3e26aa6266b234a05bb270a23fd6eb47f62e7fddd05278a085c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 06:01:53 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b51617eb92ec3e26aa6266b234a05bb270a23fd6eb47f62e7fddd05278a085c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 06:01:53 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b51617eb92ec3e26aa6266b234a05bb270a23fd6eb47f62e7fddd05278a085c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  8 06:01:53 np0005475493 podman[236325]: 2025-10-08 10:01:53.56166241 +0000 UTC m=+0.138819200 container init 506db276724e2142e3cea81345a114918c4c84cff1eeef0232ced7f738acd03f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_booth, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct  8 06:01:53 np0005475493 podman[236325]: 2025-10-08 10:01:53.569477361 +0000 UTC m=+0.146634131 container start 506db276724e2142e3cea81345a114918c4c84cff1eeef0232ced7f738acd03f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_booth, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  8 06:01:53 np0005475493 podman[236325]: 2025-10-08 10:01:53.574268064 +0000 UTC m=+0.151424834 container attach 506db276724e2142e3cea81345a114918c4c84cff1eeef0232ced7f738acd03f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_booth, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  8 06:01:53 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:01:53 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:01:53 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae8003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:01:53 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:01:53 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae4003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:01:53 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v511: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 465 B/s wr, 2 op/s
Oct  8 06:01:53 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:01:53 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 06:01:53 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:01:53.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 06:01:54 np0005475493 lvm[236573]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct  8 06:01:54 np0005475493 lvm[236573]: VG ceph_vg0 finished
Oct  8 06:01:54 np0005475493 recursing_booth[236370]: {}
Oct  8 06:01:54 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:01:54 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:01:54 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:01:54.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:01:54 np0005475493 systemd[1]: libpod-506db276724e2142e3cea81345a114918c4c84cff1eeef0232ced7f738acd03f.scope: Deactivated successfully.
Oct  8 06:01:54 np0005475493 podman[236325]: 2025-10-08 10:01:54.298629149 +0000 UTC m=+0.875785929 container died 506db276724e2142e3cea81345a114918c4c84cff1eeef0232ced7f738acd03f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_booth, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  8 06:01:54 np0005475493 systemd[1]: libpod-506db276724e2142e3cea81345a114918c4c84cff1eeef0232ced7f738acd03f.scope: Consumed 1.146s CPU time.
Oct  8 06:01:54 np0005475493 systemd[1]: var-lib-containers-storage-overlay-3b51617eb92ec3e26aa6266b234a05bb270a23fd6eb47f62e7fddd05278a085c-merged.mount: Deactivated successfully.
Oct  8 06:01:54 np0005475493 podman[236325]: 2025-10-08 10:01:54.339751098 +0000 UTC m=+0.916907868 container remove 506db276724e2142e3cea81345a114918c4c84cff1eeef0232ced7f738acd03f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_booth, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  8 06:01:54 np0005475493 systemd[1]: libpod-conmon-506db276724e2142e3cea81345a114918c4c84cff1eeef0232ced7f738acd03f.scope: Deactivated successfully.
Oct  8 06:01:54 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  8 06:01:54 np0005475493 python3.9[236574]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        find_multipaths yes path=/etc/multipath.conf regexp=^\s+find_multipaths state=present backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 06:01:54 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:01:54 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  8 06:01:54 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:01:54 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:01:54 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faaf0004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:01:54 np0005475493 systemd[1]: virtnodedevd.service: Deactivated successfully.
Oct  8 06:01:54 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:01:54 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:01:55 np0005475493 python3.9[236767]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        recheck_wwid yes path=/etc/multipath.conf regexp=^\s+recheck_wwid state=present backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 06:01:55 np0005475493 python3.9[236920]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        skip_kpartx yes path=/etc/multipath.conf regexp=^\s+skip_kpartx state=present backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 06:01:55 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:01:55] "GET /metrics HTTP/1.1" 200 48346 "" "Prometheus/2.51.0"
Oct  8 06:01:55 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:01:55] "GET /metrics HTTP/1.1" 200 48346 "" "Prometheus/2.51.0"
Oct  8 06:01:55 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:01:55 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab0c001d70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:01:55 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:01:55 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae8003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:01:55 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v512: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 279 B/s rd, 93 B/s wr, 0 op/s
Oct  8 06:01:55 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:01:55 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:01:55 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:01:55.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:01:55 np0005475493 systemd[1]: virtproxyd.service: Deactivated successfully.
Oct  8 06:01:56 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:01:56 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 06:01:56 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:01:56.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 06:01:56 np0005475493 python3.9[237074]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        user_friendly_names no path=/etc/multipath.conf regexp=^\s+user_friendly_names state=present backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 06:01:56 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:01:56 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae4003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:01:57 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:01:57.058Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:01:57 np0005475493 python3.9[237226]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  8 06:01:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:01:57.399 163175 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  8 06:01:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:01:57.400 163175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  8 06:01:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:01:57.400 163175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  8 06:01:57 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:01:57 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faaf0004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:01:57 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:01:57 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab0c001d70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:01:57 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v513: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 279 B/s rd, 93 B/s wr, 0 op/s
Oct  8 06:01:57 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:01:57 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 06:01:57 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:01:57.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 06:01:58 np0005475493 python3.9[237381]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/multipath/.multipath_restart_required state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 06:01:58 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:01:58 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:01:58 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:01:58.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:01:58 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:01:58 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae8003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:01:58 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:01:58 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:01:58.924Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:01:58 np0005475493 python3.9[237534]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  8 06:01:59 np0005475493 python3.9[237687]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  8 06:01:59 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:01:59 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae4003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:01:59 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v514: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 279 B/s rd, 0 op/s
Oct  8 06:01:59 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:01:59 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faaf0004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:01:59 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:01:59 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 06:01:59 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:01:59.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 06:02:00 np0005475493 python3.9[237766]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  8 06:02:00 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:02:00 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 06:02:00 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:02:00.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 06:02:00 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:02:00 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faaf0004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:02:00 np0005475493 podman[237890]: 2025-10-08 10:02:00.815871208 +0000 UTC m=+0.063014591 container health_status 96c17e36c3e57b043a764fc4d64e20610b8683e4881cc1857643eca7aeb37784 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Oct  8 06:02:01 np0005475493 python3.9[237937]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  8 06:02:01 np0005475493 python3.9[238016]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  8 06:02:01 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:02:01 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae8003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:02:01 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v515: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct  8 06:02:01 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:02:01 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab0c001d70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:02:01 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:02:01 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:02:01 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:02:01.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:02:02 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:02:02 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 06:02:02 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:02:02.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 06:02:02 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:02:02 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae4003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:02:02 np0005475493 python3.9[238169]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 06:02:02 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/prometheus/health_history}] v 0)
Oct  8 06:02:02 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:02:02 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  8 06:02:02 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  8 06:02:02 np0005475493 podman[238237]: 2025-10-08 10:02:02.936665549 +0000 UTC m=+0.080722878 container health_status 2ac58bf9d2c7421f24478941b0f23af12cc30298ac84467db16bf52ecb1157c3 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, container_name=iscsid)
Oct  8 06:02:03 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:02:03 np0005475493 python3.9[238339]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  8 06:02:03 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:02:03 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:02:03 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab0c001d70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:02:03 np0005475493 python3.9[238417]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 06:02:03 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v516: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct  8 06:02:03 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:02:03 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faaf0004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:02:03 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:02:03 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:02:03 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:02:03.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:02:04 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:02:04 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:02:04 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:02:04.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:02:04 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:02:04 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab0c001d70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:02:04 np0005475493 python3.9[238595]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  8 06:02:05 np0005475493 python3.9[238673]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 06:02:05 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:02:05] "GET /metrics HTTP/1.1" 200 48342 "" "Prometheus/2.51.0"
Oct  8 06:02:05 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:02:05] "GET /metrics HTTP/1.1" 200 48342 "" "Prometheus/2.51.0"
Oct  8 06:02:05 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:02:05 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae8003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:02:05 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v517: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct  8 06:02:05 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:02:05 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae4003c50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:02:05 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:02:05 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 06:02:05 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:02:05.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 06:02:06 np0005475493 python3.9[238826]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  8 06:02:06 np0005475493 systemd[1]: Reloading.
Oct  8 06:02:06 np0005475493 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  8 06:02:06 np0005475493 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  8 06:02:06 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:02:06 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:02:06 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:02:06.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:02:06 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:02:06 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faaf0004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:02:06 np0005475493 systemd[1]: virtsecretd.service: Deactivated successfully.
Oct  8 06:02:06 np0005475493 systemd[1]: virtqemud.service: Deactivated successfully.
Oct  8 06:02:07 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:02:07.059Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct  8 06:02:07 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:02:07.059Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct  8 06:02:07 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:02:07.059Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:02:07 np0005475493 python3.9[239018]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  8 06:02:07 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-haproxy-nfs-cephfs-compute-0-cwhopp[96588]: [WARNING] 280/100207 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct  8 06:02:07 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:02:07 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab0c009990 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:02:07 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v518: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct  8 06:02:07 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:02:07 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae8003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:02:07 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:02:07 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:02:07 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:02:07.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:02:07 np0005475493 python3.9[239096]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 06:02:08 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:02:08 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:02:08 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:02:08.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:02:08 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:02:08 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faaf0004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:02:08 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:02:08 np0005475493 python3.9[239249]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  8 06:02:08 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:02:08.925Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:02:08 np0005475493 ceph-mgr[73869]: [dashboard INFO request] [192.168.122.100:55410] [POST] [200] [0.002s] [4.0B] [d3cbdf7b-5643-40ed-970d-10daa4db13bd] /api/prometheus_receiver
Oct  8 06:02:09 np0005475493 python3.9[239327]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 06:02:09 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:02:09 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab0c009990 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:02:09 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v519: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Oct  8 06:02:09 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:02:09 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae8003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:02:09 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:02:09 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:02:09 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:02:09.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:02:10 np0005475493 python3.9[239480]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  8 06:02:10 np0005475493 systemd[1]: Reloading.
Oct  8 06:02:10 np0005475493 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  8 06:02:10 np0005475493 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  8 06:02:10 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:02:10 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:02:10 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:02:10.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:02:10 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:02:10 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae8003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:02:10 np0005475493 systemd[1]: Starting Create netns directory...
Oct  8 06:02:10 np0005475493 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct  8 06:02:10 np0005475493 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct  8 06:02:10 np0005475493 systemd[1]: Finished Create netns directory.
Oct  8 06:02:11 np0005475493 python3.9[239676]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  8 06:02:11 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:02:11 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faaf0004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:02:11 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v520: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct  8 06:02:11 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:02:11 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab0c009990 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:02:11 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:02:11 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 06:02:11 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:02:11.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 06:02:12 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:02:12 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:02:12 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:02:12.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:02:12 np0005475493 python3.9[239829]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/multipathd/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  8 06:02:12 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:02:12 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab0c009990 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:02:12 np0005475493 python3.9[239952]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/multipathd/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759917731.8395934-2135-228740585586635/.source _original_basename=healthcheck follow=False checksum=af9d0c1c8f3cb0e30ce9609be9d5b01924d0d23f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct  8 06:02:13 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:02:13 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:02:13 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae8003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:02:13 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v521: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Oct  8 06:02:13 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:02:13 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faaf0004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:02:13 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:02:13 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:02:13 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:02:13.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:02:13 np0005475493 python3.9[240105]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  8 06:02:14 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:02:14 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:02:14 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:02:14.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:02:14 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:02:14 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab0c009990 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:02:14 np0005475493 python3.9[240258]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/multipathd.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  8 06:02:15 np0005475493 python3.9[240382]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/multipathd.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1759917734.2465703-2210-172891251407674/.source.json _original_basename=.f22asbx2 follow=False checksum=3f7959ee8ac9757398adcc451c3b416c957d7c14 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 06:02:15 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:02:15] "GET /metrics HTTP/1.1" 200 48342 "" "Prometheus/2.51.0"
Oct  8 06:02:15 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:02:15] "GET /metrics HTTP/1.1" 200 48342 "" "Prometheus/2.51.0"
Oct  8 06:02:15 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:02:15 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab0c009990 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:02:15 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v522: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Oct  8 06:02:15 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:02:15 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae8003c30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:02:15 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:02:15 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  8 06:02:15 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:02:15 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:02:15 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:02:15.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:02:16 np0005475493 python3.9[240535]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/multipathd state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 06:02:16 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:02:16 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:02:16 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:02:16.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:02:16 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:02:16 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae8003c30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:02:17 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:02:17.060Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct  8 06:02:17 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:02:17.061Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct  8 06:02:17 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:02:17.061Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:02:17 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:02:17 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab0c009990 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:02:17 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  8 06:02:17 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  8 06:02:17 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v523: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Oct  8 06:02:17 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 06:02:17 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 06:02:17 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:02:17 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae4003ed0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:02:17 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:02:17 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:02:17 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:02:17.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:02:18 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 06:02:18 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 06:02:18 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 06:02:18 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 06:02:18 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:02:18 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 06:02:18 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:02:18.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 06:02:18 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:02:18 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faaf0004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:02:18 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:02:18 np0005475493 python3.9[240964]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/multipathd config_pattern=*.json debug=False
Oct  8 06:02:18 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:02:18 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  8 06:02:18 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:02:18 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 06:02:19 np0005475493 python3.9[241117]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct  8 06:02:19 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:02:19 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae8003dd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:02:19 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v524: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Oct  8 06:02:19 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:02:19 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab0c009990 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:02:19 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:02:19 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:02:19 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:02:19.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:02:20 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:02:20 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:02:20 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:02:20.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:02:20 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:02:20 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae4003ef0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:02:20 np0005475493 python3.9[241270]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Oct  8 06:02:21 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:02:21 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faaf0004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:02:21 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v525: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 597 B/s wr, 1 op/s
Oct  8 06:02:21 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:02:21 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae8003dd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:02:21 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:02:21 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:02:21 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:02:21.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:02:21 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:02:21 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct  8 06:02:22 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:02:22 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:02:22 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:02:22.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:02:22 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:02:22 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab08001ac0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:02:22 np0005475493 python3[241452]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/multipathd config_id=multipathd config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Oct  8 06:02:23 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:02:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:02:23 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faadc000b60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:02:23 np0005475493 podman[241465]: 2025-10-08 10:02:23.850685815 +0000 UTC m=+1.208772763 image pull f541ff382622bd8bc9ad206129d2a8e74c239ff4503fa3b67d3bdf6d5b50b511 quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43
Oct  8 06:02:23 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v526: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Oct  8 06:02:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:02:23 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae4003fc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:02:23 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:02:23 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:02:23 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:02:23.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:02:23 np0005475493 podman[241501]: 2025-10-08 10:02:23.969712339 +0000 UTC m=+0.127038293 container health_status 750e81e9592502f2fa35d047c8ae9bd75eb38ed415148db558454f5281397e7f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct  8 06:02:24 np0005475493 podman[241549]: 2025-10-08 10:02:24.017326125 +0000 UTC m=+0.050493299 container create 1ece418ccdf19a8019e8be8e665c938dc3c333817a211973536e2416f095c311 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=multipathd)
Oct  8 06:02:24 np0005475493 podman[241549]: 2025-10-08 10:02:23.985533996 +0000 UTC m=+0.018701190 image pull f541ff382622bd8bc9ad206129d2a8e74c239ff4503fa3b67d3bdf6d5b50b511 quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43
Oct  8 06:02:24 np0005475493 python3[241452]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name multipathd --conmon-pidfile /run/multipathd.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=multipathd --label container_name=multipathd --label managed_by=edpm_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro --volume /dev:/dev --volume /run/udev:/run/udev --volume /sys:/sys --volume /lib/modules:/lib/modules:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /var/lib/iscsi:/var/lib/iscsi:z --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /var/lib/openstack/healthchecks/multipathd:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43
Oct  8 06:02:24 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:02:24 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:02:24 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:02:24.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:02:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:02:24 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae4003fc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:02:24 np0005475493 python3.9[241767]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  8 06:02:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:02:25] "GET /metrics HTTP/1.1" 200 48344 "" "Prometheus/2.51.0"
Oct  8 06:02:25 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:02:25] "GET /metrics HTTP/1.1" 200 48344 "" "Prometheus/2.51.0"
Oct  8 06:02:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:02:25 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae8003dd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:02:25 np0005475493 python3.9[241922]: ansible-file Invoked with path=/etc/systemd/system/edpm_multipathd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 06:02:25 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v527: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 938 B/s wr, 2 op/s
Oct  8 06:02:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:02:25 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faad8000b60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:02:25 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:02:25 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.002000065s ======
Oct  8 06:02:25 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:02:25.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000065s
Oct  8 06:02:26 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:02:26 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:02:26 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:02:26.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:02:26 np0005475493 python3.9[241999]: ansible-stat Invoked with path=/etc/systemd/system/edpm_multipathd_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  8 06:02:26 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:02:26 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab08002980 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:02:26 np0005475493 python3.9[242150]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759917746.403927-2474-196243196101786/source dest=/etc/systemd/system/edpm_multipathd.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 06:02:27 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:02:27.062Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:02:27 np0005475493 python3.9[242227]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct  8 06:02:27 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-haproxy-nfs-cephfs-compute-0-cwhopp[96588]: [WARNING] 280/100227 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct  8 06:02:27 np0005475493 systemd[1]: Reloading.
Oct  8 06:02:27 np0005475493 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  8 06:02:27 np0005475493 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  8 06:02:27 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:02:27 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae4003fe0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:02:27 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v528: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 938 B/s wr, 2 op/s
Oct  8 06:02:27 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:02:27 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae8003dd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:02:27 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:02:27 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 06:02:27 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:02:27.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 06:02:28 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:02:28 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:02:28 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:02:28.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:02:28 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:02:28 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faad80016a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:02:28 np0005475493 python3.9[242339]: ansible-systemd Invoked with state=restarted name=edpm_multipathd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  8 06:02:28 np0005475493 systemd[1]: Reloading.
Oct  8 06:02:28 np0005475493 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  8 06:02:28 np0005475493 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  8 06:02:28 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:02:28 np0005475493 systemd[1]: Starting multipathd container...
Oct  8 06:02:28 np0005475493 systemd[1]: Started libcrun container.
Oct  8 06:02:28 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46965a17b0411b431c812e5ed3182c2ddf67383ce640a60244bca04244814713/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Oct  8 06:02:28 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46965a17b0411b431c812e5ed3182c2ddf67383ce640a60244bca04244814713/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Oct  8 06:02:29 np0005475493 systemd[1]: Started /usr/bin/podman healthcheck run 1ece418ccdf19a8019e8be8e665c938dc3c333817a211973536e2416f095c311.
Oct  8 06:02:29 np0005475493 podman[242378]: 2025-10-08 10:02:29.035458087 +0000 UTC m=+0.131316511 container init 1ece418ccdf19a8019e8be8e665c938dc3c333817a211973536e2416f095c311 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=multipathd, org.label-schema.schema-version=1.0, config_id=multipathd, io.buildah.version=1.41.3)
Oct  8 06:02:29 np0005475493 multipathd[242394]: + sudo -E kolla_set_configs
Oct  8 06:02:29 np0005475493 podman[242378]: 2025-10-08 10:02:29.080790889 +0000 UTC m=+0.176649293 container start 1ece418ccdf19a8019e8be8e665c938dc3c333817a211973536e2416f095c311 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct  8 06:02:29 np0005475493 podman[242378]: multipathd
Oct  8 06:02:29 np0005475493 systemd[1]: Started multipathd container.
Oct  8 06:02:29 np0005475493 multipathd[242394]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct  8 06:02:29 np0005475493 multipathd[242394]: INFO:__main__:Validating config file
Oct  8 06:02:29 np0005475493 multipathd[242394]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct  8 06:02:29 np0005475493 multipathd[242394]: INFO:__main__:Writing out command to execute
Oct  8 06:02:29 np0005475493 multipathd[242394]: ++ cat /run_command
Oct  8 06:02:29 np0005475493 multipathd[242394]: + CMD='/usr/sbin/multipathd -d'
Oct  8 06:02:29 np0005475493 multipathd[242394]: + ARGS=
Oct  8 06:02:29 np0005475493 multipathd[242394]: + sudo kolla_copy_cacerts
Oct  8 06:02:29 np0005475493 multipathd[242394]: Running command: '/usr/sbin/multipathd -d'
Oct  8 06:02:29 np0005475493 multipathd[242394]: + [[ ! -n '' ]]
Oct  8 06:02:29 np0005475493 multipathd[242394]: + . kolla_extend_start
Oct  8 06:02:29 np0005475493 multipathd[242394]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Oct  8 06:02:29 np0005475493 multipathd[242394]: + umask 0022
Oct  8 06:02:29 np0005475493 multipathd[242394]: + exec /usr/sbin/multipathd -d
Oct  8 06:02:29 np0005475493 podman[242402]: 2025-10-08 10:02:29.157701724 +0000 UTC m=+0.065516031 container health_status 1ece418ccdf19a8019e8be8e665c938dc3c333817a211973536e2416f095c311 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=starting, health_failing_streak=1, health_log=, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=multipathd)
Oct  8 06:02:29 np0005475493 multipathd[242394]: 3707.848948 | --------start up--------
Oct  8 06:02:29 np0005475493 multipathd[242394]: 3707.848966 | read /etc/multipath.conf
Oct  8 06:02:29 np0005475493 systemd[1]: 1ece418ccdf19a8019e8be8e665c938dc3c333817a211973536e2416f095c311-2482321632df7677.service: Main process exited, code=exited, status=1/FAILURE
Oct  8 06:02:29 np0005475493 systemd[1]: 1ece418ccdf19a8019e8be8e665c938dc3c333817a211973536e2416f095c311-2482321632df7677.service: Failed with result 'exit-code'.
Oct  8 06:02:29 np0005475493 multipathd[242394]: 3707.855794 | path checkers start up
Oct  8 06:02:29 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:02:29 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab08002980 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:02:29 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v529: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 938 B/s wr, 2 op/s
Oct  8 06:02:29 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:02:29 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae4004000 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:02:29 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:02:29 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:02:29 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:02:29.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:02:30 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:02:30 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:02:30 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:02:30.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:02:30 np0005475493 python3.9[242586]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath/.multipath_restart_required follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  8 06:02:30 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:02:30 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae8003dd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:02:30 np0005475493 podman[242712]: 2025-10-08 10:02:30.933957093 +0000 UTC m=+0.051727928 container health_status 96c17e36c3e57b043a764fc4d64e20610b8683e4881cc1857643eca7aeb37784 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  8 06:02:31 np0005475493 python3.9[242757]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps --filter volume=/etc/multipath.conf --format {{.Names}} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  8 06:02:31 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:02:31 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae8003dd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:02:31 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v530: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Oct  8 06:02:31 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:02:31 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab08002980 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:02:31 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:02:31 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:02:31 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:02:31.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:02:32 np0005475493 python3.9[242923]: ansible-ansible.builtin.systemd Invoked with name=edpm_multipathd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  8 06:02:32 np0005475493 systemd[1]: Stopping multipathd container...
Oct  8 06:02:32 np0005475493 multipathd[242394]: 3710.957285 | exit (signal)
Oct  8 06:02:32 np0005475493 multipathd[242394]: 3710.957900 | --------shut down-------
Oct  8 06:02:32 np0005475493 systemd[1]: libpod-1ece418ccdf19a8019e8be8e665c938dc3c333817a211973536e2416f095c311.scope: Deactivated successfully.
Oct  8 06:02:32 np0005475493 podman[242928]: 2025-10-08 10:02:32.29890489 +0000 UTC m=+0.075140379 container died 1ece418ccdf19a8019e8be8e665c938dc3c333817a211973536e2416f095c311 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  8 06:02:32 np0005475493 systemd[1]: 1ece418ccdf19a8019e8be8e665c938dc3c333817a211973536e2416f095c311-2482321632df7677.timer: Deactivated successfully.
Oct  8 06:02:32 np0005475493 systemd[1]: Stopped /usr/bin/podman healthcheck run 1ece418ccdf19a8019e8be8e665c938dc3c333817a211973536e2416f095c311.
Oct  8 06:02:32 np0005475493 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1ece418ccdf19a8019e8be8e665c938dc3c333817a211973536e2416f095c311-userdata-shm.mount: Deactivated successfully.
Oct  8 06:02:32 np0005475493 systemd[1]: var-lib-containers-storage-overlay-46965a17b0411b431c812e5ed3182c2ddf67383ce640a60244bca04244814713-merged.mount: Deactivated successfully.
Oct  8 06:02:32 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:02:32 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:02:32 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:02:32.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:02:32 np0005475493 podman[242928]: 2025-10-08 10:02:32.43025583 +0000 UTC m=+0.206491329 container cleanup 1ece418ccdf19a8019e8be8e665c938dc3c333817a211973536e2416f095c311 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, container_name=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct  8 06:02:32 np0005475493 podman[242928]: multipathd
Oct  8 06:02:32 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:02:32 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae4004020 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:02:32 np0005475493 podman[242956]: multipathd
Oct  8 06:02:32 np0005475493 systemd[1]: edpm_multipathd.service: Deactivated successfully.
Oct  8 06:02:32 np0005475493 systemd[1]: Stopped multipathd container.
Oct  8 06:02:32 np0005475493 systemd[1]: Starting multipathd container...
Oct  8 06:02:32 np0005475493 systemd[1]: Started libcrun container.
Oct  8 06:02:32 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46965a17b0411b431c812e5ed3182c2ddf67383ce640a60244bca04244814713/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Oct  8 06:02:32 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46965a17b0411b431c812e5ed3182c2ddf67383ce640a60244bca04244814713/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Oct  8 06:02:32 np0005475493 systemd[1]: Started /usr/bin/podman healthcheck run 1ece418ccdf19a8019e8be8e665c938dc3c333817a211973536e2416f095c311.
Oct  8 06:02:32 np0005475493 podman[242969]: 2025-10-08 10:02:32.650949213 +0000 UTC m=+0.122027771 container init 1ece418ccdf19a8019e8be8e665c938dc3c333817a211973536e2416f095c311 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd)
Oct  8 06:02:32 np0005475493 multipathd[242982]: + sudo -E kolla_set_configs
Oct  8 06:02:32 np0005475493 podman[242969]: 2025-10-08 10:02:32.681773671 +0000 UTC m=+0.152852219 container start 1ece418ccdf19a8019e8be8e665c938dc3c333817a211973536e2416f095c311 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  8 06:02:32 np0005475493 podman[242969]: multipathd
Oct  8 06:02:32 np0005475493 systemd[1]: Started multipathd container.
Oct  8 06:02:32 np0005475493 multipathd[242982]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct  8 06:02:32 np0005475493 multipathd[242982]: INFO:__main__:Validating config file
Oct  8 06:02:32 np0005475493 multipathd[242982]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct  8 06:02:32 np0005475493 multipathd[242982]: INFO:__main__:Writing out command to execute
Oct  8 06:02:32 np0005475493 multipathd[242982]: ++ cat /run_command
Oct  8 06:02:32 np0005475493 multipathd[242982]: + CMD='/usr/sbin/multipathd -d'
Oct  8 06:02:32 np0005475493 multipathd[242982]: + ARGS=
Oct  8 06:02:32 np0005475493 multipathd[242982]: + sudo kolla_copy_cacerts
Oct  8 06:02:32 np0005475493 multipathd[242982]: + [[ ! -n '' ]]
Oct  8 06:02:32 np0005475493 multipathd[242982]: + . kolla_extend_start
Oct  8 06:02:32 np0005475493 multipathd[242982]: Running command: '/usr/sbin/multipathd -d'
Oct  8 06:02:32 np0005475493 multipathd[242982]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Oct  8 06:02:32 np0005475493 multipathd[242982]: + umask 0022
Oct  8 06:02:32 np0005475493 multipathd[242982]: + exec /usr/sbin/multipathd -d
Oct  8 06:02:32 np0005475493 multipathd[242982]: 3711.473663 | --------start up--------
Oct  8 06:02:32 np0005475493 multipathd[242982]: 3711.473688 | read /etc/multipath.conf
Oct  8 06:02:32 np0005475493 multipathd[242982]: 3711.480874 | path checkers start up
Oct  8 06:02:32 np0005475493 podman[242989]: 2025-10-08 10:02:32.805001041 +0000 UTC m=+0.111724692 container health_status 1ece418ccdf19a8019e8be8e665c938dc3c333817a211973536e2416f095c311 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=starting, health_failing_streak=1, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct  8 06:02:32 np0005475493 systemd[1]: 1ece418ccdf19a8019e8be8e665c938dc3c333817a211973536e2416f095c311-38b7d3dee84a03a5.service: Main process exited, code=exited, status=1/FAILURE
Oct  8 06:02:32 np0005475493 systemd[1]: 1ece418ccdf19a8019e8be8e665c938dc3c333817a211973536e2416f095c311-38b7d3dee84a03a5.service: Failed with result 'exit-code'.
Oct  8 06:02:32 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  8 06:02:32 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  8 06:02:33 np0005475493 podman[243145]: 2025-10-08 10:02:33.572887792 +0000 UTC m=+0.075730768 container health_status 2ac58bf9d2c7421f24478941b0f23af12cc30298ac84467db16bf52ecb1157c3 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct  8 06:02:33 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:02:33 np0005475493 ceph-mon[73572]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #39. Immutable memtables: 0.
Oct  8 06:02:33 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:02:33.669605) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  8 06:02:33 np0005475493 ceph-mon[73572]: rocksdb: [db/flush_job.cc:856] [default] [JOB 17] Flushing memtable with next log file: 39
Oct  8 06:02:33 np0005475493 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759917753669711, "job": 17, "event": "flush_started", "num_memtables": 1, "num_entries": 998, "num_deletes": 256, "total_data_size": 1736031, "memory_usage": 1765024, "flush_reason": "Manual Compaction"}
Oct  8 06:02:33 np0005475493 ceph-mon[73572]: rocksdb: [db/flush_job.cc:885] [default] [JOB 17] Level-0 flush table #40: started
Oct  8 06:02:33 np0005475493 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759917753688304, "cf_name": "default", "job": 17, "event": "table_file_creation", "file_number": 40, "file_size": 1696587, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 17972, "largest_seqno": 18969, "table_properties": {"data_size": 1691732, "index_size": 2379, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1413, "raw_key_size": 9945, "raw_average_key_size": 18, "raw_value_size": 1682072, "raw_average_value_size": 3126, "num_data_blocks": 107, "num_entries": 538, "num_filter_entries": 538, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759917664, "oldest_key_time": 1759917664, "file_creation_time": 1759917753, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fe81d9b-468a-4413-adf1-4e4bd83383d4", "db_session_id": "KN4HYS7VUCE6V85JIQOU", "orig_file_number": 40, "seqno_to_time_mapping": "N/A"}}
Oct  8 06:02:33 np0005475493 ceph-mon[73572]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 17] Flush lasted 18744 microseconds, and 4948 cpu microseconds.
Oct  8 06:02:33 np0005475493 ceph-mon[73572]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  8 06:02:33 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:02:33.688361) [db/flush_job.cc:967] [default] [JOB 17] Level-0 flush table #40: 1696587 bytes OK
Oct  8 06:02:33 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:02:33.688390) [db/memtable_list.cc:519] [default] Level-0 commit table #40 started
Oct  8 06:02:33 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:02:33.691197) [db/memtable_list.cc:722] [default] Level-0 commit table #40: memtable #1 done
Oct  8 06:02:33 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:02:33.691213) EVENT_LOG_v1 {"time_micros": 1759917753691207, "job": 17, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  8 06:02:33 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:02:33.691232) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  8 06:02:33 np0005475493 ceph-mon[73572]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 17] Try to delete WAL files size 1731429, prev total WAL file size 1731429, number of live WAL files 2.
Oct  8 06:02:33 np0005475493 ceph-mon[73572]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000036.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  8 06:02:33 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:02:33.691817) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0030' seq:72057594037927935, type:22 .. '6C6F676D00323533' seq:0, type:0; will stop at (end)
Oct  8 06:02:33 np0005475493 ceph-mon[73572]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 18] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  8 06:02:33 np0005475493 ceph-mon[73572]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 17 Base level 0, inputs: [40(1656KB)], [38(11MB)]
Oct  8 06:02:33 np0005475493 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759917753691850, "job": 18, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [40], "files_L6": [38], "score": -1, "input_data_size": 13804392, "oldest_snapshot_seqno": -1}
Oct  8 06:02:33 np0005475493 ceph-mon[73572]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 18] Generated table #41: 4940 keys, 13315656 bytes, temperature: kUnknown
Oct  8 06:02:33 np0005475493 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759917753738086, "cf_name": "default", "job": 18, "event": "table_file_creation", "file_number": 41, "file_size": 13315656, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13281381, "index_size": 20853, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12357, "raw_key_size": 125645, "raw_average_key_size": 25, "raw_value_size": 13190279, "raw_average_value_size": 2670, "num_data_blocks": 853, "num_entries": 4940, "num_filter_entries": 4940, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759916581, "oldest_key_time": 0, "file_creation_time": 1759917753, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fe81d9b-468a-4413-adf1-4e4bd83383d4", "db_session_id": "KN4HYS7VUCE6V85JIQOU", "orig_file_number": 41, "seqno_to_time_mapping": "N/A"}}
Oct  8 06:02:33 np0005475493 ceph-mon[73572]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  8 06:02:33 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:02:33.738360) [db/compaction/compaction_job.cc:1663] [default] [JOB 18] Compacted 1@0 + 1@6 files to L6 => 13315656 bytes
Oct  8 06:02:33 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:02:33.739303) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 297.9 rd, 287.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.6, 11.5 +0.0 blob) out(12.7 +0.0 blob), read-write-amplify(16.0) write-amplify(7.8) OK, records in: 5467, records dropped: 527 output_compression: NoCompression
Oct  8 06:02:33 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:02:33.739323) EVENT_LOG_v1 {"time_micros": 1759917753739313, "job": 18, "event": "compaction_finished", "compaction_time_micros": 46333, "compaction_time_cpu_micros": 23261, "output_level": 6, "num_output_files": 1, "total_output_size": 13315656, "num_input_records": 5467, "num_output_records": 4940, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  8 06:02:33 np0005475493 ceph-mon[73572]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000040.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  8 06:02:33 np0005475493 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759917753739698, "job": 18, "event": "table_file_deletion", "file_number": 40}
Oct  8 06:02:33 np0005475493 ceph-mon[73572]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000038.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  8 06:02:33 np0005475493 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759917753741347, "job": 18, "event": "table_file_deletion", "file_number": 38}
Oct  8 06:02:33 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:02:33.691739) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  8 06:02:33 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:02:33.741392) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  8 06:02:33 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:02:33.741396) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  8 06:02:33 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:02:33.741398) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  8 06:02:33 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:02:33.741400) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  8 06:02:33 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:02:33.741401) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  8 06:02:33 np0005475493 python3.9[243188]: ansible-ansible.builtin.file Invoked with path=/etc/multipath/.multipath_restart_required state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 06:02:33 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:02:33 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faad8001fc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:02:33 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v531: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Oct  8 06:02:33 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:02:33 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faad8001fc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:02:33 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:02:33 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:02:33 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:02:33.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:02:34 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:02:34 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:02:34 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:02:34.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:02:34 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:02:34 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab08002980 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:02:34 np0005475493 python3.9[243344]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Oct  8 06:02:35 np0005475493 python3.9[243497]: ansible-community.general.modprobe Invoked with name=nvme-fabrics state=present params= persistent=disabled
Oct  8 06:02:35 np0005475493 kernel: Key type psk registered
Oct  8 06:02:35 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:02:35] "GET /metrics HTTP/1.1" 200 48343 "" "Prometheus/2.51.0"
Oct  8 06:02:35 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:02:35] "GET /metrics HTTP/1.1" 200 48343 "" "Prometheus/2.51.0"
Oct  8 06:02:35 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:02:35 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae4004040 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:02:35 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v532: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Oct  8 06:02:35 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:02:35 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faad8001fc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:02:35 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:02:35 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:02:35 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:02:35.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:02:36 np0005475493 python3.9[243659]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  8 06:02:36 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:02:36 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:02:36 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:02:36.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:02:36 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:02:36 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae8003f70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:02:36 np0005475493 python3.9[243782]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/nvme-fabrics.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759917755.8180587-2714-172181750369367/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=783c778f0c68cc414f35486f234cbb1cf3f9bbff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 06:02:37 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:02:37.063Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct  8 06:02:37 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:02:37.063Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:02:37 np0005475493 python3.9[243935]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=nvme-fabrics  mode=0644 state=present path=/etc/modules backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 06:02:37 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:02:37 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab08002980 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:02:37 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v533: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Oct  8 06:02:37 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:02:37 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae4004060 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:02:37 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:02:37 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:02:37 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:02:37.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:02:38 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:02:38 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:02:38 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:02:38.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:02:38 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:02:38 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faad8001fc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:02:38 np0005475493 python3.9[244088]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  8 06:02:38 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:02:38 np0005475493 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Oct  8 06:02:38 np0005475493 systemd[1]: Stopped Load Kernel Modules.
Oct  8 06:02:38 np0005475493 systemd[1]: Stopping Load Kernel Modules...
Oct  8 06:02:38 np0005475493 systemd[1]: Starting Load Kernel Modules...
Oct  8 06:02:38 np0005475493 systemd[1]: Finished Load Kernel Modules.
Oct  8 06:02:39 np0005475493 python3.9[244245]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  8 06:02:39 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:02:39 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae8003f70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:02:39 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v534: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Oct  8 06:02:39 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:02:39 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab08004320 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:02:39 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:02:39 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:02:39 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:02:39.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:02:40 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:02:40 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:02:40 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:02:40.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:02:40 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:02:40 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae4004080 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:02:40 np0005475493 python3.9[244330]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  8 06:02:41 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:02:41 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae4004080 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:02:41 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v535: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct  8 06:02:41 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:02:41 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae8003f70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:02:41 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:02:41 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:02:41 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:02:41.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:02:42 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:02:42 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:02:42 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:02:42.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:02:42 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:02:42 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab080044c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:02:43 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:02:43 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:02:43 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faad80036e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:02:43 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v536: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Oct  8 06:02:43 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:02:43 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae40040a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:02:43 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:02:43 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 06:02:43 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:02:43.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 06:02:44 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:02:44 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:02:44 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:02:44.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:02:44 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:02:44 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae8003f70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:02:45 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:02:45] "GET /metrics HTTP/1.1" 200 48343 "" "Prometheus/2.51.0"
Oct  8 06:02:45 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:02:45] "GET /metrics HTTP/1.1" 200 48343 "" "Prometheus/2.51.0"
Oct  8 06:02:45 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:02:45 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab080044c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:02:45 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v537: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct  8 06:02:45 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:02:45 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faad80036e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:02:45 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:02:45 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:02:45 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:02:45.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:02:46 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:02:46 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:02:46 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:02:46.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:02:46 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:02:46 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae40040c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:02:46 np0005475493 systemd[1]: Reloading.
Oct  8 06:02:46 np0005475493 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  8 06:02:46 np0005475493 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  8 06:02:47 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:02:47.064Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct  8 06:02:47 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:02:47.067Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct  8 06:02:47 np0005475493 systemd[1]: Reloading.
Oct  8 06:02:47 np0005475493 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  8 06:02:47 np0005475493 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  8 06:02:47 np0005475493 systemd-logind[798]: Watching system buttons on /dev/input/event0 (Power Button)
Oct  8 06:02:47 np0005475493 systemd-logind[798]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Oct  8 06:02:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] Optimize plan auto_2025-10-08_10:02:47
Oct  8 06:02:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  8 06:02:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] do_upmap
Oct  8 06:02:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] pools ['volumes', '.nfs', 'default.rgw.log', 'vms', '.mgr', '.rgw.root', 'cephfs.cephfs.data', 'backups', 'default.rgw.control', 'default.rgw.meta', 'cephfs.cephfs.meta', 'images']
Oct  8 06:02:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] prepared 0/10 upmap changes
Oct  8 06:02:47 np0005475493 lvm[244476]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct  8 06:02:47 np0005475493 lvm[244476]: VG ceph_vg0 finished
Oct  8 06:02:47 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:02:47 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae8003f70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:02:47 np0005475493 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct  8 06:02:47 np0005475493 systemd[1]: Starting man-db-cache-update.service...
Oct  8 06:02:47 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  8 06:02:47 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  8 06:02:47 np0005475493 systemd[1]: Reloading.
Oct  8 06:02:47 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 06:02:47 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 06:02:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] _maybe_adjust
Oct  8 06:02:47 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v538: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct  8 06:02:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:02:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  8 06:02:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:02:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 06:02:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:02:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 06:02:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:02:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 06:02:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:02:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 06:02:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:02:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  8 06:02:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:02:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 06:02:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:02:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct  8 06:02:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:02:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  8 06:02:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:02:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 06:02:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:02:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  8 06:02:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:02:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  8 06:02:47 np0005475493 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  8 06:02:47 np0005475493 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  8 06:02:47 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:02:47 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab080044c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:02:47 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:02:47 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:02:47 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:02:47.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:02:47 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  8 06:02:47 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  8 06:02:47 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  8 06:02:47 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  8 06:02:47 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  8 06:02:48 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 06:02:48 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 06:02:48 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 06:02:48 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 06:02:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  8 06:02:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  8 06:02:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  8 06:02:48 np0005475493 systemd[1]: Queuing reload/restart jobs for marked units…
Oct  8 06:02:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  8 06:02:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  8 06:02:48 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:02:48 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:02:48 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:02:48.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:02:48 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:02:48 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faad80036e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:02:48 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:02:49 np0005475493 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct  8 06:02:49 np0005475493 systemd[1]: Finished man-db-cache-update.service.
Oct  8 06:02:49 np0005475493 systemd[1]: man-db-cache-update.service: Consumed 1.454s CPU time.
Oct  8 06:02:49 np0005475493 systemd[1]: run-rc5b44645e0fc4f51b473728b08cf1e56.service: Deactivated successfully.
Oct  8 06:02:49 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-haproxy-nfs-cephfs-compute-0-cwhopp[96588]: [WARNING] 280/100249 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct  8 06:02:49 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:02:49 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae8003f90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:02:49 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v539: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct  8 06:02:49 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:02:49 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae40040e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:02:49 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:02:49 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:02:49 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:02:49.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:02:50 np0005475493 python3.9[245819]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.iscsid_restart_required state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 06:02:50 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:02:50 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:02:50 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:02:50.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:02:50 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:02:50 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab080044c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:02:51 np0005475493 python3.9[245969]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  8 06:02:51 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:02:51 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faad80036e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:02:51 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v540: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct  8 06:02:51 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:02:51 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae8003fb0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:02:51 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:02:51 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:02:51 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:02:51.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:02:52 np0005475493 python3.9[246127]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 06:02:52 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:02:52 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 06:02:52 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:02:52.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 06:02:52 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:02:52 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae4004100 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:02:53 np0005475493 python3.9[246280]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct  8 06:02:53 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:02:53 np0005475493 systemd[1]: Reloading.
Oct  8 06:02:53 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:02:53 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab080044c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:02:53 np0005475493 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  8 06:02:53 np0005475493 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  8 06:02:53 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v541: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Oct  8 06:02:53 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:02:53 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faad80036e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:02:53 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:02:53 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 06:02:53 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:02:53.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 06:02:54 np0005475493 podman[246317]: 2025-10-08 10:02:54.192109678 +0000 UTC m=+0.139458352 container health_status 750e81e9592502f2fa35d047c8ae9bd75eb38ed415148db558454f5281397e7f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct  8 06:02:54 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:02:54 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:02:54 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:02:54.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:02:54 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:02:54 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae8003fd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:02:54 np0005475493 python3.9[246492]: ansible-ansible.builtin.service_facts Invoked
Oct  8 06:02:54 np0005475493 network[246538]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct  8 06:02:54 np0005475493 network[246541]: 'network-scripts' will be removed from distribution in near future.
Oct  8 06:02:54 np0005475493 network[246544]: It is advised to switch to 'NetworkManager' instead for network management.
Oct  8 06:02:55 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  8 06:02:55 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  8 06:02:55 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct  8 06:02:55 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  8 06:02:55 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct  8 06:02:55 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:02:55 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct  8 06:02:55 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:02:55 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct  8 06:02:55 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  8 06:02:55 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct  8 06:02:55 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  8 06:02:55 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  8 06:02:55 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  8 06:02:55 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:02:55] "GET /metrics HTTP/1.1" 200 48344 "" "Prometheus/2.51.0"
Oct  8 06:02:55 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:02:55] "GET /metrics HTTP/1.1" 200 48344 "" "Prometheus/2.51.0"
Oct  8 06:02:55 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:02:55 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae4004120 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:02:55 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v542: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct  8 06:02:55 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:02:55 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab080044c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:02:55 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:02:55 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:02:55 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:02:55.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:02:56 np0005475493 podman[246713]: 2025-10-08 10:02:56.155090011 +0000 UTC m=+0.040952074 container create 0dbeb1260cef6713073030800a1166276ce8e477b12d440b0b8cc38bf201b77f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_hodgkin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  8 06:02:56 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  8 06:02:56 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:02:56 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:02:56 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  8 06:02:56 np0005475493 systemd[1]: Started libpod-conmon-0dbeb1260cef6713073030800a1166276ce8e477b12d440b0b8cc38bf201b77f.scope.
Oct  8 06:02:56 np0005475493 podman[246713]: 2025-10-08 10:02:56.13697542 +0000 UTC m=+0.022837503 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 06:02:56 np0005475493 systemd[1]: Started libcrun container.
Oct  8 06:02:56 np0005475493 podman[246713]: 2025-10-08 10:02:56.253805395 +0000 UTC m=+0.139667458 container init 0dbeb1260cef6713073030800a1166276ce8e477b12d440b0b8cc38bf201b77f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_hodgkin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Oct  8 06:02:56 np0005475493 podman[246713]: 2025-10-08 10:02:56.260348295 +0000 UTC m=+0.146210358 container start 0dbeb1260cef6713073030800a1166276ce8e477b12d440b0b8cc38bf201b77f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_hodgkin, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  8 06:02:56 np0005475493 podman[246713]: 2025-10-08 10:02:56.263233537 +0000 UTC m=+0.149095600 container attach 0dbeb1260cef6713073030800a1166276ce8e477b12d440b0b8cc38bf201b77f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_hodgkin, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct  8 06:02:56 np0005475493 laughing_hodgkin[246735]: 167 167
Oct  8 06:02:56 np0005475493 systemd[1]: libpod-0dbeb1260cef6713073030800a1166276ce8e477b12d440b0b8cc38bf201b77f.scope: Deactivated successfully.
Oct  8 06:02:56 np0005475493 podman[246713]: 2025-10-08 10:02:56.266262684 +0000 UTC m=+0.152124747 container died 0dbeb1260cef6713073030800a1166276ce8e477b12d440b0b8cc38bf201b77f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_hodgkin, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Oct  8 06:02:56 np0005475493 systemd[1]: var-lib-containers-storage-overlay-995344a136dc704a55da24d57aa03f80dd7e2c147215736938c6f320743a0751-merged.mount: Deactivated successfully.
Oct  8 06:02:56 np0005475493 podman[246713]: 2025-10-08 10:02:56.312537657 +0000 UTC m=+0.198399720 container remove 0dbeb1260cef6713073030800a1166276ce8e477b12d440b0b8cc38bf201b77f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_hodgkin, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  8 06:02:56 np0005475493 systemd[1]: libpod-conmon-0dbeb1260cef6713073030800a1166276ce8e477b12d440b0b8cc38bf201b77f.scope: Deactivated successfully.
Oct  8 06:02:56 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:02:56 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:02:56 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:02:56.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:02:56 np0005475493 podman[246769]: 2025-10-08 10:02:56.455727996 +0000 UTC m=+0.035508948 container create 0b803e0094d22f3f59204520e6f5c8d42d06e3bb77a381f69034df91533ef896 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_ishizaka, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct  8 06:02:56 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:02:56 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab0c001320 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:02:56 np0005475493 systemd[1]: Started libpod-conmon-0b803e0094d22f3f59204520e6f5c8d42d06e3bb77a381f69034df91533ef896.scope.
Oct  8 06:02:56 np0005475493 systemd[1]: Started libcrun container.
Oct  8 06:02:56 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b74d2fca6ad4bf74a5459362d6bf233683720e08ed81aa228dd475b274b80010/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  8 06:02:56 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b74d2fca6ad4bf74a5459362d6bf233683720e08ed81aa228dd475b274b80010/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 06:02:56 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b74d2fca6ad4bf74a5459362d6bf233683720e08ed81aa228dd475b274b80010/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 06:02:56 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b74d2fca6ad4bf74a5459362d6bf233683720e08ed81aa228dd475b274b80010/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  8 06:02:56 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b74d2fca6ad4bf74a5459362d6bf233683720e08ed81aa228dd475b274b80010/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  8 06:02:56 np0005475493 podman[246769]: 2025-10-08 10:02:56.439838717 +0000 UTC m=+0.019619689 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 06:02:56 np0005475493 podman[246769]: 2025-10-08 10:02:56.536920339 +0000 UTC m=+0.116701301 container init 0b803e0094d22f3f59204520e6f5c8d42d06e3bb77a381f69034df91533ef896 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_ishizaka, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  8 06:02:56 np0005475493 podman[246769]: 2025-10-08 10:02:56.545157333 +0000 UTC m=+0.124938285 container start 0b803e0094d22f3f59204520e6f5c8d42d06e3bb77a381f69034df91533ef896 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_ishizaka, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  8 06:02:56 np0005475493 podman[246769]: 2025-10-08 10:02:56.548067916 +0000 UTC m=+0.127848898 container attach 0b803e0094d22f3f59204520e6f5c8d42d06e3bb77a381f69034df91533ef896 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_ishizaka, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  8 06:02:56 np0005475493 admiring_ishizaka[246788]: --> passed data devices: 0 physical, 1 LVM
Oct  8 06:02:56 np0005475493 admiring_ishizaka[246788]: --> All data devices are unavailable
Oct  8 06:02:56 np0005475493 systemd[1]: libpod-0b803e0094d22f3f59204520e6f5c8d42d06e3bb77a381f69034df91533ef896.scope: Deactivated successfully.
Oct  8 06:02:56 np0005475493 podman[246769]: 2025-10-08 10:02:56.912024381 +0000 UTC m=+0.491805353 container died 0b803e0094d22f3f59204520e6f5c8d42d06e3bb77a381f69034df91533ef896 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_ishizaka, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  8 06:02:56 np0005475493 systemd[1]: var-lib-containers-storage-overlay-b74d2fca6ad4bf74a5459362d6bf233683720e08ed81aa228dd475b274b80010-merged.mount: Deactivated successfully.
Oct  8 06:02:56 np0005475493 podman[246769]: 2025-10-08 10:02:56.958756938 +0000 UTC m=+0.538537880 container remove 0b803e0094d22f3f59204520e6f5c8d42d06e3bb77a381f69034df91533ef896 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_ishizaka, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Oct  8 06:02:56 np0005475493 systemd[1]: libpod-conmon-0b803e0094d22f3f59204520e6f5c8d42d06e3bb77a381f69034df91533ef896.scope: Deactivated successfully.
Oct  8 06:02:57 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:02:57.068Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct  8 06:02:57 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:02:57.071Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct  8 06:02:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:02:57.400 163175 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  8 06:02:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:02:57.401 163175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  8 06:02:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:02:57.401 163175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  8 06:02:57 np0005475493 podman[246963]: 2025-10-08 10:02:57.50888771 +0000 UTC m=+0.040392676 container create b2871e101fefdc37f762a5e3c85cb63b68eb4d4f9a27b57cf73d49c0a622ba97 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_albattani, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct  8 06:02:57 np0005475493 systemd[1]: Started libpod-conmon-b2871e101fefdc37f762a5e3c85cb63b68eb4d4f9a27b57cf73d49c0a622ba97.scope.
Oct  8 06:02:57 np0005475493 systemd[1]: Started libcrun container.
Oct  8 06:02:57 np0005475493 podman[246963]: 2025-10-08 10:02:57.489603593 +0000 UTC m=+0.021108579 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 06:02:57 np0005475493 podman[246963]: 2025-10-08 10:02:57.603184572 +0000 UTC m=+0.134689558 container init b2871e101fefdc37f762a5e3c85cb63b68eb4d4f9a27b57cf73d49c0a622ba97 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_albattani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Oct  8 06:02:57 np0005475493 podman[246963]: 2025-10-08 10:02:57.612891284 +0000 UTC m=+0.144396260 container start b2871e101fefdc37f762a5e3c85cb63b68eb4d4f9a27b57cf73d49c0a622ba97 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_albattani, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Oct  8 06:02:57 np0005475493 condescending_albattani[246982]: 167 167
Oct  8 06:02:57 np0005475493 systemd[1]: libpod-b2871e101fefdc37f762a5e3c85cb63b68eb4d4f9a27b57cf73d49c0a622ba97.scope: Deactivated successfully.
Oct  8 06:02:57 np0005475493 podman[246963]: 2025-10-08 10:02:57.675548572 +0000 UTC m=+0.207053568 container attach b2871e101fefdc37f762a5e3c85cb63b68eb4d4f9a27b57cf73d49c0a622ba97 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_albattani, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  8 06:02:57 np0005475493 podman[246963]: 2025-10-08 10:02:57.67613353 +0000 UTC m=+0.207638526 container died b2871e101fefdc37f762a5e3c85cb63b68eb4d4f9a27b57cf73d49c0a622ba97 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_albattani, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  8 06:02:57 np0005475493 systemd[1]: var-lib-containers-storage-overlay-63cca2c18112661f3e72f5dec0cc2006c79d0a4af22fcc6bfb3f7a3026aff853-merged.mount: Deactivated successfully.
Oct  8 06:02:57 np0005475493 podman[246963]: 2025-10-08 10:02:57.726194265 +0000 UTC m=+0.257699231 container remove b2871e101fefdc37f762a5e3c85cb63b68eb4d4f9a27b57cf73d49c0a622ba97 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_albattani, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Oct  8 06:02:57 np0005475493 systemd[1]: libpod-conmon-b2871e101fefdc37f762a5e3c85cb63b68eb4d4f9a27b57cf73d49c0a622ba97.scope: Deactivated successfully.
Oct  8 06:02:57 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:02:57 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae8003fd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:02:57 np0005475493 podman[247022]: 2025-10-08 10:02:57.894662594 +0000 UTC m=+0.052797473 container create 00aab81ad43bdc02b8c251843c6391d33c2791345784203a678fae7fc76b2040 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_bartik, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Oct  8 06:02:57 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v543: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct  8 06:02:57 np0005475493 systemd[1]: Started libpod-conmon-00aab81ad43bdc02b8c251843c6391d33c2791345784203a678fae7fc76b2040.scope.
Oct  8 06:02:57 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:02:57 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae4004120 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:02:57 np0005475493 podman[247022]: 2025-10-08 10:02:57.872779973 +0000 UTC m=+0.030914862 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 06:02:57 np0005475493 systemd[1]: Started libcrun container.
Oct  8 06:02:57 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:02:57 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 06:02:57 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:02:57.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 06:02:57 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f787e5e5b64ebd96e17cbe4b2a7a6073560bb05bfb7b1b3d413498d9d9395e64/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  8 06:02:57 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f787e5e5b64ebd96e17cbe4b2a7a6073560bb05bfb7b1b3d413498d9d9395e64/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 06:02:57 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f787e5e5b64ebd96e17cbe4b2a7a6073560bb05bfb7b1b3d413498d9d9395e64/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 06:02:57 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f787e5e5b64ebd96e17cbe4b2a7a6073560bb05bfb7b1b3d413498d9d9395e64/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  8 06:02:57 np0005475493 podman[247022]: 2025-10-08 10:02:57.992242132 +0000 UTC m=+0.150377001 container init 00aab81ad43bdc02b8c251843c6391d33c2791345784203a678fae7fc76b2040 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_bartik, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  8 06:02:58 np0005475493 podman[247022]: 2025-10-08 10:02:58.000663382 +0000 UTC m=+0.158798281 container start 00aab81ad43bdc02b8c251843c6391d33c2791345784203a678fae7fc76b2040 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_bartik, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid)
Oct  8 06:02:58 np0005475493 podman[247022]: 2025-10-08 10:02:58.004725172 +0000 UTC m=+0.162860081 container attach 00aab81ad43bdc02b8c251843c6391d33c2791345784203a678fae7fc76b2040 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_bartik, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  8 06:02:58 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:02:58 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  8 06:02:58 np0005475493 priceless_bartik[247062]: {
Oct  8 06:02:58 np0005475493 priceless_bartik[247062]:    "1": [
Oct  8 06:02:58 np0005475493 priceless_bartik[247062]:        {
Oct  8 06:02:58 np0005475493 priceless_bartik[247062]:            "devices": [
Oct  8 06:02:58 np0005475493 priceless_bartik[247062]:                "/dev/loop3"
Oct  8 06:02:58 np0005475493 priceless_bartik[247062]:            ],
Oct  8 06:02:58 np0005475493 priceless_bartik[247062]:            "lv_name": "ceph_lv0",
Oct  8 06:02:58 np0005475493 priceless_bartik[247062]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  8 06:02:58 np0005475493 priceless_bartik[247062]:            "lv_size": "21470642176",
Oct  8 06:02:58 np0005475493 priceless_bartik[247062]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=787292cc-8154-50c4-9e00-e9be3e817149,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=85fe3e7b-5e0f-4a19-934c-310215b2e933,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct  8 06:02:58 np0005475493 priceless_bartik[247062]:            "lv_uuid": "znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj",
Oct  8 06:02:58 np0005475493 priceless_bartik[247062]:            "name": "ceph_lv0",
Oct  8 06:02:58 np0005475493 priceless_bartik[247062]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  8 06:02:58 np0005475493 priceless_bartik[247062]:            "tags": {
Oct  8 06:02:58 np0005475493 priceless_bartik[247062]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  8 06:02:58 np0005475493 priceless_bartik[247062]:                "ceph.block_uuid": "znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj",
Oct  8 06:02:58 np0005475493 priceless_bartik[247062]:                "ceph.cephx_lockbox_secret": "",
Oct  8 06:02:58 np0005475493 priceless_bartik[247062]:                "ceph.cluster_fsid": "787292cc-8154-50c4-9e00-e9be3e817149",
Oct  8 06:02:58 np0005475493 priceless_bartik[247062]:                "ceph.cluster_name": "ceph",
Oct  8 06:02:58 np0005475493 priceless_bartik[247062]:                "ceph.crush_device_class": "",
Oct  8 06:02:58 np0005475493 priceless_bartik[247062]:                "ceph.encrypted": "0",
Oct  8 06:02:58 np0005475493 priceless_bartik[247062]:                "ceph.osd_fsid": "85fe3e7b-5e0f-4a19-934c-310215b2e933",
Oct  8 06:02:58 np0005475493 priceless_bartik[247062]:                "ceph.osd_id": "1",
Oct  8 06:02:58 np0005475493 priceless_bartik[247062]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  8 06:02:58 np0005475493 priceless_bartik[247062]:                "ceph.type": "block",
Oct  8 06:02:58 np0005475493 priceless_bartik[247062]:                "ceph.vdo": "0",
Oct  8 06:02:58 np0005475493 priceless_bartik[247062]:                "ceph.with_tpm": "0"
Oct  8 06:02:58 np0005475493 priceless_bartik[247062]:            },
Oct  8 06:02:58 np0005475493 priceless_bartik[247062]:            "type": "block",
Oct  8 06:02:58 np0005475493 priceless_bartik[247062]:            "vg_name": "ceph_vg0"
Oct  8 06:02:58 np0005475493 priceless_bartik[247062]:        }
Oct  8 06:02:58 np0005475493 priceless_bartik[247062]:    ]
Oct  8 06:02:58 np0005475493 priceless_bartik[247062]: }
Oct  8 06:02:58 np0005475493 systemd[1]: libpod-00aab81ad43bdc02b8c251843c6391d33c2791345784203a678fae7fc76b2040.scope: Deactivated successfully.
Oct  8 06:02:58 np0005475493 podman[247022]: 2025-10-08 10:02:58.295178411 +0000 UTC m=+0.453313290 container died 00aab81ad43bdc02b8c251843c6391d33c2791345784203a678fae7fc76b2040 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_bartik, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  8 06:02:58 np0005475493 systemd[1]: var-lib-containers-storage-overlay-f787e5e5b64ebd96e17cbe4b2a7a6073560bb05bfb7b1b3d413498d9d9395e64-merged.mount: Deactivated successfully.
Oct  8 06:02:58 np0005475493 podman[247022]: 2025-10-08 10:02:58.33726517 +0000 UTC m=+0.495400049 container remove 00aab81ad43bdc02b8c251843c6391d33c2791345784203a678fae7fc76b2040 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_bartik, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  8 06:02:58 np0005475493 systemd[1]: libpod-conmon-00aab81ad43bdc02b8c251843c6391d33c2791345784203a678fae7fc76b2040.scope: Deactivated successfully.
Oct  8 06:02:58 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:02:58 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:02:58 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:02:58.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:02:58 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:02:58 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab080044c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:02:58 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:02:58 np0005475493 podman[247174]: 2025-10-08 10:02:58.929347457 +0000 UTC m=+0.039625962 container create 2cc76111378f2bfd6ea60218b0b412368270f800cf08292965361afe2c64ad24 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_noether, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  8 06:02:58 np0005475493 systemd[1]: Started libpod-conmon-2cc76111378f2bfd6ea60218b0b412368270f800cf08292965361afe2c64ad24.scope.
Oct  8 06:02:58 np0005475493 systemd[1]: Started libcrun container.
Oct  8 06:02:58 np0005475493 podman[247174]: 2025-10-08 10:02:58.996931402 +0000 UTC m=+0.107209927 container init 2cc76111378f2bfd6ea60218b0b412368270f800cf08292965361afe2c64ad24 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_noether, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Oct  8 06:02:59 np0005475493 podman[247174]: 2025-10-08 10:02:59.002852792 +0000 UTC m=+0.113131297 container start 2cc76111378f2bfd6ea60218b0b412368270f800cf08292965361afe2c64ad24 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_noether, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct  8 06:02:59 np0005475493 podman[247174]: 2025-10-08 10:02:58.910277665 +0000 UTC m=+0.020556190 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 06:02:59 np0005475493 podman[247174]: 2025-10-08 10:02:59.006146678 +0000 UTC m=+0.116425183 container attach 2cc76111378f2bfd6ea60218b0b412368270f800cf08292965361afe2c64ad24 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_noether, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  8 06:02:59 np0005475493 nervous_noether[247190]: 167 167
Oct  8 06:02:59 np0005475493 systemd[1]: libpod-2cc76111378f2bfd6ea60218b0b412368270f800cf08292965361afe2c64ad24.scope: Deactivated successfully.
Oct  8 06:02:59 np0005475493 conmon[247190]: conmon 2cc76111378f2bfd6ea6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2cc76111378f2bfd6ea60218b0b412368270f800cf08292965361afe2c64ad24.scope/container/memory.events
Oct  8 06:02:59 np0005475493 podman[247174]: 2025-10-08 10:02:59.008661108 +0000 UTC m=+0.118939623 container died 2cc76111378f2bfd6ea60218b0b412368270f800cf08292965361afe2c64ad24 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_noether, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Oct  8 06:02:59 np0005475493 systemd[1]: var-lib-containers-storage-overlay-75768573f91abb87aa93d00c09b333c609e70cb598dbd90210195b1f0787446c-merged.mount: Deactivated successfully.
Oct  8 06:02:59 np0005475493 podman[247174]: 2025-10-08 10:02:59.048399952 +0000 UTC m=+0.158678457 container remove 2cc76111378f2bfd6ea60218b0b412368270f800cf08292965361afe2c64ad24 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_noether, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  8 06:02:59 np0005475493 systemd[1]: libpod-conmon-2cc76111378f2bfd6ea60218b0b412368270f800cf08292965361afe2c64ad24.scope: Deactivated successfully.
Oct  8 06:02:59 np0005475493 podman[247215]: 2025-10-08 10:02:59.224366491 +0000 UTC m=+0.054801457 container create c2e55facf1d571e965e8f2fac8b6bb890893fc1c03a6e251b74f53cab7d5c3c8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_taussig, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Oct  8 06:02:59 np0005475493 systemd[1]: Started libpod-conmon-c2e55facf1d571e965e8f2fac8b6bb890893fc1c03a6e251b74f53cab7d5c3c8.scope.
Oct  8 06:02:59 np0005475493 podman[247215]: 2025-10-08 10:02:59.194809085 +0000 UTC m=+0.025244091 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 06:02:59 np0005475493 systemd[1]: Started libcrun container.
Oct  8 06:02:59 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/740717655ff4b47b333ce37578c5943645a30aace5a69ec8d4c337914306dcc5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  8 06:02:59 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/740717655ff4b47b333ce37578c5943645a30aace5a69ec8d4c337914306dcc5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 06:02:59 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/740717655ff4b47b333ce37578c5943645a30aace5a69ec8d4c337914306dcc5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 06:02:59 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/740717655ff4b47b333ce37578c5943645a30aace5a69ec8d4c337914306dcc5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  8 06:02:59 np0005475493 podman[247215]: 2025-10-08 10:02:59.325482662 +0000 UTC m=+0.155917638 container init c2e55facf1d571e965e8f2fac8b6bb890893fc1c03a6e251b74f53cab7d5c3c8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_taussig, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct  8 06:02:59 np0005475493 podman[247215]: 2025-10-08 10:02:59.332676673 +0000 UTC m=+0.163111629 container start c2e55facf1d571e965e8f2fac8b6bb890893fc1c03a6e251b74f53cab7d5c3c8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_taussig, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  8 06:02:59 np0005475493 podman[247215]: 2025-10-08 10:02:59.350807754 +0000 UTC m=+0.181242730 container attach c2e55facf1d571e965e8f2fac8b6bb890893fc1c03a6e251b74f53cab7d5c3c8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_taussig, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  8 06:02:59 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:02:59 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab0c001320 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:02:59 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v544: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Oct  8 06:02:59 np0005475493 lvm[247338]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct  8 06:02:59 np0005475493 lvm[247338]: VG ceph_vg0 finished
Oct  8 06:02:59 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:02:59 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae8003fd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:02:59 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:02:59 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:02:59 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:02:59.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:03:00 np0005475493 romantic_taussig[247232]: {}
Oct  8 06:03:00 np0005475493 systemd[1]: libpod-c2e55facf1d571e965e8f2fac8b6bb890893fc1c03a6e251b74f53cab7d5c3c8.scope: Deactivated successfully.
Oct  8 06:03:00 np0005475493 podman[247215]: 2025-10-08 10:03:00.039726914 +0000 UTC m=+0.870161870 container died c2e55facf1d571e965e8f2fac8b6bb890893fc1c03a6e251b74f53cab7d5c3c8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_taussig, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Oct  8 06:03:00 np0005475493 systemd[1]: libpod-c2e55facf1d571e965e8f2fac8b6bb890893fc1c03a6e251b74f53cab7d5c3c8.scope: Consumed 1.080s CPU time.
Oct  8 06:03:00 np0005475493 systemd[1]: var-lib-containers-storage-overlay-740717655ff4b47b333ce37578c5943645a30aace5a69ec8d4c337914306dcc5-merged.mount: Deactivated successfully.
Oct  8 06:03:00 np0005475493 podman[247215]: 2025-10-08 10:03:00.091796583 +0000 UTC m=+0.922231539 container remove c2e55facf1d571e965e8f2fac8b6bb890893fc1c03a6e251b74f53cab7d5c3c8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_taussig, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  8 06:03:00 np0005475493 systemd[1]: libpod-conmon-c2e55facf1d571e965e8f2fac8b6bb890893fc1c03a6e251b74f53cab7d5c3c8.scope: Deactivated successfully.
Oct  8 06:03:00 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  8 06:03:00 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:03:00 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  8 06:03:00 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:03:00 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:03:00 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:03:00 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:03:00.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:03:00 np0005475493 python3.9[247450]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  8 06:03:00 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:03:00 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae4004120 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:03:01 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:03:01 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  8 06:03:01 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:03:01 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 06:03:01 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:03:01 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:03:01 np0005475493 python3.9[247628]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  8 06:03:01 np0005475493 podman[247631]: 2025-10-08 10:03:01.314321055 +0000 UTC m=+0.057675350 container health_status 96c17e36c3e57b043a764fc4d64e20610b8683e4881cc1857643eca7aeb37784 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  8 06:03:01 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:03:01 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab080044c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:03:01 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v545: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Oct  8 06:03:01 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:03:01 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab0c008dc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:03:01 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:03:01 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 06:03:01 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:03:01.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 06:03:02 np0005475493 python3.9[247804]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  8 06:03:02 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:03:02 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:03:02 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:03:02.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:03:02 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:03:02 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae8003fd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:03:02 np0005475493 python3.9[247958]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  8 06:03:02 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  8 06:03:02 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  8 06:03:03 np0005475493 podman[248084]: 2025-10-08 10:03:03.290911485 +0000 UTC m=+0.067251047 container health_status 1ece418ccdf19a8019e8be8e665c938dc3c333817a211973536e2416f095c311 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  8 06:03:03 np0005475493 ceph-mon[73572]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  8 06:03:03 np0005475493 ceph-mon[73572]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1200.0 total, 600.0 interval#012Cumulative writes: 4238 writes, 19K keys, 4238 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.03 MB/s#012Cumulative WAL: 4238 writes, 4238 syncs, 1.00 writes per sync, written: 0.03 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1480 writes, 6020 keys, 1480 commit groups, 1.0 writes per commit group, ingest: 11.06 MB, 0.02 MB/s#012Interval WAL: 1480 writes, 1480 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    110.2      0.28              0.08         9    0.031       0      0       0.0       0.0#012  L6      1/0   12.70 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.3    153.3    129.7      0.78              0.23         8    0.098     38K   4352       0.0       0.0#012 Sum      1/0   12.70 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.3    112.7    124.5      1.07              0.31        17    0.063     38K   4352       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   4.9    191.2    193.8      0.25              0.11         6    0.042     16K   2052       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0    153.3    129.7      0.78              0.23         8    0.098     38K   4352       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    111.4      0.28              0.08         8    0.035       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     16.5      0.00              0.00         1    0.003       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.0 total, 600.0 interval#012Flush(GB): cumulative 0.030, interval 0.010#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.13 GB write, 0.11 MB/s write, 0.12 GB read, 0.10 MB/s read, 1.1 seconds#012Interval compaction: 0.05 GB write, 0.08 MB/s write, 0.05 GB read, 0.08 MB/s read, 0.3 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55f7a1ce3350#2 capacity: 304.00 MB usage: 6.35 MB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 0 last_secs: 0.000102 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(363,6.02 MB,1.9787%) FilterBlock(18,115.73 KB,0.0371782%) IndexBlock(18,225.48 KB,0.0724341%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Oct  8 06:03:03 np0005475493 python3.9[248129]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  8 06:03:03 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:03:03 np0005475493 podman[248132]: 2025-10-08 10:03:03.676140511 +0000 UTC m=+0.055363775 container health_status 2ac58bf9d2c7421f24478941b0f23af12cc30298ac84467db16bf52ecb1157c3 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=iscsid)
Oct  8 06:03:03 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:03:03 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae4004120 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:03:03 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v546: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 938 B/s wr, 3 op/s
Oct  8 06:03:03 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:03:03 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab080044c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:03:03 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:03:03 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:03:03 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:03:03.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:03:04 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:03:04 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct  8 06:03:04 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:03:04 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:03:04 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:03:04.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:03:04 np0005475493 python3.9[248304]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  8 06:03:04 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:03:04 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab0c008f40 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:03:05 np0005475493 python3.9[248482]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  8 06:03:05 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:03:05] "GET /metrics HTTP/1.1" 200 48341 "" "Prometheus/2.51.0"
Oct  8 06:03:05 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:03:05] "GET /metrics HTTP/1.1" 200 48341 "" "Prometheus/2.51.0"
Oct  8 06:03:05 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:03:05 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae8003fd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:03:05 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v547: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Oct  8 06:03:05 np0005475493 python3.9[248636]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  8 06:03:05 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:03:05 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae4004120 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:03:05 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:03:05 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:03:05 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:03:05.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:03:06 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:03:06 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:03:06 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:03:06.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:03:06 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:03:06 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae4004120 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:03:07 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:03:07.072Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:03:07 np0005475493 python3.9[248790]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 06:03:07 np0005475493 python3.9[248943]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 06:03:07 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:03:07 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab0c009860 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:03:07 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v548: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Oct  8 06:03:07 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:03:07 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae8003fd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:03:07 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:03:07 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:03:07 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:03:07.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:03:08 np0005475493 python3.9[249096]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 06:03:08 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:03:08 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:03:08 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:03:08.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:03:08 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:03:08 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab080044c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:03:08 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:03:08 np0005475493 python3.9[249248]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 06:03:09 np0005475493 python3.9[249401]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 06:03:09 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-haproxy-nfs-cephfs-compute-0-cwhopp[96588]: [WARNING] 280/100309 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 1ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct  8 06:03:09 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:03:09 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae4004120 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:03:09 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v549: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Oct  8 06:03:09 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:03:09 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab0c009860 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:03:09 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:03:09 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:03:09 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:03:09.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:03:10 np0005475493 python3.9[249553]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 06:03:10 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:03:10 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:03:10 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:03:10.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:03:10 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:03:10 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae8003fd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:03:10 np0005475493 python3.9[249706]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 06:03:11 np0005475493 python3.9[249859]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 06:03:11 np0005475493 kernel: ganesha.nfsd[229945]: segfault at 50 ip 00007fabbb7f232e sp 00007fab6f7fd210 error 4 in libntirpc.so.5.8[7fabbb7d7000+2c000] likely on CPU 2 (core 0, socket 2)
Oct  8 06:03:11 np0005475493 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Oct  8 06:03:11 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[227693]: 08/10/2025 10:03:11 : epoch 68e63663 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae8003fd0 fd 48 proxy ignored for local
Oct  8 06:03:11 np0005475493 systemd[1]: Started Process Core Dump (PID 249884/UID 0).
Oct  8 06:03:11 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v550: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Oct  8 06:03:11 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:03:11 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 06:03:11 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:03:11.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 06:03:12 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:03:12 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:03:12 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:03:12.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:03:12 np0005475493 python3.9[250014]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 06:03:12 np0005475493 systemd-coredump[249885]: Process 227697 (ganesha.nfsd) of user 0 dumped core.#012#012Stack trace of thread 54:#012#0  0x00007fabbb7f232e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)#012ELF object binary architecture: AMD x86-64
Oct  8 06:03:13 np0005475493 systemd[1]: systemd-coredump@7-249884-0.service: Deactivated successfully.
Oct  8 06:03:13 np0005475493 systemd[1]: systemd-coredump@7-249884-0.service: Consumed 1.129s CPU time.
Oct  8 06:03:13 np0005475493 python3.9[250166]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 06:03:13 np0005475493 podman[250171]: 2025-10-08 10:03:13.058718825 +0000 UTC m=+0.024097353 container died 197b32251d649da9ca1a6e7d1df5a7492995554a85c77530709f1630450feb18 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  8 06:03:13 np0005475493 systemd[1]: var-lib-containers-storage-overlay-6e7cca84133784ef1b75d79f448773b70403e9a746a9cccf658a15d1c5e16e5a-merged.mount: Deactivated successfully.
Oct  8 06:03:13 np0005475493 podman[250171]: 2025-10-08 10:03:13.121844779 +0000 UTC m=+0.087223307 container remove 197b32251d649da9ca1a6e7d1df5a7492995554a85c77530709f1630450feb18 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  8 06:03:13 np0005475493 systemd[1]: ceph-787292cc-8154-50c4-9e00-e9be3e817149@nfs.cephfs.2.0.compute-0.uynkmx.service: Main process exited, code=exited, status=139/n/a
Oct  8 06:03:13 np0005475493 systemd[1]: ceph-787292cc-8154-50c4-9e00-e9be3e817149@nfs.cephfs.2.0.compute-0.uynkmx.service: Failed with result 'exit-code'.
Oct  8 06:03:13 np0005475493 systemd[1]: ceph-787292cc-8154-50c4-9e00-e9be3e817149@nfs.cephfs.2.0.compute-0.uynkmx.service: Consumed 1.599s CPU time.
Oct  8 06:03:13 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:03:13 np0005475493 python3.9[250368]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 06:03:13 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v551: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Oct  8 06:03:13 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:03:13 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:03:13 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:03:13.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:03:14 np0005475493 python3.9[250521]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 06:03:14 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:03:14 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:03:14 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:03:14.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:03:14 np0005475493 python3.9[250673]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 06:03:15 np0005475493 python3.9[250826]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 06:03:15 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:03:15] "GET /metrics HTTP/1.1" 200 48341 "" "Prometheus/2.51.0"
Oct  8 06:03:15 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:03:15] "GET /metrics HTTP/1.1" 200 48341 "" "Prometheus/2.51.0"
Oct  8 06:03:15 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v552: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Oct  8 06:03:15 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:03:15 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:03:15 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:03:15.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:03:16 np0005475493 python3.9[250979]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 06:03:16 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:03:16 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:03:16 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:03:16.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:03:16 np0005475493 python3.9[251131]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 06:03:17 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:03:17.073Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:03:17 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-haproxy-nfs-cephfs-compute-0-cwhopp[96588]: [WARNING] 280/100317 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct  8 06:03:17 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  8 06:03:17 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  8 06:03:17 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 06:03:17 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 06:03:17 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v553: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Oct  8 06:03:17 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:03:17 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:03:17 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:03:17.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:03:18 np0005475493 python3.9[251284]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  8 06:03:18 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 06:03:18 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 06:03:18 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 06:03:18 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 06:03:18 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:03:18 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:03:18 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:03:18.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:03:18 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:03:19 np0005475493 python3.9[251437]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct  8 06:03:19 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v554: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Oct  8 06:03:19 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:03:19 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:03:19 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:03:19.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:03:20 np0005475493 python3.9[251590]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct  8 06:03:20 np0005475493 systemd[1]: Reloading.
Oct  8 06:03:20 np0005475493 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  8 06:03:20 np0005475493 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  8 06:03:20 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:03:20 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:03:20 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:03:20.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:03:21 np0005475493 python3.9[251779]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  8 06:03:21 np0005475493 python3.9[251933]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  8 06:03:21 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v555: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct  8 06:03:21 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:03:21 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:03:21 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:03:21.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:03:22 np0005475493 python3.9[252087]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  8 06:03:22 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:03:22 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:03:22 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:03:22.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:03:22 np0005475493 python3.9[252240]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  8 06:03:23 np0005475493 systemd[1]: ceph-787292cc-8154-50c4-9e00-e9be3e817149@nfs.cephfs.2.0.compute-0.uynkmx.service: Scheduled restart job, restart counter is at 8.
Oct  8 06:03:23 np0005475493 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.uynkmx for 787292cc-8154-50c4-9e00-e9be3e817149.
Oct  8 06:03:23 np0005475493 systemd[1]: ceph-787292cc-8154-50c4-9e00-e9be3e817149@nfs.cephfs.2.0.compute-0.uynkmx.service: Consumed 1.599s CPU time.
Oct  8 06:03:23 np0005475493 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.uynkmx for 787292cc-8154-50c4-9e00-e9be3e817149...
Oct  8 06:03:23 np0005475493 podman[252443]: 2025-10-08 10:03:23.550708693 +0000 UTC m=+0.042247810 container create 66d9754add4f5d233bac2b4e75d179001d45850627c14f21452b2ea76b1c37dc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct  8 06:03:23 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/058d28317895a40b586ee5c061b698a4e724c357ad19ff7c07b4d7b8ef4bea6d/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Oct  8 06:03:23 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/058d28317895a40b586ee5c061b698a4e724c357ad19ff7c07b4d7b8ef4bea6d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 06:03:23 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/058d28317895a40b586ee5c061b698a4e724c357ad19ff7c07b4d7b8ef4bea6d/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Oct  8 06:03:23 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/058d28317895a40b586ee5c061b698a4e724c357ad19ff7c07b4d7b8ef4bea6d/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.uynkmx-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Oct  8 06:03:23 np0005475493 python3.9[252412]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  8 06:03:23 np0005475493 podman[252443]: 2025-10-08 10:03:23.617224808 +0000 UTC m=+0.108763945 container init 66d9754add4f5d233bac2b4e75d179001d45850627c14f21452b2ea76b1c37dc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct  8 06:03:23 np0005475493 podman[252443]: 2025-10-08 10:03:23.623389027 +0000 UTC m=+0.114928144 container start 66d9754add4f5d233bac2b4e75d179001d45850627c14f21452b2ea76b1c37dc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct  8 06:03:23 np0005475493 podman[252443]: 2025-10-08 10:03:23.528361209 +0000 UTC m=+0.019900336 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 06:03:23 np0005475493 bash[252443]: 66d9754add4f5d233bac2b4e75d179001d45850627c14f21452b2ea76b1c37dc
Oct  8 06:03:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[252457]: 08/10/2025 10:03:23 : epoch 68e636eb : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Oct  8 06:03:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[252457]: 08/10/2025 10:03:23 : epoch 68e636eb : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Oct  8 06:03:23 np0005475493 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.uynkmx for 787292cc-8154-50c4-9e00-e9be3e817149.
Oct  8 06:03:23 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:03:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[252457]: 08/10/2025 10:03:23 : epoch 68e636eb : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Oct  8 06:03:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[252457]: 08/10/2025 10:03:23 : epoch 68e636eb : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Oct  8 06:03:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[252457]: 08/10/2025 10:03:23 : epoch 68e636eb : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Oct  8 06:03:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[252457]: 08/10/2025 10:03:23 : epoch 68e636eb : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Oct  8 06:03:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[252457]: 08/10/2025 10:03:23 : epoch 68e636eb : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Oct  8 06:03:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[252457]: 08/10/2025 10:03:23 : epoch 68e636eb : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  8 06:03:23 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v556: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Oct  8 06:03:23 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:03:24 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:03:24 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:03:23.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:03:24 np0005475493 python3.9[252652]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  8 06:03:24 np0005475493 podman[252654]: 2025-10-08 10:03:24.356707313 +0000 UTC m=+0.076908933 container health_status 750e81e9592502f2fa35d047c8ae9bd75eb38ed415148db558454f5281397e7f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible)
Oct  8 06:03:24 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:03:24 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:03:24 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:03:24.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:03:24 np0005475493 python3.9[252856]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  8 06:03:25 np0005475493 python3.9[253010]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  8 06:03:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:03:25] "GET /metrics HTTP/1.1" 200 48339 "" "Prometheus/2.51.0"
Oct  8 06:03:25 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:03:25] "GET /metrics HTTP/1.1" 200 48339 "" "Prometheus/2.51.0"
Oct  8 06:03:25 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v557: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Oct  8 06:03:26 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:03:26 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:03:26 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:03:25.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:03:26 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:03:26 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:03:26 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:03:26.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:03:27 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:03:27.075Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:03:27 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v558: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Oct  8 06:03:27 np0005475493 python3.9[253165]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  8 06:03:28 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:03:28 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:03:28 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:03:28.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:03:28 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:03:28 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:03:28 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:03:28.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:03:28 np0005475493 python3.9[253318]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  8 06:03:28 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:03:29 np0005475493 python3.9[253470]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova_nvme_cleaner setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  8 06:03:29 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[252457]: 08/10/2025 10:03:29 : epoch 68e636eb : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  8 06:03:29 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[252457]: 08/10/2025 10:03:29 : epoch 68e636eb : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 06:03:29 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v559: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Oct  8 06:03:30 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:03:30 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:03:30 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:03:30.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:03:30 np0005475493 python3.9[253624]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  8 06:03:30 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:03:30 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:03:30 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:03:30.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:03:30 np0005475493 python3.9[253776]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  8 06:03:31 np0005475493 python3.9[253929]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  8 06:03:31 np0005475493 podman[254053]: 2025-10-08 10:03:31.694115792 +0000 UTC m=+0.080432407 container health_status 96c17e36c3e57b043a764fc4d64e20610b8683e4881cc1857643eca7aeb37784 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Oct  8 06:03:31 np0005475493 python3.9[254097]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  8 06:03:31 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v560: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 597 B/s wr, 1 op/s
Oct  8 06:03:32 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:03:32 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:03:32 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:03:32.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:03:32 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:03:32 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:03:32 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:03:32.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:03:32 np0005475493 python3.9[254250]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct  8 06:03:32 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  8 06:03:32 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  8 06:03:33 np0005475493 python3.9[254402]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct  8 06:03:33 np0005475493 podman[254527]: 2025-10-08 10:03:33.444182644 +0000 UTC m=+0.047101327 container health_status 1ece418ccdf19a8019e8be8e665c938dc3c333817a211973536e2416f095c311 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=multipathd, org.label-schema.build-date=20251001)
Oct  8 06:03:33 np0005475493 python3.9[254575]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct  8 06:03:33 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:03:33 np0005475493 podman[254628]: 2025-10-08 10:03:33.884931301 +0000 UTC m=+0.044534204 container health_status 2ac58bf9d2c7421f24478941b0f23af12cc30298ac84467db16bf52ecb1157c3 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible)
Oct  8 06:03:33 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v561: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Oct  8 06:03:34 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:03:34 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:03:34 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:03:34.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:03:34 np0005475493 python3.9[254751]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/nvme setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct  8 06:03:34 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:03:34 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:03:34 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:03:34.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:03:34 np0005475493 python3.9[254903]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct  8 06:03:35 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:03:35] "GET /metrics HTTP/1.1" 200 48345 "" "Prometheus/2.51.0"
Oct  8 06:03:35 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:03:35] "GET /metrics HTTP/1.1" 200 48345 "" "Prometheus/2.51.0"
Oct  8 06:03:35 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[252457]: 08/10/2025 10:03:35 : epoch 68e636eb : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct  8 06:03:35 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[252457]: 08/10/2025 10:03:35 : epoch 68e636eb : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Oct  8 06:03:35 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[252457]: 08/10/2025 10:03:35 : epoch 68e636eb : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Oct  8 06:03:35 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[252457]: 08/10/2025 10:03:35 : epoch 68e636eb : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Oct  8 06:03:35 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[252457]: 08/10/2025 10:03:35 : epoch 68e636eb : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Oct  8 06:03:35 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[252457]: 08/10/2025 10:03:35 : epoch 68e636eb : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Oct  8 06:03:35 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[252457]: 08/10/2025 10:03:35 : epoch 68e636eb : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Oct  8 06:03:35 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[252457]: 08/10/2025 10:03:35 : epoch 68e636eb : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct  8 06:03:35 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[252457]: 08/10/2025 10:03:35 : epoch 68e636eb : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct  8 06:03:35 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[252457]: 08/10/2025 10:03:35 : epoch 68e636eb : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct  8 06:03:35 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[252457]: 08/10/2025 10:03:35 : epoch 68e636eb : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Oct  8 06:03:35 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[252457]: 08/10/2025 10:03:35 : epoch 68e636eb : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct  8 06:03:35 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[252457]: 08/10/2025 10:03:35 : epoch 68e636eb : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Oct  8 06:03:35 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[252457]: 08/10/2025 10:03:35 : epoch 68e636eb : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Oct  8 06:03:35 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[252457]: 08/10/2025 10:03:35 : epoch 68e636eb : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Oct  8 06:03:35 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[252457]: 08/10/2025 10:03:35 : epoch 68e636eb : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Oct  8 06:03:35 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[252457]: 08/10/2025 10:03:35 : epoch 68e636eb : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Oct  8 06:03:35 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[252457]: 08/10/2025 10:03:35 : epoch 68e636eb : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Oct  8 06:03:35 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[252457]: 08/10/2025 10:03:35 : epoch 68e636eb : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Oct  8 06:03:35 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[252457]: 08/10/2025 10:03:35 : epoch 68e636eb : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Oct  8 06:03:35 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[252457]: 08/10/2025 10:03:35 : epoch 68e636eb : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Oct  8 06:03:35 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[252457]: 08/10/2025 10:03:35 : epoch 68e636eb : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Oct  8 06:03:35 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[252457]: 08/10/2025 10:03:35 : epoch 68e636eb : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Oct  8 06:03:35 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[252457]: 08/10/2025 10:03:35 : epoch 68e636eb : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Oct  8 06:03:35 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[252457]: 08/10/2025 10:03:35 : epoch 68e636eb : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Oct  8 06:03:35 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[252457]: 08/10/2025 10:03:35 : epoch 68e636eb : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Oct  8 06:03:35 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[252457]: 08/10/2025 10:03:35 : epoch 68e636eb : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Oct  8 06:03:35 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v562: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 938 B/s wr, 2 op/s
Oct  8 06:03:35 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[252457]: 08/10/2025 10:03:35 : epoch 68e636eb : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f95e4000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:03:36 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:03:36 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:03:36 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:03:36.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:03:36 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:03:36 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:03:36 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:03:36.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:03:36 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[252457]: 08/10/2025 10:03:36 : epoch 68e636eb : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f95d80014d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:03:37 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:03:37.076Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct  8 06:03:37 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:03:37.076Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct  8 06:03:37 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[252457]: 08/10/2025 10:03:37 : epoch 68e636eb : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f95b8000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:03:37 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v563: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 938 B/s wr, 2 op/s
Oct  8 06:03:37 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[252457]: 08/10/2025 10:03:37 : epoch 68e636eb : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f95c4000fa0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:03:38 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:03:38 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:03:38 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:03:38.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:03:38 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:03:38 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.002000066s ======
Oct  8 06:03:38 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:03:38.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000066s
Oct  8 06:03:38 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[252457]: 08/10/2025 10:03:38 : epoch 68e636eb : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f95bc000fa0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:03:38 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:03:39 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-haproxy-nfs-cephfs-compute-0-cwhopp[96588]: [WARNING] 280/100339 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct  8 06:03:39 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[252457]: 08/10/2025 10:03:39 : epoch 68e636eb : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f95d80021d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:03:39 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v564: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 938 B/s wr, 3 op/s
Oct  8 06:03:39 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[252457]: 08/10/2025 10:03:39 : epoch 68e636eb : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f95b80016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:03:40 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:03:40 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:03:40 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:03:40.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:03:40 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:03:40 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:03:40 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:03:40.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:03:40 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[252457]: 08/10/2025 10:03:40 : epoch 68e636eb : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f95c4001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:03:40 np0005475493 python3.9[255076]: ansible-ansible.builtin.getent Invoked with database=passwd key=nova fail_key=True service=None split=None
Oct  8 06:03:41 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[252457]: 08/10/2025 10:03:41 : epoch 68e636eb : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f95bc001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:03:41 np0005475493 python3.9[255230]: ansible-ansible.builtin.group Invoked with gid=42436 name=nova state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Oct  8 06:03:41 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v565: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Oct  8 06:03:41 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[252457]: 08/10/2025 10:03:41 : epoch 68e636eb : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f95d80021d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:03:42 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:03:42 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:03:42 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:03:42.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:03:42 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:03:42 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:03:42 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:03:42.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:03:42 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[252457]: 08/10/2025 10:03:42 : epoch 68e636eb : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f95b80016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:03:42 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-crash-compute-0[78863]: ERROR:ceph-crash:Error scraping /var/lib/ceph/crash: [Errno 13] Permission denied: '/var/lib/ceph/crash'
Oct  8 06:03:42 np0005475493 python3.9[255389]: ansible-ansible.builtin.user Invoked with comment=nova user group=nova groups=['libvirt'] name=nova shell=/bin/sh state=present uid=42436 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Oct  8 06:03:42 np0005475493 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  8 06:03:42 np0005475493 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  8 06:03:43 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:03:43 np0005475493 systemd-logind[798]: New session 57 of user zuul.
Oct  8 06:03:43 np0005475493 systemd[1]: Started Session 57 of User zuul.
Oct  8 06:03:43 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[252457]: 08/10/2025 10:03:43 : epoch 68e636eb : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f95c4001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:03:43 np0005475493 systemd[1]: session-57.scope: Deactivated successfully.
Oct  8 06:03:43 np0005475493 systemd-logind[798]: Session 57 logged out. Waiting for processes to exit.
Oct  8 06:03:43 np0005475493 systemd-logind[798]: Removed session 57.
Oct  8 06:03:43 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v566: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Oct  8 06:03:43 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[252457]: 08/10/2025 10:03:43 : epoch 68e636eb : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f95bc001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:03:44 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:03:44 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:03:44 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:03:44.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:03:44 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:03:44 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:03:44 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:03:44.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:03:44 np0005475493 python3.9[255578]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/config.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  8 06:03:44 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[252457]: 08/10/2025 10:03:44 : epoch 68e636eb : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f95d80021d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:03:45 np0005475493 python3.9[255724]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/config.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759917824.0350053-4351-149218111771006/.source.json follow=False _original_basename=config.json.j2 checksum=2c2474b5f24ef7c9ed37f49680082593e0d1100b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  8 06:03:45 np0005475493 python3.9[255875]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  8 06:03:45 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:03:45] "GET /metrics HTTP/1.1" 200 48345 "" "Prometheus/2.51.0"
Oct  8 06:03:45 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:03:45] "GET /metrics HTTP/1.1" 200 48345 "" "Prometheus/2.51.0"
Oct  8 06:03:45 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[252457]: 08/10/2025 10:03:45 : epoch 68e636eb : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f95b80016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:03:45 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v567: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Oct  8 06:03:45 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[252457]: 08/10/2025 10:03:45 : epoch 68e636eb : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f95c4001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:03:46 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:03:46 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:03:46 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:03:46.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:03:46 np0005475493 python3.9[255952]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file path=/var/lib/openstack/config/nova/nova-blank.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  8 06:03:46 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:03:46 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:03:46 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:03:46.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:03:46 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[252457]: 08/10/2025 10:03:46 : epoch 68e636eb : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f95bc001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:03:46 np0005475493 python3.9[256102]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  8 06:03:47 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:03:47.077Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:03:47 np0005475493 python3.9[256224]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/ssh-config mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759917826.3231611-4351-95914248744709/.source follow=False _original_basename=ssh-config checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  8 06:03:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] Optimize plan auto_2025-10-08_10:03:47
Oct  8 06:03:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  8 06:03:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] do_upmap
Oct  8 06:03:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] pools ['default.rgw.control', 'cephfs.cephfs.data', '.nfs', 'backups', '.rgw.root', '.mgr', 'default.rgw.log', 'default.rgw.meta', 'volumes', 'images', 'cephfs.cephfs.meta', 'vms']
Oct  8 06:03:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] prepared 0/10 upmap changes
Oct  8 06:03:47 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[252457]: 08/10/2025 10:03:47 : epoch 68e636eb : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f95bc001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:03:47 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  8 06:03:47 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  8 06:03:47 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 06:03:47 np0005475493 python3.9[256374]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  8 06:03:47 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 06:03:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] _maybe_adjust
Oct  8 06:03:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:03:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  8 06:03:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:03:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 06:03:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:03:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 06:03:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:03:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 06:03:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:03:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 06:03:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:03:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  8 06:03:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:03:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 06:03:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:03:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct  8 06:03:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:03:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  8 06:03:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:03:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 06:03:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:03:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  8 06:03:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:03:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  8 06:03:47 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v568: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Oct  8 06:03:47 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  8 06:03:47 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  8 06:03:47 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  8 06:03:47 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  8 06:03:47 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  8 06:03:48 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[252457]: 08/10/2025 10:03:48 : epoch 68e636eb : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f95b8002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:03:48 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:03:48 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:03:48 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:03:48.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:03:48 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 06:03:48 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 06:03:48 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 06:03:48 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 06:03:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  8 06:03:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  8 06:03:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  8 06:03:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  8 06:03:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  8 06:03:48 np0005475493 python3.9[256496]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/02-nova-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759917827.448843-4351-106523650856419/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 checksum=1feba546d0beacad9258164ab79b8a747685ccc8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  8 06:03:48 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:03:48 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:03:48 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:03:48.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:03:48 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[252457]: 08/10/2025 10:03:48 : epoch 68e636eb : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f95c4002f50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:03:48 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:03:49 np0005475493 python3.9[256646]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  8 06:03:49 np0005475493 python3.9[256768]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/nova_statedir_ownership.py mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759917828.5669715-4351-82408603192714/.source.py follow=False _original_basename=nova_statedir_ownership.py checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  8 06:03:49 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[252457]: 08/10/2025 10:03:49 : epoch 68e636eb : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f95bc001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:03:49 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v569: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Oct  8 06:03:50 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[252457]: 08/10/2025 10:03:50 : epoch 68e636eb : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f95d80021d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:03:50 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:03:50 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:03:50 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:03:50.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:03:50 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:03:50 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:03:50 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:03:50.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:03:50 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[252457]: 08/10/2025 10:03:50 : epoch 68e636eb : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f95b8002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:03:50 np0005475493 python3.9[256921]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 06:03:51 np0005475493 python3.9[257074]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova mode=0600 owner=nova remote_src=True src=/var/lib/openstack/config/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 06:03:51 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-haproxy-nfs-cephfs-compute-0-cwhopp[96588]: [WARNING] 280/100351 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct  8 06:03:51 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[252457]: 08/10/2025 10:03:51 : epoch 68e636eb : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f95c4002f50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:03:51 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v570: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct  8 06:03:52 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[252457]: 08/10/2025 10:03:52 : epoch 68e636eb : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f95bc003730 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:03:52 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:03:52 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:03:52 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:03:52.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:03:52 np0005475493 python3.9[257227]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  8 06:03:52 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:03:52 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:03:52 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:03:52.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:03:52 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[252457]: 08/10/2025 10:03:52 : epoch 68e636eb : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f95d80021d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:03:52 np0005475493 python3.9[257379]: ansible-ansible.legacy.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  8 06:03:53 np0005475493 python3.9[257503]: ansible-ansible.legacy.copy Invoked with attributes=+i dest=/var/lib/nova/compute_id group=nova mode=0400 owner=nova src=/home/zuul/.ansible/tmp/ansible-tmp-1759917832.5153425-4630-255554052726038/.source _original_basename=.a0azf5v0 follow=False checksum=aa5b6f2aeb9b9f06df5d35930eb43189722f291f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None
Oct  8 06:03:53 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:03:53 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[252457]: 08/10/2025 10:03:53 : epoch 68e636eb : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f95b8002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:03:53 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v571: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Oct  8 06:03:54 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[252457]: 08/10/2025 10:03:54 : epoch 68e636eb : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f95c4002f50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:03:54 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:03:54 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 06:03:54 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:03:54.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 06:03:54 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:03:54 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:03:54 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:03:54.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:03:54 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[252457]: 08/10/2025 10:03:54 : epoch 68e636eb : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f95bc003730 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:03:54 np0005475493 podman[257531]: 2025-10-08 10:03:54.918977671 +0000 UTC m=+0.078479964 container health_status 750e81e9592502f2fa35d047c8ae9bd75eb38ed415148db558454f5281397e7f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  8 06:03:55 np0005475493 python3.9[257683]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  8 06:03:55 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:03:55] "GET /metrics HTTP/1.1" 200 48344 "" "Prometheus/2.51.0"
Oct  8 06:03:55 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:03:55] "GET /metrics HTTP/1.1" 200 48344 "" "Prometheus/2.51.0"
Oct  8 06:03:55 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[252457]: 08/10/2025 10:03:55 : epoch 68e636eb : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f95d80021d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:03:55 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v572: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct  8 06:03:56 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[252457]: 08/10/2025 10:03:56 : epoch 68e636eb : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f95b8003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:03:56 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:03:56 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 06:03:56 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:03:56.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 06:03:56 np0005475493 python3.9[257836]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  8 06:03:56 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:03:56 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:03:56 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:03:56.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:03:56 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[252457]: 08/10/2025 10:03:56 : epoch 68e636eb : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f95c4004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:03:56 np0005475493 python3.9[257957]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759917835.990281-4708-8248816668540/.source.json follow=False _original_basename=nova_compute.json.j2 checksum=837ffd9c004e5987a2e117698c56827ebbfeb5b9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  8 06:03:57 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:03:57.078Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:03:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:03:57.401 163175 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  8 06:03:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:03:57.401 163175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  8 06:03:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:03:57.402 163175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  8 06:03:57 np0005475493 python3.9[258108]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute_init.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  8 06:03:57 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[252457]: 08/10/2025 10:03:57 : epoch 68e636eb : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f95bc003730 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:03:57 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v573: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct  8 06:03:58 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[252457]: 08/10/2025 10:03:58 : epoch 68e636eb : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f95d80021d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:03:58 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:03:58 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:03:58 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:03:58.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:03:58 np0005475493 python3.9[258230]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute_init.json mode=0700 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759917837.320607-4753-264369837493437/.source.json follow=False _original_basename=nova_compute_init.json.j2 checksum=722ab36345f3375cbdcf911ce8f6e1a8083d7e59 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  8 06:03:58 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:03:58 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:03:58 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:03:58.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:03:58 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[252457]: 08/10/2025 10:03:58 : epoch 68e636eb : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f95b8003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:03:58 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:03:59 np0005475493 python3.9[258383]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute_init.json debug=False
Oct  8 06:03:59 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[252457]: 08/10/2025 10:03:59 : epoch 68e636eb : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f95b8003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:03:59 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v574: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 85 B/s wr, 0 op/s
Oct  8 06:04:00 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[252457]: 08/10/2025 10:04:00 : epoch 68e636eb : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f95bc003730 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:04:00 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:04:00 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:04:00 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:04:00.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:04:00 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[252457]: 08/10/2025 10:04:00 : epoch 68e636eb : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  8 06:04:00 np0005475493 python3.9[258536]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct  8 06:04:00 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:04:00 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:04:00 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:04:00.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:04:00 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[252457]: 08/10/2025 10:04:00 : epoch 68e636eb : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f95d80021d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:04:00 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  8 06:04:00 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:04:00 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  8 06:04:00 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:04:00 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct  8 06:04:01 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:04:01 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct  8 06:04:01 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:04:01 np0005475493 python3[258782]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute_init.json log_base_path=/var/log/containers/stdouts debug=False
Oct  8 06:04:01 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:04:01 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:04:01 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:04:01 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:04:01 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  8 06:04:01 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  8 06:04:01 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct  8 06:04:01 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  8 06:04:01 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct  8 06:04:01 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:04:01 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct  8 06:04:01 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:04:01 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct  8 06:04:01 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  8 06:04:01 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct  8 06:04:01 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  8 06:04:01 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  8 06:04:01 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  8 06:04:01 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[252457]: 08/10/2025 10:04:01 : epoch 68e636eb : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f95c4004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:04:01 np0005475493 podman[258892]: 2025-10-08 10:04:01.821962626 +0000 UTC m=+0.052181171 container health_status 96c17e36c3e57b043a764fc4d64e20610b8683e4881cc1857643eca7aeb37784 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001)
Oct  8 06:04:01 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v575: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Oct  8 06:04:02 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[252457]: 08/10/2025 10:04:02 : epoch 68e636eb : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f95c4004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:04:02 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:04:02 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:04:02 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:04:02.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:04:02 np0005475493 podman[258985]: 2025-10-08 10:04:02.30179117 +0000 UTC m=+0.071855458 container create 7af71b2119af77dfe8766b81dfd62877f37e80d42d80e08f5e79fa1a57a09dc2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_cannon, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  8 06:04:02 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  8 06:04:02 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:04:02 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:04:02 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  8 06:04:02 np0005475493 systemd[1]: Started libpod-conmon-7af71b2119af77dfe8766b81dfd62877f37e80d42d80e08f5e79fa1a57a09dc2.scope.
Oct  8 06:04:02 np0005475493 podman[258985]: 2025-10-08 10:04:02.263745178 +0000 UTC m=+0.033809486 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 06:04:02 np0005475493 systemd[1]: Started libcrun container.
Oct  8 06:04:02 np0005475493 podman[258985]: 2025-10-08 10:04:02.394223565 +0000 UTC m=+0.164287873 container init 7af71b2119af77dfe8766b81dfd62877f37e80d42d80e08f5e79fa1a57a09dc2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_cannon, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct  8 06:04:02 np0005475493 podman[258985]: 2025-10-08 10:04:02.402621657 +0000 UTC m=+0.172685945 container start 7af71b2119af77dfe8766b81dfd62877f37e80d42d80e08f5e79fa1a57a09dc2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_cannon, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  8 06:04:02 np0005475493 podman[258985]: 2025-10-08 10:04:02.40858418 +0000 UTC m=+0.178648488 container attach 7af71b2119af77dfe8766b81dfd62877f37e80d42d80e08f5e79fa1a57a09dc2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_cannon, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Oct  8 06:04:02 np0005475493 systemd[1]: libpod-7af71b2119af77dfe8766b81dfd62877f37e80d42d80e08f5e79fa1a57a09dc2.scope: Deactivated successfully.
Oct  8 06:04:02 np0005475493 bold_cannon[259002]: 167 167
Oct  8 06:04:02 np0005475493 conmon[259002]: conmon 7af71b2119af77dfe876 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7af71b2119af77dfe8766b81dfd62877f37e80d42d80e08f5e79fa1a57a09dc2.scope/container/memory.events
Oct  8 06:04:02 np0005475493 podman[258985]: 2025-10-08 10:04:02.412889709 +0000 UTC m=+0.182953997 container died 7af71b2119af77dfe8766b81dfd62877f37e80d42d80e08f5e79fa1a57a09dc2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_cannon, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  8 06:04:02 np0005475493 systemd[1]: var-lib-containers-storage-overlay-e831925f32963ea8b6e8d3adc36d814671f0d156e880f19e481750d4762484ab-merged.mount: Deactivated successfully.
Oct  8 06:04:02 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:04:02 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:04:02 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:04:02.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:04:02 np0005475493 podman[258985]: 2025-10-08 10:04:02.490890856 +0000 UTC m=+0.260955144 container remove 7af71b2119af77dfe8766b81dfd62877f37e80d42d80e08f5e79fa1a57a09dc2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_cannon, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  8 06:04:02 np0005475493 systemd[1]: libpod-conmon-7af71b2119af77dfe8766b81dfd62877f37e80d42d80e08f5e79fa1a57a09dc2.scope: Deactivated successfully.
Oct  8 06:04:02 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[252457]: 08/10/2025 10:04:02 : epoch 68e636eb : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f95c4004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:04:02 np0005475493 podman[259025]: 2025-10-08 10:04:02.682207563 +0000 UTC m=+0.070117832 container create 00aaac3fe02eefd7dc286835a6002cd076939b4f1a4a2c1de35aae7fe8414dd7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_swartz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct  8 06:04:02 np0005475493 ceph-mon[73572]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #42. Immutable memtables: 0.
Oct  8 06:04:02 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:04:02.709062) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  8 06:04:02 np0005475493 ceph-mon[73572]: rocksdb: [db/flush_job.cc:856] [default] [JOB 19] Flushing memtable with next log file: 42
Oct  8 06:04:02 np0005475493 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759917842709096, "job": 19, "event": "flush_started", "num_memtables": 1, "num_entries": 1031, "num_deletes": 251, "total_data_size": 1789867, "memory_usage": 1815304, "flush_reason": "Manual Compaction"}
Oct  8 06:04:02 np0005475493 ceph-mon[73572]: rocksdb: [db/flush_job.cc:885] [default] [JOB 19] Level-0 flush table #43: started
Oct  8 06:04:02 np0005475493 systemd[1]: Started libpod-conmon-00aaac3fe02eefd7dc286835a6002cd076939b4f1a4a2c1de35aae7fe8414dd7.scope.
Oct  8 06:04:02 np0005475493 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759917842717499, "cf_name": "default", "job": 19, "event": "table_file_creation", "file_number": 43, "file_size": 1752255, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 18970, "largest_seqno": 20000, "table_properties": {"data_size": 1747236, "index_size": 2543, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1413, "raw_key_size": 10804, "raw_average_key_size": 19, "raw_value_size": 1737251, "raw_average_value_size": 3164, "num_data_blocks": 113, "num_entries": 549, "num_filter_entries": 549, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759917753, "oldest_key_time": 1759917753, "file_creation_time": 1759917842, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fe81d9b-468a-4413-adf1-4e4bd83383d4", "db_session_id": "KN4HYS7VUCE6V85JIQOU", "orig_file_number": 43, "seqno_to_time_mapping": "N/A"}}
Oct  8 06:04:02 np0005475493 ceph-mon[73572]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 19] Flush lasted 8472 microseconds, and 3658 cpu microseconds.
Oct  8 06:04:02 np0005475493 ceph-mon[73572]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  8 06:04:02 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:04:02.717534) [db/flush_job.cc:967] [default] [JOB 19] Level-0 flush table #43: 1752255 bytes OK
Oct  8 06:04:02 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:04:02.717552) [db/memtable_list.cc:519] [default] Level-0 commit table #43 started
Oct  8 06:04:02 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:04:02.719629) [db/memtable_list.cc:722] [default] Level-0 commit table #43: memtable #1 done
Oct  8 06:04:02 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:04:02.719641) EVENT_LOG_v1 {"time_micros": 1759917842719637, "job": 19, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  8 06:04:02 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:04:02.719657) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  8 06:04:02 np0005475493 ceph-mon[73572]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 19] Try to delete WAL files size 1785159, prev total WAL file size 1785159, number of live WAL files 2.
Oct  8 06:04:02 np0005475493 ceph-mon[73572]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000039.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  8 06:04:02 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:04:02.720135) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031323535' seq:72057594037927935, type:22 .. '7061786F730031353037' seq:0, type:0; will stop at (end)
Oct  8 06:04:02 np0005475493 ceph-mon[73572]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 20] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  8 06:04:02 np0005475493 ceph-mon[73572]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 19 Base level 0, inputs: [43(1711KB)], [41(12MB)]
Oct  8 06:04:02 np0005475493 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759917842720171, "job": 20, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [43], "files_L6": [41], "score": -1, "input_data_size": 15067911, "oldest_snapshot_seqno": -1}
Oct  8 06:04:02 np0005475493 podman[259025]: 2025-10-08 10:04:02.637559977 +0000 UTC m=+0.025470276 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 06:04:02 np0005475493 systemd[1]: Started libcrun container.
Oct  8 06:04:02 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a66aae17311de4f14e56d78efe454e8c9f44ba64de5817e24c395359e7942755/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  8 06:04:02 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a66aae17311de4f14e56d78efe454e8c9f44ba64de5817e24c395359e7942755/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 06:04:02 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a66aae17311de4f14e56d78efe454e8c9f44ba64de5817e24c395359e7942755/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 06:04:02 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a66aae17311de4f14e56d78efe454e8c9f44ba64de5817e24c395359e7942755/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  8 06:04:02 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a66aae17311de4f14e56d78efe454e8c9f44ba64de5817e24c395359e7942755/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  8 06:04:02 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  8 06:04:02 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  8 06:04:02 np0005475493 ceph-mon[73572]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 20] Generated table #44: 4971 keys, 12860429 bytes, temperature: kUnknown
Oct  8 06:04:02 np0005475493 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759917842888931, "cf_name": "default", "job": 20, "event": "table_file_creation", "file_number": 44, "file_size": 12860429, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12826366, "index_size": 20513, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12485, "raw_key_size": 126829, "raw_average_key_size": 25, "raw_value_size": 12735195, "raw_average_value_size": 2561, "num_data_blocks": 838, "num_entries": 4971, "num_filter_entries": 4971, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759916581, "oldest_key_time": 0, "file_creation_time": 1759917842, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fe81d9b-468a-4413-adf1-4e4bd83383d4", "db_session_id": "KN4HYS7VUCE6V85JIQOU", "orig_file_number": 44, "seqno_to_time_mapping": "N/A"}}
Oct  8 06:04:02 np0005475493 ceph-mon[73572]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  8 06:04:02 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:04:02.889242) [db/compaction/compaction_job.cc:1663] [default] [JOB 20] Compacted 1@0 + 1@6 files to L6 => 12860429 bytes
Oct  8 06:04:02 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:04:02.942554) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 89.2 rd, 76.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.7, 12.7 +0.0 blob) out(12.3 +0.0 blob), read-write-amplify(15.9) write-amplify(7.3) OK, records in: 5489, records dropped: 518 output_compression: NoCompression
Oct  8 06:04:02 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:04:02.942595) EVENT_LOG_v1 {"time_micros": 1759917842942578, "job": 20, "event": "compaction_finished", "compaction_time_micros": 168835, "compaction_time_cpu_micros": 22288, "output_level": 6, "num_output_files": 1, "total_output_size": 12860429, "num_input_records": 5489, "num_output_records": 4971, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  8 06:04:02 np0005475493 ceph-mon[73572]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000043.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  8 06:04:02 np0005475493 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759917842943138, "job": 20, "event": "table_file_deletion", "file_number": 43}
Oct  8 06:04:02 np0005475493 ceph-mon[73572]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000041.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  8 06:04:02 np0005475493 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759917842945197, "job": 20, "event": "table_file_deletion", "file_number": 41}
Oct  8 06:04:02 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:04:02.720100) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  8 06:04:02 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:04:02.945228) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  8 06:04:02 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:04:02.945234) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  8 06:04:02 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:04:02.945236) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  8 06:04:02 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:04:02.945238) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  8 06:04:02 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:04:02.945240) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  8 06:04:02 np0005475493 podman[259025]: 2025-10-08 10:04:02.945884085 +0000 UTC m=+0.333794384 container init 00aaac3fe02eefd7dc286835a6002cd076939b4f1a4a2c1de35aae7fe8414dd7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_swartz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Oct  8 06:04:02 np0005475493 podman[259025]: 2025-10-08 10:04:02.954485594 +0000 UTC m=+0.342395863 container start 00aaac3fe02eefd7dc286835a6002cd076939b4f1a4a2c1de35aae7fe8414dd7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_swartz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Oct  8 06:04:02 np0005475493 podman[259025]: 2025-10-08 10:04:02.988449964 +0000 UTC m=+0.376360233 container attach 00aaac3fe02eefd7dc286835a6002cd076939b4f1a4a2c1de35aae7fe8414dd7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_swartz, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  8 06:04:03 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[252457]: 08/10/2025 10:04:03 : epoch 68e636eb : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  8 06:04:03 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[252457]: 08/10/2025 10:04:03 : epoch 68e636eb : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 06:04:03 np0005475493 blissful_swartz[259040]: --> passed data devices: 0 physical, 1 LVM
Oct  8 06:04:03 np0005475493 blissful_swartz[259040]: --> All data devices are unavailable
Oct  8 06:04:03 np0005475493 systemd[1]: libpod-00aaac3fe02eefd7dc286835a6002cd076939b4f1a4a2c1de35aae7fe8414dd7.scope: Deactivated successfully.
Oct  8 06:04:03 np0005475493 podman[259025]: 2025-10-08 10:04:03.338509074 +0000 UTC m=+0.726419333 container died 00aaac3fe02eefd7dc286835a6002cd076939b4f1a4a2c1de35aae7fe8414dd7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_swartz, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  8 06:04:03 np0005475493 systemd[1]: var-lib-containers-storage-overlay-a66aae17311de4f14e56d78efe454e8c9f44ba64de5817e24c395359e7942755-merged.mount: Deactivated successfully.
Oct  8 06:04:03 np0005475493 podman[259025]: 2025-10-08 10:04:03.382976785 +0000 UTC m=+0.770887054 container remove 00aaac3fe02eefd7dc286835a6002cd076939b4f1a4a2c1de35aae7fe8414dd7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_swartz, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  8 06:04:03 np0005475493 systemd[1]: libpod-conmon-00aaac3fe02eefd7dc286835a6002cd076939b4f1a4a2c1de35aae7fe8414dd7.scope: Deactivated successfully.
Oct  8 06:04:03 np0005475493 podman[259092]: 2025-10-08 10:04:03.61745841 +0000 UTC m=+0.068792199 container health_status 1ece418ccdf19a8019e8be8e665c938dc3c333817a211973536e2416f095c311 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=multipathd, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251001)
Oct  8 06:04:03 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:04:03 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[252457]: 08/10/2025 10:04:03 : epoch 68e636eb : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f95d80021d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:04:03 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v576: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 938 B/s wr, 3 op/s
Oct  8 06:04:04 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[252457]: 08/10/2025 10:04:04 : epoch 68e636eb : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f95bc004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:04:04 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:04:04 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:04:04 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:04:04.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:04:04 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:04:04 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:04:04 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:04:04.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:04:04 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[252457]: 08/10/2025 10:04:04 : epoch 68e636eb : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f95bc004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:04:05 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:04:05] "GET /metrics HTTP/1.1" 200 48342 "" "Prometheus/2.51.0"
Oct  8 06:04:05 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:04:05] "GET /metrics HTTP/1.1" 200 48342 "" "Prometheus/2.51.0"
Oct  8 06:04:05 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[252457]: 08/10/2025 10:04:05 : epoch 68e636eb : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f95bc004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:04:05 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v577: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Oct  8 06:04:06 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[252457]: 08/10/2025 10:04:06 : epoch 68e636eb : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f95b0000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:04:06 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:04:06 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:04:06 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:04:06.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:04:06 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[252457]: 08/10/2025 10:04:06 : epoch 68e636eb : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct  8 06:04:06 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:04:06 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:04:06 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:04:06.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:04:06 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[252457]: 08/10/2025 10:04:06 : epoch 68e636eb : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f95d80021d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:04:07 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:04:07.078Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct  8 06:04:07 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:04:07.079Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:04:07 np0005475493 podman[259191]: 2025-10-08 10:04:07.679741774 +0000 UTC m=+2.948483934 container health_status 2ac58bf9d2c7421f24478941b0f23af12cc30298ac84467db16bf52ecb1157c3 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct  8 06:04:07 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[252457]: 08/10/2025 10:04:07 : epoch 68e636eb : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f95b8003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:04:07 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v578: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Oct  8 06:04:08 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[252457]: 08/10/2025 10:04:08 : epoch 68e636eb : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f95bc004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:04:08 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:04:08 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:04:08 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:04:08.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:04:08 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:04:08 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:04:08 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:04:08.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:04:08 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[252457]: 08/10/2025 10:04:08 : epoch 68e636eb : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f95d0000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:04:08 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:04:09 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[252457]: 08/10/2025 10:04:09 : epoch 68e636eb : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f95b00016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:04:09 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v579: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 1023 B/s wr, 3 op/s
Oct  8 06:04:10 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[252457]: 08/10/2025 10:04:10 : epoch 68e636eb : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f95b8003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:04:10 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:04:10 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:04:10 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:04:10.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:04:10 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:04:10 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:04:10 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:04:10.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:04:10 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[252457]: 08/10/2025 10:04:10 : epoch 68e636eb : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f95bc004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:04:11 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-haproxy-nfs-cephfs-compute-0-cwhopp[96588]: [WARNING] 280/100411 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct  8 06:04:11 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[252457]: 08/10/2025 10:04:11 : epoch 68e636eb : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f95d0001a60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:04:11 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v580: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 3 op/s
Oct  8 06:04:12 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[252457]: 08/10/2025 10:04:12 : epoch 68e636eb : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f95b00016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:04:12 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:04:12 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:04:12 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:04:12.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:04:12 np0005475493 podman[258838]: 2025-10-08 10:04:12.156256358 +0000 UTC m=+10.840067356 image pull 7ac362f4e836cf46e10a309acb4abf774df9481a1d6404c213437495cfb42f5d quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844
Oct  8 06:04:12 np0005475493 podman[259283]: 2025-10-08 10:04:12.247351049 +0000 UTC m=+0.039217901 container create e2b1fddfa4dd2c912668739b987a953a001a98cd47462f4fd12f63726f1522d3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_chebyshev, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True)
Oct  8 06:04:12 np0005475493 systemd[1]: Started libpod-conmon-e2b1fddfa4dd2c912668739b987a953a001a98cd47462f4fd12f63726f1522d3.scope.
Oct  8 06:04:12 np0005475493 podman[259310]: 2025-10-08 10:04:12.297566136 +0000 UTC m=+0.050000912 container create 17713851e79d1ffcd0dd8102ba919b86e4d786f034fb81664009fcbd548787f8 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844, name=nova_compute_init, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=edpm, container_name=nova_compute_init, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  8 06:04:12 np0005475493 podman[259310]: 2025-10-08 10:04:12.272744091 +0000 UTC m=+0.025178897 image pull 7ac362f4e836cf46e10a309acb4abf774df9481a1d6404c213437495cfb42f5d quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844
Oct  8 06:04:12 np0005475493 python3[258782]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute_init --conmon-pidfile /run/nova_compute_init.pid --env NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id --env __OS_DEBUG=False --label config_id=edpm --label container_name=nova_compute_init --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']} --log-driver journald --log-level info --network none --privileged=False --security-opt label=disable --user root --volume /dev/log:/dev/log --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844 bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init
Oct  8 06:04:12 np0005475493 systemd[1]: Started libcrun container.
Oct  8 06:04:12 np0005475493 podman[259283]: 2025-10-08 10:04:12.22761626 +0000 UTC m=+0.019483132 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 06:04:12 np0005475493 podman[259283]: 2025-10-08 10:04:12.326888065 +0000 UTC m=+0.118754937 container init e2b1fddfa4dd2c912668739b987a953a001a98cd47462f4fd12f63726f1522d3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_chebyshev, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  8 06:04:12 np0005475493 podman[259283]: 2025-10-08 10:04:12.33412199 +0000 UTC m=+0.125988842 container start e2b1fddfa4dd2c912668739b987a953a001a98cd47462f4fd12f63726f1522d3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_chebyshev, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Oct  8 06:04:12 np0005475493 podman[259283]: 2025-10-08 10:04:12.33785485 +0000 UTC m=+0.129721702 container attach e2b1fddfa4dd2c912668739b987a953a001a98cd47462f4fd12f63726f1522d3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_chebyshev, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Oct  8 06:04:12 np0005475493 sad_chebyshev[259325]: 167 167
Oct  8 06:04:12 np0005475493 systemd[1]: libpod-e2b1fddfa4dd2c912668739b987a953a001a98cd47462f4fd12f63726f1522d3.scope: Deactivated successfully.
Oct  8 06:04:12 np0005475493 podman[259283]: 2025-10-08 10:04:12.340835677 +0000 UTC m=+0.132702539 container died e2b1fddfa4dd2c912668739b987a953a001a98cd47462f4fd12f63726f1522d3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_chebyshev, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Oct  8 06:04:12 np0005475493 systemd[1]: var-lib-containers-storage-overlay-535185971cf6724a5965de38d47e1771e774d3c85a9f40b609e63e3de4eff4ab-merged.mount: Deactivated successfully.
Oct  8 06:04:12 np0005475493 podman[259283]: 2025-10-08 10:04:12.387111987 +0000 UTC m=+0.178978839 container remove e2b1fddfa4dd2c912668739b987a953a001a98cd47462f4fd12f63726f1522d3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_chebyshev, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct  8 06:04:12 np0005475493 systemd[1]: libpod-conmon-e2b1fddfa4dd2c912668739b987a953a001a98cd47462f4fd12f63726f1522d3.scope: Deactivated successfully.
Oct  8 06:04:12 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:04:12 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:04:12 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:04:12.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:04:12 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[252457]: 08/10/2025 10:04:12 : epoch 68e636eb : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f95b8003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:04:12 np0005475493 podman[259382]: 2025-10-08 10:04:12.589694309 +0000 UTC m=+0.068633394 container create 263199f3fe24b920bd1df3631db6417d67c0fffa3c425949ce43d72e1506d754 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_gauss, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  8 06:04:12 np0005475493 podman[259382]: 2025-10-08 10:04:12.542070796 +0000 UTC m=+0.021009901 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 06:04:12 np0005475493 systemd[1]: Started libpod-conmon-263199f3fe24b920bd1df3631db6417d67c0fffa3c425949ce43d72e1506d754.scope.
Oct  8 06:04:12 np0005475493 systemd[1]: Started libcrun container.
Oct  8 06:04:12 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a85e9308449801274e0acc2b2f2e3ca58161c77cbe6136e406b4656d6dd86ac4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  8 06:04:12 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a85e9308449801274e0acc2b2f2e3ca58161c77cbe6136e406b4656d6dd86ac4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 06:04:12 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a85e9308449801274e0acc2b2f2e3ca58161c77cbe6136e406b4656d6dd86ac4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 06:04:12 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a85e9308449801274e0acc2b2f2e3ca58161c77cbe6136e406b4656d6dd86ac4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  8 06:04:13 np0005475493 podman[259382]: 2025-10-08 10:04:13.387200343 +0000 UTC m=+0.866139518 container init 263199f3fe24b920bd1df3631db6417d67c0fffa3c425949ce43d72e1506d754 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_gauss, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  8 06:04:13 np0005475493 podman[259382]: 2025-10-08 10:04:13.399395828 +0000 UTC m=+0.878334943 container start 263199f3fe24b920bd1df3631db6417d67c0fffa3c425949ce43d72e1506d754 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_gauss, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  8 06:04:13 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:04:13 np0005475493 magical_gauss[259416]: {
Oct  8 06:04:13 np0005475493 magical_gauss[259416]:    "1": [
Oct  8 06:04:13 np0005475493 magical_gauss[259416]:        {
Oct  8 06:04:13 np0005475493 magical_gauss[259416]:            "devices": [
Oct  8 06:04:13 np0005475493 magical_gauss[259416]:                "/dev/loop3"
Oct  8 06:04:13 np0005475493 magical_gauss[259416]:            ],
Oct  8 06:04:13 np0005475493 magical_gauss[259416]:            "lv_name": "ceph_lv0",
Oct  8 06:04:13 np0005475493 magical_gauss[259416]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  8 06:04:13 np0005475493 magical_gauss[259416]:            "lv_size": "21470642176",
Oct  8 06:04:13 np0005475493 magical_gauss[259416]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=787292cc-8154-50c4-9e00-e9be3e817149,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=85fe3e7b-5e0f-4a19-934c-310215b2e933,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct  8 06:04:13 np0005475493 magical_gauss[259416]:            "lv_uuid": "znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj",
Oct  8 06:04:13 np0005475493 magical_gauss[259416]:            "name": "ceph_lv0",
Oct  8 06:04:13 np0005475493 magical_gauss[259416]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  8 06:04:13 np0005475493 magical_gauss[259416]:            "tags": {
Oct  8 06:04:13 np0005475493 magical_gauss[259416]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  8 06:04:13 np0005475493 magical_gauss[259416]:                "ceph.block_uuid": "znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj",
Oct  8 06:04:13 np0005475493 magical_gauss[259416]:                "ceph.cephx_lockbox_secret": "",
Oct  8 06:04:13 np0005475493 magical_gauss[259416]:                "ceph.cluster_fsid": "787292cc-8154-50c4-9e00-e9be3e817149",
Oct  8 06:04:13 np0005475493 magical_gauss[259416]:                "ceph.cluster_name": "ceph",
Oct  8 06:04:13 np0005475493 magical_gauss[259416]:                "ceph.crush_device_class": "",
Oct  8 06:04:13 np0005475493 magical_gauss[259416]:                "ceph.encrypted": "0",
Oct  8 06:04:13 np0005475493 magical_gauss[259416]:                "ceph.osd_fsid": "85fe3e7b-5e0f-4a19-934c-310215b2e933",
Oct  8 06:04:13 np0005475493 magical_gauss[259416]:                "ceph.osd_id": "1",
Oct  8 06:04:13 np0005475493 magical_gauss[259416]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  8 06:04:13 np0005475493 magical_gauss[259416]:                "ceph.type": "block",
Oct  8 06:04:13 np0005475493 magical_gauss[259416]:                "ceph.vdo": "0",
Oct  8 06:04:13 np0005475493 magical_gauss[259416]:                "ceph.with_tpm": "0"
Oct  8 06:04:13 np0005475493 magical_gauss[259416]:            },
Oct  8 06:04:13 np0005475493 magical_gauss[259416]:            "type": "block",
Oct  8 06:04:13 np0005475493 magical_gauss[259416]:            "vg_name": "ceph_vg0"
Oct  8 06:04:13 np0005475493 magical_gauss[259416]:        }
Oct  8 06:04:13 np0005475493 magical_gauss[259416]:    ]
Oct  8 06:04:13 np0005475493 magical_gauss[259416]: }
Oct  8 06:04:13 np0005475493 podman[259382]: 2025-10-08 10:04:13.731656101 +0000 UTC m=+1.210595206 container attach 263199f3fe24b920bd1df3631db6417d67c0fffa3c425949ce43d72e1506d754 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_gauss, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  8 06:04:13 np0005475493 systemd[1]: libpod-263199f3fe24b920bd1df3631db6417d67c0fffa3c425949ce43d72e1506d754.scope: Deactivated successfully.
Oct  8 06:04:13 np0005475493 podman[259426]: 2025-10-08 10:04:13.778247841 +0000 UTC m=+0.027189793 container died 263199f3fe24b920bd1df3631db6417d67c0fffa3c425949ce43d72e1506d754 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_gauss, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True)
Oct  8 06:04:13 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[252457]: 08/10/2025 10:04:13 : epoch 68e636eb : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f95bc004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:04:13 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v581: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 938 B/s wr, 3 op/s
Oct  8 06:04:14 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[252457]: 08/10/2025 10:04:14 : epoch 68e636eb : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f95d0001a60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:04:14 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:04:14 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:04:14 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:04:14.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:04:14 np0005475493 systemd[1]: var-lib-containers-storage-overlay-a85e9308449801274e0acc2b2f2e3ca58161c77cbe6136e406b4656d6dd86ac4-merged.mount: Deactivated successfully.
Oct  8 06:04:14 np0005475493 podman[259426]: 2025-10-08 10:04:14.346125236 +0000 UTC m=+0.595067168 container remove 263199f3fe24b920bd1df3631db6417d67c0fffa3c425949ce43d72e1506d754 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_gauss, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  8 06:04:14 np0005475493 systemd[1]: libpod-conmon-263199f3fe24b920bd1df3631db6417d67c0fffa3c425949ce43d72e1506d754.scope: Deactivated successfully.
Oct  8 06:04:14 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:04:14 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:04:14 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:04:14.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:04:14 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[252457]: 08/10/2025 10:04:14 : epoch 68e636eb : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f95b00016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:04:15 np0005475493 podman[259533]: 2025-10-08 10:04:14.959792076 +0000 UTC m=+0.023131390 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 06:04:15 np0005475493 podman[259533]: 2025-10-08 10:04:15.070488711 +0000 UTC m=+0.133827995 container create e21d64260393e79b0d6d58e2c5943d429a5018360b4f2fea5c16e05b0efd83c6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_sinoussi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  8 06:04:15 np0005475493 systemd[1]: Started libpod-conmon-e21d64260393e79b0d6d58e2c5943d429a5018360b4f2fea5c16e05b0efd83c6.scope.
Oct  8 06:04:15 np0005475493 systemd[1]: Started libcrun container.
Oct  8 06:04:15 np0005475493 podman[259533]: 2025-10-08 10:04:15.158919817 +0000 UTC m=+0.222259151 container init e21d64260393e79b0d6d58e2c5943d429a5018360b4f2fea5c16e05b0efd83c6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_sinoussi, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  8 06:04:15 np0005475493 podman[259533]: 2025-10-08 10:04:15.165099046 +0000 UTC m=+0.228438330 container start e21d64260393e79b0d6d58e2c5943d429a5018360b4f2fea5c16e05b0efd83c6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_sinoussi, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  8 06:04:15 np0005475493 elastic_sinoussi[259550]: 167 167
Oct  8 06:04:15 np0005475493 systemd[1]: libpod-e21d64260393e79b0d6d58e2c5943d429a5018360b4f2fea5c16e05b0efd83c6.scope: Deactivated successfully.
Oct  8 06:04:15 np0005475493 podman[259533]: 2025-10-08 10:04:15.191315556 +0000 UTC m=+0.254654840 container attach e21d64260393e79b0d6d58e2c5943d429a5018360b4f2fea5c16e05b0efd83c6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_sinoussi, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Oct  8 06:04:15 np0005475493 podman[259533]: 2025-10-08 10:04:15.191986118 +0000 UTC m=+0.255325392 container died e21d64260393e79b0d6d58e2c5943d429a5018360b4f2fea5c16e05b0efd83c6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_sinoussi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Oct  8 06:04:15 np0005475493 systemd[1]: var-lib-containers-storage-overlay-ba8cb00e5e4ca3891e4072f9532e67994b7ec62615360e50019acb434dab8f96-merged.mount: Deactivated successfully.
Oct  8 06:04:15 np0005475493 podman[259533]: 2025-10-08 10:04:15.733706595 +0000 UTC m=+0.797045899 container remove e21d64260393e79b0d6d58e2c5943d429a5018360b4f2fea5c16e05b0efd83c6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_sinoussi, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  8 06:04:15 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:04:15] "GET /metrics HTTP/1.1" 200 48342 "" "Prometheus/2.51.0"
Oct  8 06:04:15 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:04:15] "GET /metrics HTTP/1.1" 200 48342 "" "Prometheus/2.51.0"
Oct  8 06:04:15 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[252457]: 08/10/2025 10:04:15 : epoch 68e636eb : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f95b8003c10 fd 39 proxy ignored for local
Oct  8 06:04:15 np0005475493 kernel: ganesha.nfsd[259218]: segfault at 50 ip 00007f968ea3532e sp 00007f964fffe210 error 4 in libntirpc.so.5.8[7f968ea1a000+2c000] likely on CPU 1 (core 0, socket 1)
Oct  8 06:04:15 np0005475493 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Oct  8 06:04:15 np0005475493 systemd[1]: Started Process Core Dump (PID 259617/UID 0).
Oct  8 06:04:15 np0005475493 systemd[1]: libpod-conmon-e21d64260393e79b0d6d58e2c5943d429a5018360b4f2fea5c16e05b0efd83c6.scope: Deactivated successfully.
Oct  8 06:04:15 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v582: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 85 B/s wr, 0 op/s
Oct  8 06:04:15 np0005475493 podman[259627]: 2025-10-08 10:04:15.89465683 +0000 UTC m=+0.027120391 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 06:04:16 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:04:16 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:04:16 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:04:16.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:04:16 np0005475493 podman[259627]: 2025-10-08 10:04:16.047109958 +0000 UTC m=+0.179573479 container create 183e0b8de8637f1b93d74e47e5cc95b393e3e7d074b8e77b2ce63a699c5a2a20 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_heyrovsky, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  8 06:04:16 np0005475493 systemd[1]: Started libpod-conmon-183e0b8de8637f1b93d74e47e5cc95b393e3e7d074b8e77b2ce63a699c5a2a20.scope.
Oct  8 06:04:16 np0005475493 systemd[1]: Started libcrun container.
Oct  8 06:04:16 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a31c5806f97eff4fe65ca41f2cee8b4f0ddc011412f9b663efb0cb5482c4777/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  8 06:04:16 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a31c5806f97eff4fe65ca41f2cee8b4f0ddc011412f9b663efb0cb5482c4777/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 06:04:16 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a31c5806f97eff4fe65ca41f2cee8b4f0ddc011412f9b663efb0cb5482c4777/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 06:04:16 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a31c5806f97eff4fe65ca41f2cee8b4f0ddc011412f9b663efb0cb5482c4777/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  8 06:04:16 np0005475493 podman[259627]: 2025-10-08 10:04:16.219527324 +0000 UTC m=+0.351990875 container init 183e0b8de8637f1b93d74e47e5cc95b393e3e7d074b8e77b2ce63a699c5a2a20 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_heyrovsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Oct  8 06:04:16 np0005475493 podman[259627]: 2025-10-08 10:04:16.232276886 +0000 UTC m=+0.364740397 container start 183e0b8de8637f1b93d74e47e5cc95b393e3e7d074b8e77b2ce63a699c5a2a20 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_heyrovsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  8 06:04:16 np0005475493 python3.9[259717]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  8 06:04:16 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:04:16 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:04:16 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:04:16.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:04:16 np0005475493 podman[259627]: 2025-10-08 10:04:16.726264359 +0000 UTC m=+0.858727880 container attach 183e0b8de8637f1b93d74e47e5cc95b393e3e7d074b8e77b2ce63a699c5a2a20 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_heyrovsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  8 06:04:16 np0005475493 lvm[259821]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct  8 06:04:16 np0005475493 lvm[259821]: VG ceph_vg0 finished
Oct  8 06:04:17 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:04:17.079Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct  8 06:04:17 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:04:17.081Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:04:17 np0005475493 youthful_heyrovsky[259720]: {}
Oct  8 06:04:17 np0005475493 systemd[1]: libpod-183e0b8de8637f1b93d74e47e5cc95b393e3e7d074b8e77b2ce63a699c5a2a20.scope: Deactivated successfully.
Oct  8 06:04:17 np0005475493 systemd[1]: libpod-183e0b8de8637f1b93d74e47e5cc95b393e3e7d074b8e77b2ce63a699c5a2a20.scope: Consumed 1.094s CPU time.
Oct  8 06:04:17 np0005475493 podman[259952]: 2025-10-08 10:04:17.332255979 +0000 UTC m=+0.023414919 container died 183e0b8de8637f1b93d74e47e5cc95b393e3e7d074b8e77b2ce63a699c5a2a20 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_heyrovsky, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct  8 06:04:17 np0005475493 python3.9[259951]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute.json debug=False
Oct  8 06:04:17 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  8 06:04:17 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  8 06:04:17 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 06:04:17 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 06:04:17 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v583: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 85 B/s wr, 0 op/s
Oct  8 06:04:18 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:04:18 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:04:18 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:04:18.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:04:18 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 06:04:18 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 06:04:18 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 06:04:18 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 06:04:18 np0005475493 systemd-coredump[259622]: Process 252462 (ganesha.nfsd) of user 0 dumped core.#012#012Stack trace of thread 55:#012#0  0x00007f968ea3532e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)#012ELF object binary architecture: AMD x86-64
Oct  8 06:04:18 np0005475493 systemd[1]: systemd-coredump@8-259617-0.service: Deactivated successfully.
Oct  8 06:04:18 np0005475493 systemd[1]: systemd-coredump@8-259617-0.service: Consumed 1.289s CPU time.
Oct  8 06:04:18 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:04:18 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 06:04:18 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:04:18.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 06:04:18 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:04:18 np0005475493 systemd[1]: var-lib-containers-storage-overlay-1a31c5806f97eff4fe65ca41f2cee8b4f0ddc011412f9b663efb0cb5482c4777-merged.mount: Deactivated successfully.
Oct  8 06:04:19 np0005475493 podman[259952]: 2025-10-08 10:04:19.016687125 +0000 UTC m=+1.707846055 container remove 183e0b8de8637f1b93d74e47e5cc95b393e3e7d074b8e77b2ce63a699c5a2a20 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_heyrovsky, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  8 06:04:19 np0005475493 systemd[1]: libpod-conmon-183e0b8de8637f1b93d74e47e5cc95b393e3e7d074b8e77b2ce63a699c5a2a20.scope: Deactivated successfully.
Oct  8 06:04:19 np0005475493 podman[260122]: 2025-10-08 10:04:19.049511738 +0000 UTC m=+0.713520494 container died 66d9754add4f5d233bac2b4e75d179001d45850627c14f21452b2ea76b1c37dc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct  8 06:04:19 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  8 06:04:19 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:04:19 np0005475493 systemd[1]: var-lib-containers-storage-overlay-058d28317895a40b586ee5c061b698a4e724c357ad19ff7c07b4d7b8ef4bea6d-merged.mount: Deactivated successfully.
Oct  8 06:04:19 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  8 06:04:19 np0005475493 python3.9[260117]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct  8 06:04:19 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:04:19 np0005475493 podman[260122]: 2025-10-08 10:04:19.159716718 +0000 UTC m=+0.823725444 container remove 66d9754add4f5d233bac2b4e75d179001d45850627c14f21452b2ea76b1c37dc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct  8 06:04:19 np0005475493 systemd[1]: ceph-787292cc-8154-50c4-9e00-e9be3e817149@nfs.cephfs.2.0.compute-0.uynkmx.service: Main process exited, code=exited, status=139/n/a
Oct  8 06:04:19 np0005475493 systemd[1]: ceph-787292cc-8154-50c4-9e00-e9be3e817149@nfs.cephfs.2.0.compute-0.uynkmx.service: Failed with result 'exit-code'.
Oct  8 06:04:19 np0005475493 systemd[1]: ceph-787292cc-8154-50c4-9e00-e9be3e817149@nfs.cephfs.2.0.compute-0.uynkmx.service: Consumed 1.628s CPU time.
Oct  8 06:04:19 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v584: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Oct  8 06:04:20 np0005475493 python3[260345]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute.json log_base_path=/var/log/containers/stdouts debug=False
Oct  8 06:04:20 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:04:20 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:04:20 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:04:20.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:04:20 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:04:20 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:04:20 np0005475493 podman[260382]: 2025-10-08 10:04:20.227304182 +0000 UTC m=+0.055899862 container create 10bd2f5ab045dd8f96a9121ffea55bc5cdad29c81800340bfb8b3be5bc672ab2 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844, name=nova_compute, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, config_id=edpm, container_name=nova_compute, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']})
Oct  8 06:04:20 np0005475493 podman[260382]: 2025-10-08 10:04:20.196742112 +0000 UTC m=+0.025337822 image pull 7ac362f4e836cf46e10a309acb4abf774df9481a1d6404c213437495cfb42f5d quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844
Oct  8 06:04:20 np0005475493 python3[260345]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute --conmon-pidfile /run/nova_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --label config_id=edpm --label container_name=nova_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']} --log-driver journald --log-level info --network host --privileged=True --user nova --volume /var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro --volume /var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /etc/localtime:/etc/localtime:ro --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /var/lib/libvirt:/var/lib/libvirt --volume /run/libvirt:/run/libvirt:shared --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/iscsi:/var/lib/iscsi:z --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /etc/nvme:/etc/nvme --volume /var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844 kolla_start
Oct  8 06:04:20 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:04:20 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:04:20 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:04:20.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:04:21 np0005475493 python3.9[260572]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  8 06:04:21 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v585: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 B/s wr, 0 op/s
Oct  8 06:04:22 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:04:22 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:04:22 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:04:22.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:04:22 np0005475493 python3.9[260727]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 06:04:22 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:04:22 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:04:22 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:04:22.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:04:22 np0005475493 python3.9[260879]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759917862.1342084-5029-258447652750949/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  8 06:04:23 np0005475493 python3.9[260955]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct  8 06:04:23 np0005475493 systemd[1]: Reloading.
Oct  8 06:04:23 np0005475493 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  8 06:04:23 np0005475493 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  8 06:04:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-haproxy-nfs-cephfs-compute-0-cwhopp[96588]: [WARNING] 280/100423 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct  8 06:04:23 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v586: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Oct  8 06:04:23 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:04:24 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:04:24 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:04:24 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:04:24.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:04:24 np0005475493 python3.9[261067]: ansible-systemd Invoked with state=restarted name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  8 06:04:24 np0005475493 systemd[1]: Reloading.
Oct  8 06:04:24 np0005475493 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  8 06:04:24 np0005475493 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  8 06:04:24 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:04:24 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:04:24 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:04:24.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:04:24 np0005475493 systemd[1]: Starting nova_compute container...
Oct  8 06:04:24 np0005475493 systemd[1]: Started libcrun container.
Oct  8 06:04:24 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15207b115f66f3b9ef265fd8e31c6a3ae8a40fba9e93cc4c4511e0cb7364338e/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Oct  8 06:04:24 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15207b115f66f3b9ef265fd8e31c6a3ae8a40fba9e93cc4c4511e0cb7364338e/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Oct  8 06:04:24 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15207b115f66f3b9ef265fd8e31c6a3ae8a40fba9e93cc4c4511e0cb7364338e/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Oct  8 06:04:24 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15207b115f66f3b9ef265fd8e31c6a3ae8a40fba9e93cc4c4511e0cb7364338e/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Oct  8 06:04:24 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15207b115f66f3b9ef265fd8e31c6a3ae8a40fba9e93cc4c4511e0cb7364338e/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Oct  8 06:04:24 np0005475493 podman[261107]: 2025-10-08 10:04:24.884504578 +0000 UTC m=+0.140919066 container init 10bd2f5ab045dd8f96a9121ffea55bc5cdad29c81800340bfb8b3be5bc672ab2 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844, name=nova_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=edpm, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  8 06:04:24 np0005475493 podman[261107]: 2025-10-08 10:04:24.891642399 +0000 UTC m=+0.148056857 container start 10bd2f5ab045dd8f96a9121ffea55bc5cdad29c81800340bfb8b3be5bc672ab2 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844, name=nova_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, org.label-schema.schema-version=1.0)
Oct  8 06:04:24 np0005475493 nova_compute[261144]: + sudo -E kolla_set_configs
Oct  8 06:04:24 np0005475493 podman[261107]: nova_compute
Oct  8 06:04:24 np0005475493 systemd[1]: Started nova_compute container.
Oct  8 06:04:24 np0005475493 nova_compute[261144]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct  8 06:04:24 np0005475493 nova_compute[261144]: INFO:__main__:Validating config file
Oct  8 06:04:24 np0005475493 nova_compute[261144]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct  8 06:04:24 np0005475493 nova_compute[261144]: INFO:__main__:Copying service configuration files
Oct  8 06:04:24 np0005475493 nova_compute[261144]: INFO:__main__:Deleting /etc/nova/nova.conf
Oct  8 06:04:24 np0005475493 nova_compute[261144]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Oct  8 06:04:24 np0005475493 nova_compute[261144]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Oct  8 06:04:24 np0005475493 nova_compute[261144]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Oct  8 06:04:24 np0005475493 nova_compute[261144]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Oct  8 06:04:24 np0005475493 nova_compute[261144]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Oct  8 06:04:24 np0005475493 nova_compute[261144]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Oct  8 06:04:24 np0005475493 nova_compute[261144]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Oct  8 06:04:24 np0005475493 nova_compute[261144]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Oct  8 06:04:24 np0005475493 nova_compute[261144]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Oct  8 06:04:24 np0005475493 nova_compute[261144]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Oct  8 06:04:24 np0005475493 nova_compute[261144]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Oct  8 06:04:24 np0005475493 nova_compute[261144]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Oct  8 06:04:24 np0005475493 nova_compute[261144]: INFO:__main__:Deleting /etc/ceph
Oct  8 06:04:24 np0005475493 nova_compute[261144]: INFO:__main__:Creating directory /etc/ceph
Oct  8 06:04:24 np0005475493 nova_compute[261144]: INFO:__main__:Setting permission for /etc/ceph
Oct  8 06:04:24 np0005475493 nova_compute[261144]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Oct  8 06:04:24 np0005475493 nova_compute[261144]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Oct  8 06:04:24 np0005475493 nova_compute[261144]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Oct  8 06:04:24 np0005475493 nova_compute[261144]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Oct  8 06:04:24 np0005475493 nova_compute[261144]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Oct  8 06:04:24 np0005475493 nova_compute[261144]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Oct  8 06:04:24 np0005475493 nova_compute[261144]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Oct  8 06:04:24 np0005475493 nova_compute[261144]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Oct  8 06:04:24 np0005475493 nova_compute[261144]: INFO:__main__:Writing out command to execute
Oct  8 06:04:24 np0005475493 nova_compute[261144]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Oct  8 06:04:24 np0005475493 nova_compute[261144]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Oct  8 06:04:24 np0005475493 nova_compute[261144]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Oct  8 06:04:24 np0005475493 nova_compute[261144]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Oct  8 06:04:24 np0005475493 nova_compute[261144]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Oct  8 06:04:24 np0005475493 nova_compute[261144]: ++ cat /run_command
Oct  8 06:04:25 np0005475493 nova_compute[261144]: + CMD=nova-compute
Oct  8 06:04:25 np0005475493 nova_compute[261144]: + ARGS=
Oct  8 06:04:25 np0005475493 nova_compute[261144]: + sudo kolla_copy_cacerts
Oct  8 06:04:25 np0005475493 nova_compute[261144]: + [[ ! -n '' ]]
Oct  8 06:04:25 np0005475493 nova_compute[261144]: + . kolla_extend_start
Oct  8 06:04:25 np0005475493 nova_compute[261144]: Running command: 'nova-compute'
Oct  8 06:04:25 np0005475493 nova_compute[261144]: + echo 'Running command: '\''nova-compute'\'''
Oct  8 06:04:25 np0005475493 nova_compute[261144]: + umask 0022
Oct  8 06:04:25 np0005475493 nova_compute[261144]: + exec nova-compute
Oct  8 06:04:25 np0005475493 podman[261155]: 2025-10-08 10:04:25.058752362 +0000 UTC m=+0.098152790 container health_status 750e81e9592502f2fa35d047c8ae9bd75eb38ed415148db558454f5281397e7f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible)
Oct  8 06:04:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:04:25] "GET /metrics HTTP/1.1" 200 48342 "" "Prometheus/2.51.0"
Oct  8 06:04:25 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:04:25] "GET /metrics HTTP/1.1" 200 48342 "" "Prometheus/2.51.0"
Oct  8 06:04:25 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v587: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct  8 06:04:26 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:04:26 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:04:26 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:04:26.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:04:26 np0005475493 python3.9[261335]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  8 06:04:26 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:04:26 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:04:26 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:04:26.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:04:27 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:04:27.082Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct  8 06:04:27 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:04:27.083Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:04:27 np0005475493 python3.9[261486]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  8 06:04:27 np0005475493 python3.9[261637]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  8 06:04:27 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v588: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct  8 06:04:28 np0005475493 nova_compute[261144]: 2025-10-08 10:04:28.048 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Oct  8 06:04:28 np0005475493 nova_compute[261144]: 2025-10-08 10:04:28.049 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Oct  8 06:04:28 np0005475493 nova_compute[261144]: 2025-10-08 10:04:28.049 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Oct  8 06:04:28 np0005475493 nova_compute[261144]: 2025-10-08 10:04:28.049 2 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs#033[00m
Oct  8 06:04:28 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:04:28 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:04:28 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:04:28.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:04:28 np0005475493 nova_compute[261144]: 2025-10-08 10:04:28.229 2 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  8 06:04:28 np0005475493 nova_compute[261144]: 2025-10-08 10:04:28.252 2 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 0 in 0.022s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  8 06:04:28 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:04:28 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:04:28 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:04:28.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:04:28 np0005475493 python3.9[261794]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner state=absent executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Oct  8 06:04:28 np0005475493 nova_compute[261144]: 2025-10-08 10:04:28.817 2 INFO nova.virt.driver [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'#033[00m
Oct  8 06:04:28 np0005475493 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  8 06:04:28 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.088 2 INFO nova.compute.provider_config [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.103 2 DEBUG oslo_concurrency.lockutils [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.103 2 DEBUG oslo_concurrency.lockutils [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.103 2 DEBUG oslo_concurrency.lockutils [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.104 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.104 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.104 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.104 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.104 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.105 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.105 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.105 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.105 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.105 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.106 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.106 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.106 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.106 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.106 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.107 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.107 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.107 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.107 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.107 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.107 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.108 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.108 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.108 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.108 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.108 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.109 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.109 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.109 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.109 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.109 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.110 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.110 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.110 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.110 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.110 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.111 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.111 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.111 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.111 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.111 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.112 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.112 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.112 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.112 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.112 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.113 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.113 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.113 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.113 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.113 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.114 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.114 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.114 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.114 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.114 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.115 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.115 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.115 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.115 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.115 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.115 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.116 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.116 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.116 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.116 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.116 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.116 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.117 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.117 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.117 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.117 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.118 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.118 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.118 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.118 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.118 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.119 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.119 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.119 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.119 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.119 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.119 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.120 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.120 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.120 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.120 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.120 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.121 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.121 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.121 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.121 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.121 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.122 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.122 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.122 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.122 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.122 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.122 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.123 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.123 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.123 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.123 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.123 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.124 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.124 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.124 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.124 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.124 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.124 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.125 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.125 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.125 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.125 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.125 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.126 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.126 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.126 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.126 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.126 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.127 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.127 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.127 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.127 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.127 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.127 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.128 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.128 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.128 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.128 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.128 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.129 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.129 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.129 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.129 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.129 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.130 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.130 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.130 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.130 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.130 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.130 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.131 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.131 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.131 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.131 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.131 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.132 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.132 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.132 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.132 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.132 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.133 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.133 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.133 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.133 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.133 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.134 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.134 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.134 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.134 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.134 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.135 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.135 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.135 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.135 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.135 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.135 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.136 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.136 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.136 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.136 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.136 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.137 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.137 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.137 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.137 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.137 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.138 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.138 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.138 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.138 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.138 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.139 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.139 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.139 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.139 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.139 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.140 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.140 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.140 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.140 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.140 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.140 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.141 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.141 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.141 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.141 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.141 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.142 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.142 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.142 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.142 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.142 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.143 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.143 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.143 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.143 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.143 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.143 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.144 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.144 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.144 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.144 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.144 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.145 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.145 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.145 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.145 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.145 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.146 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.146 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.146 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.146 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.146 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.146 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.147 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.147 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.147 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.147 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.147 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.148 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.148 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.148 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.148 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.148 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.148 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.149 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.149 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.149 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.149 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.149 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.150 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.150 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.150 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.150 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.150 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.151 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.151 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.151 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.151 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.151 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.151 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.152 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.152 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.152 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.152 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.152 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.153 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.153 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.153 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.153 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.153 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.154 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.154 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.154 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.154 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.154 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.155 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.155 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.155 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.155 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.155 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.156 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.156 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.156 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.156 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.156 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.157 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.157 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.157 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.157 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.157 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.158 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.158 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.158 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.158 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.158 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.159 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.159 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.159 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.159 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.159 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.159 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.160 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.160 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.160 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.160 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.160 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.161 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.161 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.161 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.161 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.161 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.162 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.162 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.162 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.162 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.162 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.163 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.163 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.163 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.163 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.163 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.163 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.164 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.164 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.164 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.164 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.164 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.165 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.165 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.165 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.165 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.165 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.166 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.166 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.166 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.166 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.166 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.167 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.167 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.167 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.167 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.167 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.167 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.168 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.168 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.168 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.168 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.169 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.169 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.169 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.169 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.169 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.170 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.170 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.170 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.170 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.170 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.170 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.171 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.171 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.171 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.171 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.172 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.172 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.172 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.172 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.172 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.173 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.173 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.173 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.173 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.174 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.174 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.174 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.174 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.174 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.174 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.174 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.175 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.175 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.175 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.175 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.175 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.175 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.175 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.176 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.176 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.176 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.176 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.176 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.176 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.176 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.177 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.177 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.177 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.177 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.177 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.177 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.178 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.178 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.178 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.178 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.178 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.178 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.178 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.179 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.179 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.179 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.179 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.179 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.179 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.180 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.180 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.180 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.180 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.180 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.181 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.181 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.181 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.181 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.181 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.181 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.182 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.182 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.182 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.182 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.182 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.182 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.183 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.183 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.183 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.183 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.183 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.183 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.184 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.184 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.184 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.184 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.184 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.184 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.185 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.185 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.185 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.185 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.185 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.186 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.186 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.186 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.186 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.186 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.186 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.187 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.187 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.187 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.187 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.187 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.188 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.188 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.188 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.188 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.188 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.188 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.189 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.189 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.189 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.189 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.190 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.190 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.190 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.190 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.190 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.190 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.191 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.191 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.191 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.191 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.191 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.192 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.192 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.192 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.192 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.192 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.193 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.193 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.193 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.193 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.193 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.193 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.194 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.194 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.194 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.194 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.194 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.194 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.195 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.195 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.195 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.195 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.195 2 WARNING oslo_config.cfg [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Oct  8 06:04:29 np0005475493 nova_compute[261144]: live_migration_uri is deprecated for removal in favor of two other options that
Oct  8 06:04:29 np0005475493 nova_compute[261144]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Oct  8 06:04:29 np0005475493 nova_compute[261144]: and ``live_migration_inbound_addr`` respectively.
Oct  8 06:04:29 np0005475493 nova_compute[261144]: ).  Its value may be silently ignored in the future.#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.196 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.196 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.196 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.196 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.196 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.197 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.197 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.197 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.197 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.197 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.198 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.198 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.198 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.198 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.198 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.198 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.199 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.199 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.199 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] libvirt.rbd_secret_uuid        = 787292cc-8154-50c4-9e00-e9be3e817149 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.199 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.199 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.199 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.200 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.200 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.200 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.200 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.200 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.200 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.200 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.201 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.201 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.201 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.201 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.201 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.202 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.202 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.202 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.202 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.202 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.202 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.203 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.203 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.203 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.203 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.203 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.204 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.204 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.204 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.204 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.204 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.204 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.205 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.205 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.205 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.205 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.205 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.206 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.206 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.206 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.206 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.206 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.207 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.207 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.207 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.207 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.207 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.207 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.208 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.208 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.208 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.208 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.208 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.208 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.208 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.209 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.209 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.209 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.209 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.209 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.209 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.210 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.210 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.210 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.210 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.210 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.211 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.211 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.211 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.211 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.211 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.211 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.212 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.212 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.212 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.212 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.212 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.212 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.212 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.212 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.213 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.213 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.213 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.213 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.213 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.213 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.214 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.214 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.214 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.214 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.214 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.214 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.215 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.215 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.215 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.215 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.215 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.215 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.216 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.216 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.216 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.216 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.216 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.216 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.216 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.217 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.217 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.217 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.217 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.217 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.217 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.218 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.218 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.218 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.218 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.218 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.218 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.219 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.219 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.219 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.219 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.219 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.220 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.220 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.220 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.220 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.220 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.220 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.220 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.221 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.221 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.221 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.221 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.221 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.221 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.222 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.222 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.222 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.222 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.222 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.223 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.223 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.223 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.223 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.223 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.223 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.224 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.224 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.224 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.224 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.224 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.224 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.224 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.225 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.225 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.225 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.225 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.225 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.225 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.226 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.226 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.226 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.226 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.226 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.226 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.227 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.227 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.227 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.227 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.227 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.227 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.228 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.228 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.228 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.228 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.228 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.228 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.229 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.229 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.229 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.229 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.229 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.229 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.230 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.230 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.230 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.230 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.230 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.230 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.231 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.231 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.231 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.231 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.231 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.231 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.231 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.232 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.232 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.232 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.232 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.232 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.232 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.232 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.233 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.233 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.233 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.233 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.233 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.233 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.234 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.234 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.234 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.234 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.234 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.234 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.235 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.235 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.235 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.235 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.235 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.235 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.236 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.236 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.236 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.236 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.236 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.236 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.237 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.237 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.237 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.237 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.237 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.238 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.238 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.238 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.238 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.238 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.238 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.239 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.239 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.239 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.239 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.239 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.239 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.239 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.240 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.240 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.240 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.240 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.240 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.240 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.240 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.241 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.241 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.241 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.241 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.241 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.241 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.242 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.242 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.242 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.242 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.242 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.242 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.243 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.243 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.243 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.243 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.243 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.243 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.244 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.244 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.244 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.244 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.244 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.244 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.245 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.245 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.245 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.245 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.245 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.245 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.246 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.246 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.246 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.246 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.246 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.246 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.247 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.247 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.247 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.247 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.247 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.247 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.248 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.248 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.248 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.248 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.248 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.249 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.249 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.249 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.249 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.249 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.250 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.250 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.250 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.250 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.250 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.251 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.251 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.251 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.251 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.251 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.251 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.252 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.252 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.252 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.252 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.252 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.253 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.253 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.253 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.253 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.254 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.254 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.254 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.254 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.254 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.255 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.255 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.255 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.255 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.255 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.255 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.255 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.256 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.256 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.256 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.256 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.256 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.256 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.257 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.257 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.257 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.257 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.257 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.258 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.258 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.258 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.258 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.258 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.258 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.258 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.259 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.259 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.259 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.259 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.259 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.259 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.259 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.260 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.260 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.260 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.260 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.260 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.260 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.261 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.261 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.261 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.261 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.261 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.262 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.262 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.262 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.262 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.262 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.263 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.263 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.263 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.263 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.263 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.264 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.264 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.264 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.264 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.264 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.264 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.264 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.265 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.265 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.265 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.265 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.265 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.265 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.265 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.266 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.266 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.266 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.266 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.266 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.266 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.267 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.267 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.267 2 DEBUG oslo_service.service [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.268 2 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.287 2 DEBUG nova.virt.libvirt.host [None req-8dca3da0-0fca-4b45-ac2e-54421443828a - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.288 2 DEBUG nova.virt.libvirt.host [None req-8dca3da0-0fca-4b45-ac2e-54421443828a - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.288 2 DEBUG nova.virt.libvirt.host [None req-8dca3da0-0fca-4b45-ac2e-54421443828a - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.288 2 DEBUG nova.virt.libvirt.host [None req-8dca3da0-0fca-4b45-ac2e-54421443828a - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503#033[00m
Oct  8 06:04:29 np0005475493 systemd[1]: Starting libvirt QEMU daemon...
Oct  8 06:04:29 np0005475493 systemd[1]: Started libvirt QEMU daemon.
Oct  8 06:04:29 np0005475493 systemd[1]: ceph-787292cc-8154-50c4-9e00-e9be3e817149@nfs.cephfs.2.0.compute-0.uynkmx.service: Scheduled restart job, restart counter is at 9.
Oct  8 06:04:29 np0005475493 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.uynkmx for 787292cc-8154-50c4-9e00-e9be3e817149.
Oct  8 06:04:29 np0005475493 systemd[1]: ceph-787292cc-8154-50c4-9e00-e9be3e817149@nfs.cephfs.2.0.compute-0.uynkmx.service: Consumed 1.628s CPU time.
Oct  8 06:04:29 np0005475493 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.uynkmx for 787292cc-8154-50c4-9e00-e9be3e817149...
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.364 2 DEBUG nova.virt.libvirt.host [None req-8dca3da0-0fca-4b45-ac2e-54421443828a - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f6553ebb4c0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.367 2 DEBUG nova.virt.libvirt.host [None req-8dca3da0-0fca-4b45-ac2e-54421443828a - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f6553ebb4c0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.368 2 INFO nova.virt.libvirt.driver [None req-8dca3da0-0fca-4b45-ac2e-54421443828a - - - - - -] Connection event '1' reason 'None'#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.381 2 WARNING nova.virt.libvirt.driver [None req-8dca3da0-0fca-4b45-ac2e-54421443828a - - - - - -] Cannot update service status on host "compute-0.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.#033[00m
Oct  8 06:04:29 np0005475493 nova_compute[261144]: 2025-10-08 10:04:29.382 2 DEBUG nova.virt.libvirt.volume.mount [None req-8dca3da0-0fca-4b45-ac2e-54421443828a - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130#033[00m
Oct  8 06:04:29 np0005475493 podman[262030]: 2025-10-08 10:04:29.559145679 +0000 UTC m=+0.042554919 container create dcd28dc3b591a8ad1bbef3775b31bab43e62da06b22c6c50b9245ad61c1024bd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct  8 06:04:29 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0368887a430f991d02246d619e7304973cce2d2c741718f4bff3761663df78c0/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Oct  8 06:04:29 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0368887a430f991d02246d619e7304973cce2d2c741718f4bff3761663df78c0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 06:04:29 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0368887a430f991d02246d619e7304973cce2d2c741718f4bff3761663df78c0/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Oct  8 06:04:29 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0368887a430f991d02246d619e7304973cce2d2c741718f4bff3761663df78c0/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.uynkmx-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Oct  8 06:04:29 np0005475493 podman[262030]: 2025-10-08 10:04:29.619109521 +0000 UTC m=+0.102518791 container init dcd28dc3b591a8ad1bbef3775b31bab43e62da06b22c6c50b9245ad61c1024bd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct  8 06:04:29 np0005475493 podman[262030]: 2025-10-08 10:04:29.624666771 +0000 UTC m=+0.108076011 container start dcd28dc3b591a8ad1bbef3775b31bab43e62da06b22c6c50b9245ad61c1024bd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  8 06:04:29 np0005475493 bash[262030]: dcd28dc3b591a8ad1bbef3775b31bab43e62da06b22c6c50b9245ad61c1024bd
Oct  8 06:04:29 np0005475493 podman[262030]: 2025-10-08 10:04:29.540161935 +0000 UTC m=+0.023571195 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 06:04:29 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:04:29 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Oct  8 06:04:29 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:04:29 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Oct  8 06:04:29 np0005475493 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.uynkmx for 787292cc-8154-50c4-9e00-e9be3e817149.
Oct  8 06:04:29 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:04:29 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Oct  8 06:04:29 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:04:29 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Oct  8 06:04:29 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:04:29 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Oct  8 06:04:29 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:04:29 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Oct  8 06:04:29 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:04:29 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Oct  8 06:04:29 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:04:29 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  8 06:04:29 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v589: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct  8 06:04:29 np0005475493 python3.9[262088]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  8 06:04:30 np0005475493 systemd[1]: Stopping nova_compute container...
Oct  8 06:04:30 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:04:30 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 06:04:30 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:04:30.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 06:04:30 np0005475493 nova_compute[261144]: 2025-10-08 10:04:30.117 2 DEBUG oslo_concurrency.lockutils [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  8 06:04:30 np0005475493 nova_compute[261144]: 2025-10-08 10:04:30.117 2 DEBUG oslo_concurrency.lockutils [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  8 06:04:30 np0005475493 nova_compute[261144]: 2025-10-08 10:04:30.117 2 DEBUG oslo_concurrency.lockutils [None req-256b64e0-19f7-4038-afc3-dee8db94f8e7 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  8 06:04:30 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:04:30 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:04:30 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:04:30.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:04:30 np0005475493 virtqemud[261885]: libvirt version: 10.10.0, package: 15.el9 (builder@centos.org, 2025-08-18-13:22:20, )
Oct  8 06:04:30 np0005475493 virtqemud[261885]: hostname: compute-0
Oct  8 06:04:30 np0005475493 virtqemud[261885]: End of file while reading data: Input/output error
Oct  8 06:04:30 np0005475493 systemd[1]: libpod-10bd2f5ab045dd8f96a9121ffea55bc5cdad29c81800340bfb8b3be5bc672ab2.scope: Deactivated successfully.
Oct  8 06:04:30 np0005475493 systemd[1]: libpod-10bd2f5ab045dd8f96a9121ffea55bc5cdad29c81800340bfb8b3be5bc672ab2.scope: Consumed 3.418s CPU time.
Oct  8 06:04:30 np0005475493 podman[262140]: 2025-10-08 10:04:30.669372594 +0000 UTC m=+0.626671672 container died 10bd2f5ab045dd8f96a9121ffea55bc5cdad29c81800340bfb8b3be5bc672ab2 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844, name=nova_compute, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=nova_compute, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct  8 06:04:30 np0005475493 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-10bd2f5ab045dd8f96a9121ffea55bc5cdad29c81800340bfb8b3be5bc672ab2-userdata-shm.mount: Deactivated successfully.
Oct  8 06:04:30 np0005475493 systemd[1]: var-lib-containers-storage-overlay-15207b115f66f3b9ef265fd8e31c6a3ae8a40fba9e93cc4c4511e0cb7364338e-merged.mount: Deactivated successfully.
Oct  8 06:04:31 np0005475493 ceph-osd[81751]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  8 06:04:31 np0005475493 ceph-osd[81751]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1200.1 total, 600.0 interval#012Cumulative writes: 9009 writes, 35K keys, 9009 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s#012Cumulative WAL: 9009 writes, 1887 syncs, 4.77 writes per sync, written: 0.02 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 764 writes, 1222 keys, 764 commit groups, 1.0 writes per commit group, ingest: 0.41 MB, 0.00 MB/s#012Interval WAL: 764 writes, 362 syncs, 2.11 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x559f28fb7350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.1e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x559f28fb7350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.1e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable
Oct  8 06:04:31 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v590: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct  8 06:04:32 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:04:32 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:04:32 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:04:32.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:04:32 np0005475493 podman[262169]: 2025-10-08 10:04:32.137729851 +0000 UTC m=+0.046397195 container health_status 96c17e36c3e57b043a764fc4d64e20610b8683e4881cc1857643eca7aeb37784 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct  8 06:04:32 np0005475493 podman[262140]: 2025-10-08 10:04:32.190680456 +0000 UTC m=+2.147979534 container cleanup 10bd2f5ab045dd8f96a9121ffea55bc5cdad29c81800340bfb8b3be5bc672ab2 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844, name=nova_compute, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, container_name=nova_compute, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  8 06:04:32 np0005475493 podman[262140]: nova_compute
Oct  8 06:04:32 np0005475493 podman[262191]: nova_compute
Oct  8 06:04:32 np0005475493 systemd[1]: edpm_nova_compute.service: Deactivated successfully.
Oct  8 06:04:32 np0005475493 systemd[1]: Stopped nova_compute container.
Oct  8 06:04:32 np0005475493 systemd[1]: Starting nova_compute container...
Oct  8 06:04:32 np0005475493 systemd[1]: Started libcrun container.
Oct  8 06:04:32 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15207b115f66f3b9ef265fd8e31c6a3ae8a40fba9e93cc4c4511e0cb7364338e/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Oct  8 06:04:32 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15207b115f66f3b9ef265fd8e31c6a3ae8a40fba9e93cc4c4511e0cb7364338e/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Oct  8 06:04:32 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15207b115f66f3b9ef265fd8e31c6a3ae8a40fba9e93cc4c4511e0cb7364338e/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Oct  8 06:04:32 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15207b115f66f3b9ef265fd8e31c6a3ae8a40fba9e93cc4c4511e0cb7364338e/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Oct  8 06:04:32 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15207b115f66f3b9ef265fd8e31c6a3ae8a40fba9e93cc4c4511e0cb7364338e/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Oct  8 06:04:32 np0005475493 podman[262204]: 2025-10-08 10:04:32.409258176 +0000 UTC m=+0.128010818 container init 10bd2f5ab045dd8f96a9121ffea55bc5cdad29c81800340bfb8b3be5bc672ab2 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844, name=nova_compute, io.buildah.version=1.41.3, container_name=nova_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct  8 06:04:32 np0005475493 podman[262204]: 2025-10-08 10:04:32.414584849 +0000 UTC m=+0.133337461 container start 10bd2f5ab045dd8f96a9121ffea55bc5cdad29c81800340bfb8b3be5bc672ab2 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844, name=nova_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=nova_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=edpm)
Oct  8 06:04:32 np0005475493 nova_compute[262220]: + sudo -E kolla_set_configs
Oct  8 06:04:32 np0005475493 podman[262204]: nova_compute
Oct  8 06:04:32 np0005475493 systemd[1]: Started nova_compute container.
Oct  8 06:04:32 np0005475493 nova_compute[262220]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct  8 06:04:32 np0005475493 nova_compute[262220]: INFO:__main__:Validating config file
Oct  8 06:04:32 np0005475493 nova_compute[262220]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct  8 06:04:32 np0005475493 nova_compute[262220]: INFO:__main__:Copying service configuration files
Oct  8 06:04:32 np0005475493 nova_compute[262220]: INFO:__main__:Deleting /etc/nova/nova.conf
Oct  8 06:04:32 np0005475493 nova_compute[262220]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Oct  8 06:04:32 np0005475493 nova_compute[262220]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Oct  8 06:04:32 np0005475493 nova_compute[262220]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf
Oct  8 06:04:32 np0005475493 nova_compute[262220]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Oct  8 06:04:32 np0005475493 nova_compute[262220]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Oct  8 06:04:32 np0005475493 nova_compute[262220]: INFO:__main__:Deleting /etc/nova/nova.conf.d/03-ceph-nova.conf
Oct  8 06:04:32 np0005475493 nova_compute[262220]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Oct  8 06:04:32 np0005475493 nova_compute[262220]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Oct  8 06:04:32 np0005475493 nova_compute[262220]: INFO:__main__:Deleting /etc/nova/nova.conf.d/25-nova-extra.conf
Oct  8 06:04:32 np0005475493 nova_compute[262220]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Oct  8 06:04:32 np0005475493 nova_compute[262220]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Oct  8 06:04:32 np0005475493 nova_compute[262220]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf
Oct  8 06:04:32 np0005475493 nova_compute[262220]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Oct  8 06:04:32 np0005475493 nova_compute[262220]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Oct  8 06:04:32 np0005475493 nova_compute[262220]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf
Oct  8 06:04:32 np0005475493 nova_compute[262220]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Oct  8 06:04:32 np0005475493 nova_compute[262220]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Oct  8 06:04:32 np0005475493 nova_compute[262220]: INFO:__main__:Deleting /etc/ceph
Oct  8 06:04:32 np0005475493 nova_compute[262220]: INFO:__main__:Creating directory /etc/ceph
Oct  8 06:04:32 np0005475493 nova_compute[262220]: INFO:__main__:Setting permission for /etc/ceph
Oct  8 06:04:32 np0005475493 nova_compute[262220]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Oct  8 06:04:32 np0005475493 nova_compute[262220]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Oct  8 06:04:32 np0005475493 nova_compute[262220]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Oct  8 06:04:32 np0005475493 nova_compute[262220]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Oct  8 06:04:32 np0005475493 nova_compute[262220]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey
Oct  8 06:04:32 np0005475493 nova_compute[262220]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Oct  8 06:04:32 np0005475493 nova_compute[262220]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Oct  8 06:04:32 np0005475493 nova_compute[262220]: INFO:__main__:Deleting /var/lib/nova/.ssh/config
Oct  8 06:04:32 np0005475493 nova_compute[262220]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Oct  8 06:04:32 np0005475493 nova_compute[262220]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Oct  8 06:04:32 np0005475493 nova_compute[262220]: INFO:__main__:Writing out command to execute
Oct  8 06:04:32 np0005475493 nova_compute[262220]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Oct  8 06:04:32 np0005475493 nova_compute[262220]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Oct  8 06:04:32 np0005475493 nova_compute[262220]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Oct  8 06:04:32 np0005475493 nova_compute[262220]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Oct  8 06:04:32 np0005475493 nova_compute[262220]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Oct  8 06:04:32 np0005475493 nova_compute[262220]: ++ cat /run_command
Oct  8 06:04:32 np0005475493 nova_compute[262220]: + CMD=nova-compute
Oct  8 06:04:32 np0005475493 nova_compute[262220]: + ARGS=
Oct  8 06:04:32 np0005475493 nova_compute[262220]: + sudo kolla_copy_cacerts
Oct  8 06:04:32 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:04:32 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:04:32 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:04:32.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:04:32 np0005475493 nova_compute[262220]: + [[ ! -n '' ]]
Oct  8 06:04:32 np0005475493 nova_compute[262220]: + . kolla_extend_start
Oct  8 06:04:32 np0005475493 nova_compute[262220]: Running command: 'nova-compute'
Oct  8 06:04:32 np0005475493 nova_compute[262220]: + echo 'Running command: '\''nova-compute'\'''
Oct  8 06:04:32 np0005475493 nova_compute[262220]: + umask 0022
Oct  8 06:04:32 np0005475493 nova_compute[262220]: + exec nova-compute
Oct  8 06:04:32 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  8 06:04:32 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  8 06:04:33 np0005475493 podman[262258]: 2025-10-08 10:04:33.931829289 +0000 UTC m=+0.088874490 container health_status 1ece418ccdf19a8019e8be8e665c938dc3c333817a211973536e2416f095c311 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct  8 06:04:33 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v591: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 511 B/s wr, 1 op/s
Oct  8 06:04:33 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:04:34 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:04:34 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 06:04:34 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:04:34.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 06:04:34 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:04:34 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:04:34 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:04:34.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:04:34 np0005475493 nova_compute[262220]: 2025-10-08 10:04:34.708 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Oct  8 06:04:34 np0005475493 nova_compute[262220]: 2025-10-08 10:04:34.709 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Oct  8 06:04:34 np0005475493 nova_compute[262220]: 2025-10-08 10:04:34.709 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Oct  8 06:04:34 np0005475493 nova_compute[262220]: 2025-10-08 10:04:34.709 2 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs#033[00m
Oct  8 06:04:34 np0005475493 nova_compute[262220]: 2025-10-08 10:04:34.848 2 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  8 06:04:34 np0005475493 nova_compute[262220]: 2025-10-08 10:04:34.877 2 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 0 in 0.029s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  8 06:04:34 np0005475493 python3.9[262408]: ansible-containers.podman.podman_container Invoked with name=nova_compute_init state=started executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Oct  8 06:04:35 np0005475493 systemd[1]: Started libpod-conmon-17713851e79d1ffcd0dd8102ba919b86e4d786f034fb81664009fcbd548787f8.scope.
Oct  8 06:04:35 np0005475493 systemd[1]: Started libcrun container.
Oct  8 06:04:35 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d531c804c7a8b89836f21851bdd3a7c846cca0fd115fa1914c09fa20f892011c/merged/usr/sbin/nova_statedir_ownership.py supports timestamps until 2038 (0x7fffffff)
Oct  8 06:04:35 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d531c804c7a8b89836f21851bdd3a7c846cca0fd115fa1914c09fa20f892011c/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Oct  8 06:04:35 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d531c804c7a8b89836f21851bdd3a7c846cca0fd115fa1914c09fa20f892011c/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff)
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.310 2 INFO nova.virt.driver [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'#033[00m
Oct  8 06:04:35 np0005475493 podman[262436]: 2025-10-08 10:04:35.356387916 +0000 UTC m=+0.363153845 container init 17713851e79d1ffcd0dd8102ba919b86e4d786f034fb81664009fcbd548787f8 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844, name=nova_compute_init, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=edpm, container_name=nova_compute_init, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.build-date=20251001)
Oct  8 06:04:35 np0005475493 podman[262436]: 2025-10-08 10:04:35.365105269 +0000 UTC m=+0.371871198 container start 17713851e79d1ffcd0dd8102ba919b86e4d786f034fb81664009fcbd548787f8 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844, name=nova_compute_init, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=edpm, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=nova_compute_init, managed_by=edpm_ansible)
Oct  8 06:04:35 np0005475493 python3.9[262408]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman start nova_compute_init
Oct  8 06:04:35 np0005475493 nova_compute_init[262458]: INFO:nova_statedir:Applying nova statedir ownership
Oct  8 06:04:35 np0005475493 nova_compute_init[262458]: INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436
Oct  8 06:04:35 np0005475493 nova_compute_init[262458]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/
Oct  8 06:04:35 np0005475493 nova_compute_init[262458]: INFO:nova_statedir:Changing ownership of /var/lib/nova from 1000:1000 to 42436:42436
Oct  8 06:04:35 np0005475493 nova_compute_init[262458]: INFO:nova_statedir:Setting selinux context of /var/lib/nova to system_u:object_r:container_file_t:s0
Oct  8 06:04:35 np0005475493 nova_compute_init[262458]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/instances/
Oct  8 06:04:35 np0005475493 nova_compute_init[262458]: INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 1000:1000 to 42436:42436
Oct  8 06:04:35 np0005475493 nova_compute_init[262458]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances to system_u:object_r:container_file_t:s0
Oct  8 06:04:35 np0005475493 nova_compute_init[262458]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/
Oct  8 06:04:35 np0005475493 nova_compute_init[262458]: INFO:nova_statedir:Ownership of /var/lib/nova/.ssh already 42436:42436
Oct  8 06:04:35 np0005475493 nova_compute_init[262458]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.ssh to system_u:object_r:container_file_t:s0
Oct  8 06:04:35 np0005475493 nova_compute_init[262458]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ssh-privatekey
Oct  8 06:04:35 np0005475493 nova_compute_init[262458]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/config
Oct  8 06:04:35 np0005475493 nova_compute_init[262458]: INFO:nova_statedir:Nova statedir ownership complete
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.415 2 INFO nova.compute.provider_config [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.#033[00m
Oct  8 06:04:35 np0005475493 systemd[1]: libpod-17713851e79d1ffcd0dd8102ba919b86e4d786f034fb81664009fcbd548787f8.scope: Deactivated successfully.
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.428 2 DEBUG oslo_concurrency.lockutils [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.429 2 DEBUG oslo_concurrency.lockutils [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.429 2 DEBUG oslo_concurrency.lockutils [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.429 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.429 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.429 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.430 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.430 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.430 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.430 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.430 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.430 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.430 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.431 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.431 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.431 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.431 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.431 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.431 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.432 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.432 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.432 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.432 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.432 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.432 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.433 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.433 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.433 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.433 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.433 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.433 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.434 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.434 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.434 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:35 np0005475493 podman[262459]: 2025-10-08 10:04:35.434505167 +0000 UTC m=+0.026107297 container died 17713851e79d1ffcd0dd8102ba919b86e4d786f034fb81664009fcbd548787f8 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844, name=nova_compute_init, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, container_name=nova_compute_init, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=edpm, tcib_managed=true)
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.434 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.434 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.434 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.435 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.435 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.435 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.435 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.435 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.435 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.436 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.436 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.436 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.436 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.436 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.436 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.437 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.437 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.437 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.437 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.437 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.437 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.437 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.438 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.438 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.438 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.438 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.438 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.438 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.439 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.439 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.439 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.439 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.439 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.439 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.439 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.440 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.440 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.440 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.440 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.440 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.440 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.441 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.441 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.441 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.441 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.441 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.441 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.442 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.442 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.442 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.442 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.442 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.443 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.443 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.443 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.443 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.443 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.444 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.444 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.444 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.444 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.444 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.445 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.445 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.445 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.445 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.445 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.446 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.446 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.446 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.446 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.446 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.446 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.447 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.447 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.447 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.447 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.447 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.447 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.448 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.448 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.448 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.448 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.448 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.448 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.448 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.448 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.449 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.449 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.449 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.449 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.449 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.449 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.449 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.450 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.450 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.450 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.450 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.450 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.450 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.450 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.450 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.451 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.451 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.451 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.451 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.451 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.451 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.452 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.452 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.452 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.452 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.452 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.452 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.452 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.453 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.453 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.453 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.453 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.453 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.454 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.454 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.454 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.454 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.454 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.454 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.454 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.455 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.455 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.455 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.455 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.455 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.455 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.455 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.456 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.456 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.456 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.456 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.456 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.456 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.457 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.457 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.457 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.457 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.457 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.458 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.458 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.458 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.458 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.458 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.458 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.458 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.459 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.459 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.459 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.459 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.459 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.459 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.460 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.460 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.460 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.460 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.460 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.460 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.460 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.461 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.461 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.461 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.461 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.461 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.461 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.461 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.462 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.462 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.462 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.462 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.462 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.462 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.462 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.463 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.463 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.463 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.463 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.463 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.463 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.464 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.464 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.464 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.464 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.464 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.464 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.464 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.465 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.465 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.465 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.465 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.465 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.465 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.465 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.466 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.466 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.466 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.466 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.466 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.466 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.467 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.467 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.467 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.467 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.467 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.467 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.467 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.468 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.468 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.468 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.468 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.468 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.468 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.468 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.469 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.469 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.469 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.469 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.469 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.469 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.469 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.470 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.470 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.470 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.470 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.470 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.470 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.471 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.471 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.471 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.471 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.471 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.471 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.471 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.472 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.472 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.472 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.472 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.472 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.472 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.472 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.472 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.473 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.473 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.473 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.473 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.473 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.473 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.473 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.474 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.474 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.474 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.474 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.474 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.474 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.474 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.475 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.475 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.475 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.475 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.475 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.475 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.476 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.476 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.476 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.476 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.476 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.476 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.476 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.477 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.477 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.477 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.477 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.477 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.477 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.477 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.478 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.478 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.478 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.478 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.478 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.478 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.478 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.478 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.479 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.479 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.479 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.479 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.479 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.479 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.479 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.480 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.480 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.480 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.480 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.480 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.480 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.480 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.481 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.481 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.481 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.481 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.481 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.481 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.481 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.481 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.482 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.482 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.482 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.482 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.482 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.482 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.482 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.483 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.483 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.483 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.483 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.483 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.483 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.484 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.484 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.484 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.484 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.484 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.484 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.485 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.485 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.485 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.485 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.485 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.485 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.485 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.485 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.486 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.486 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.486 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.486 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.486 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.486 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.486 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.487 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.487 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.487 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.487 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.487 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.487 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.487 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.488 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.488 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.488 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.488 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.488 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.488 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.488 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.489 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.489 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.489 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.489 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.489 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.489 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.489 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.490 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.490 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.490 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.490 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.490 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.490 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.490 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.491 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.491 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.491 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.491 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.491 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.491 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.491 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.492 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.492 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.492 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.492 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.492 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.492 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.492 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.492 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.493 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.493 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.493 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.493 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.493 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.493 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.493 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.494 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.494 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.494 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.494 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.494 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.494 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.494 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.495 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.495 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.495 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.495 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.495 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.495 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.495 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.496 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.496 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.496 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.496 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.496 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.496 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.496 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.497 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.497 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.497 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.497 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.497 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.497 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.497 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.498 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.498 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.498 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.498 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.498 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.498 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.498 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.499 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.499 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.499 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.499 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.499 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.499 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.499 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.500 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.500 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.500 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.500 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.500 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.500 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.500 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.501 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.501 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.501 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.501 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.501 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.501 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.501 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.501 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.502 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.502 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.502 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.502 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.502 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.502 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.502 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.503 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.503 2 WARNING oslo_config.cfg [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Oct  8 06:04:35 np0005475493 nova_compute[262220]: live_migration_uri is deprecated for removal in favor of two other options that
Oct  8 06:04:35 np0005475493 nova_compute[262220]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Oct  8 06:04:35 np0005475493 nova_compute[262220]: and ``live_migration_inbound_addr`` respectively.
Oct  8 06:04:35 np0005475493 nova_compute[262220]: ).  Its value may be silently ignored in the future.#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.503 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.503 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.503 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.504 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.504 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.504 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.504 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.504 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.504 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.505 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.505 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.505 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.505 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.505 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.505 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.505 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.506 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.506 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.506 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] libvirt.rbd_secret_uuid        = 787292cc-8154-50c4-9e00-e9be3e817149 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.506 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.506 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.506 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.506 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.507 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.507 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.507 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.507 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.507 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.507 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.508 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.508 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.508 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.508 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.508 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.508 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.508 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.509 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.509 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.509 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.509 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.509 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.509 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.509 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.510 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.510 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.510 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.510 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.510 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.510 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.510 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.511 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.511 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.511 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.511 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.511 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.511 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.511 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.511 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.512 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.512 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.512 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.512 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.512 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.512 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.512 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.513 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.513 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.513 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.513 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.513 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.513 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.513 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.514 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.514 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.514 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.514 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.514 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.514 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.514 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.514 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.515 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.515 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.515 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.515 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.515 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.515 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.515 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.516 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.516 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.516 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.516 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.516 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.516 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.516 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.517 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.517 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.517 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.517 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.517 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.517 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.517 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.518 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.518 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.518 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.518 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.518 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.518 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.518 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.518 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.519 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.519 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.519 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.519 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.519 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.519 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.519 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.520 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.520 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.520 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.520 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.520 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.520 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.520 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.521 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.521 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.521 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.521 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.521 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.521 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.521 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.521 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.522 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.522 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.522 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.522 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.522 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.522 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.523 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.523 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.523 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.523 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.523 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.523 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.524 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.524 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.524 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.524 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.524 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.524 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.524 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.525 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.525 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.525 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.525 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.525 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.525 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.525 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.525 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.526 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.526 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.526 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.526 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.526 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.526 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.526 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.527 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.527 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.527 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.527 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.527 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.527 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.527 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.528 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.528 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.528 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.528 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.528 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.528 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.528 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.529 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.529 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.529 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.529 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.529 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.529 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.530 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.530 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.530 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.530 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.530 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.530 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.530 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.530 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.531 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.531 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.531 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.531 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.531 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.531 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.532 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.532 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.532 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.532 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.532 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.532 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.532 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.533 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.533 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.533 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.533 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.533 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.533 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.533 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.534 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.534 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.534 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.534 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.534 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.534 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.534 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.535 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.535 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.535 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.535 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.535 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.535 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.535 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.536 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.536 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.536 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.536 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.536 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.536 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.536 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.537 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.537 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.537 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.537 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.537 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.537 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.538 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.538 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.538 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.538 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.538 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.538 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.538 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.539 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.539 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.539 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.539 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.539 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.539 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.540 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.540 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.540 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.540 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.540 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.540 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.540 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.541 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.541 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.541 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.541 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.541 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.541 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.542 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.542 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.542 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.542 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.542 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.542 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.543 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.543 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.543 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.543 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.543 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.543 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.543 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.544 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.544 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.544 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.544 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.544 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.544 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.544 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.545 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.545 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.545 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.545 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.545 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.545 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.545 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.546 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.546 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.546 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.546 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.546 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.546 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.546 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.547 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.547 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.547 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.547 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.547 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.547 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.547 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.548 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.548 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.548 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.548 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.548 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.548 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.548 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.549 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.549 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.549 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.549 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.549 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.549 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.549 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.550 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.550 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.550 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.550 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.550 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.550 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.550 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.550 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.551 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.551 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.551 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.551 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.551 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.551 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.551 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.552 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.552 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.552 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.552 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.552 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.552 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.552 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-17713851e79d1ffcd0dd8102ba919b86e4d786f034fb81664009fcbd548787f8-userdata-shm.mount: Deactivated successfully.
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.553 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.553 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.553 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.553 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.553 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.553 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.554 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.554 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.554 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.554 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.554 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.554 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.554 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.555 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.555 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.555 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.555 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.555 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.555 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.556 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.556 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.556 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.556 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.556 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.556 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.556 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.556 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 systemd[1]: var-lib-containers-storage-overlay-d531c804c7a8b89836f21851bdd3a7c846cca0fd115fa1914c09fa20f892011c-merged.mount: Deactivated successfully.
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.557 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.557 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.557 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.557 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.557 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.557 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.557 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.558 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.558 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.558 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.558 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.558 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.558 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.558 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.558 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.559 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.559 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.559 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.559 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.559 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.559 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.560 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.560 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.560 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.560 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.560 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.560 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.560 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.561 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.561 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.561 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.561 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.561 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.561 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.561 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.562 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.562 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.562 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.562 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.562 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.562 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.562 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.563 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.563 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.563 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.563 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.563 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.563 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.563 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.564 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.564 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.564 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.564 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.564 2 DEBUG oslo_service.service [None req-75be25b4-e8df-4549-9ea8-07ac674da96d - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.565 2 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.579 2 DEBUG nova.virt.libvirt.host [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.580 2 DEBUG nova.virt.libvirt.host [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.580 2 DEBUG nova.virt.libvirt.host [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.580 2 DEBUG nova.virt.libvirt.host [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503#033[00m
Oct  8 06:04:35 np0005475493 podman[262469]: 2025-10-08 10:04:35.584860907 +0000 UTC m=+0.151931442 container cleanup 17713851e79d1ffcd0dd8102ba919b86e4d786f034fb81664009fcbd548787f8 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844, name=nova_compute_init, config_id=edpm, container_name=nova_compute_init, org.label-schema.build-date=20251001, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.591 2 DEBUG nova.virt.libvirt.host [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7fc2df2f24f0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509#033[00m
Oct  8 06:04:35 np0005475493 systemd[1]: libpod-conmon-17713851e79d1ffcd0dd8102ba919b86e4d786f034fb81664009fcbd548787f8.scope: Deactivated successfully.
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.594 2 DEBUG nova.virt.libvirt.host [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7fc2df2f24f0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.595 2 INFO nova.virt.libvirt.driver [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] Connection event '1' reason 'None'#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.601 2 INFO nova.virt.libvirt.host [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] Libvirt host capabilities <capabilities>
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 
Oct  8 06:04:35 np0005475493 nova_compute[262220]:  <host>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    <uuid>a1287f1c-5981-4c2e-a0ce-6a9c84016045</uuid>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    <cpu>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <arch>x86_64</arch>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model>EPYC-Rome-v4</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <vendor>AMD</vendor>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <microcode version='16777317'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <signature family='23' model='49' stepping='0'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <maxphysaddr mode='emulate' bits='40'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <feature name='x2apic'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <feature name='tsc-deadline'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <feature name='osxsave'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <feature name='hypervisor'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <feature name='tsc_adjust'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <feature name='spec-ctrl'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <feature name='stibp'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <feature name='arch-capabilities'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <feature name='ssbd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <feature name='cmp_legacy'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <feature name='topoext'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <feature name='virt-ssbd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <feature name='lbrv'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <feature name='tsc-scale'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <feature name='vmcb-clean'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <feature name='pause-filter'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <feature name='pfthreshold'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <feature name='svme-addr-chk'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <feature name='rdctl-no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <feature name='skip-l1dfl-vmentry'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <feature name='mds-no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <feature name='pschange-mc-no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <pages unit='KiB' size='4'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <pages unit='KiB' size='2048'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <pages unit='KiB' size='1048576'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    </cpu>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    <power_management>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <suspend_mem/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    </power_management>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    <iommu support='no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    <migration_features>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <live/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <uri_transports>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <uri_transport>tcp</uri_transport>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <uri_transport>rdma</uri_transport>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </uri_transports>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    </migration_features>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    <topology>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <cells num='1'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <cell id='0'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:          <memory unit='KiB'>7864104</memory>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:          <pages unit='KiB' size='4'>1966026</pages>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:          <pages unit='KiB' size='2048'>0</pages>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:          <pages unit='KiB' size='1048576'>0</pages>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:          <distances>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:            <sibling id='0' value='10'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:          </distances>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:          <cpus num='8'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:            <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:            <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:            <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:            <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:            <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:            <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:            <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:            <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:          </cpus>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        </cell>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </cells>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    </topology>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    <cache>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    </cache>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    <secmodel>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model>selinux</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <doi>0</doi>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    </secmodel>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    <secmodel>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model>dac</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <doi>0</doi>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <baselabel type='kvm'>+107:+107</baselabel>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <baselabel type='qemu'>+107:+107</baselabel>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    </secmodel>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:  </host>
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 
Oct  8 06:04:35 np0005475493 nova_compute[262220]:  <guest>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    <os_type>hvm</os_type>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    <arch name='i686'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <wordsize>32</wordsize>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <machine canonical='pc-q35-rhel9.6.0' maxCpus='4096'>q35</machine>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <domain type='qemu'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <domain type='kvm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    </arch>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    <features>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <pae/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <nonpae/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <acpi default='on' toggle='yes'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <apic default='on' toggle='no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <cpuselection/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <deviceboot/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <disksnapshot default='on' toggle='no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <externalSnapshot/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    </features>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:  </guest>
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 
Oct  8 06:04:35 np0005475493 nova_compute[262220]:  <guest>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    <os_type>hvm</os_type>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    <arch name='x86_64'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <wordsize>64</wordsize>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <machine canonical='pc-q35-rhel9.6.0' maxCpus='4096'>q35</machine>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <domain type='qemu'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <domain type='kvm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    </arch>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    <features>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <acpi default='on' toggle='yes'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <apic default='on' toggle='no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <cpuselection/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <deviceboot/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <disksnapshot default='on' toggle='no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <externalSnapshot/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    </features>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:  </guest>
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 
Oct  8 06:04:35 np0005475493 nova_compute[262220]: </capabilities>
Oct  8 06:04:35 np0005475493 nova_compute[262220]: #033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.607 2 DEBUG nova.virt.libvirt.host [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] Getting domain capabilities for i686 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.609 2 WARNING nova.virt.libvirt.driver [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] Cannot update service status on host "compute-0.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.609 2 DEBUG nova.virt.libvirt.volume.mount [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.637 2 DEBUG nova.virt.libvirt.host [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Oct  8 06:04:35 np0005475493 nova_compute[262220]: <domainCapabilities>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:  <path>/usr/libexec/qemu-kvm</path>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:  <domain>kvm</domain>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:  <machine>pc-q35-rhel9.6.0</machine>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:  <arch>i686</arch>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:  <vcpu max='4096'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:  <iothreads supported='yes'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:  <os supported='yes'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    <enum name='firmware'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    <loader supported='yes'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <enum name='type'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>rom</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>pflash</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </enum>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <enum name='readonly'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>yes</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>no</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </enum>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <enum name='secure'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>no</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </enum>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    </loader>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:  </os>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:  <cpu>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    <mode name='host-passthrough' supported='yes'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <enum name='hostPassthroughMigratable'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>on</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>off</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </enum>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    </mode>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    <mode name='maximum' supported='yes'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <enum name='maximumMigratable'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>on</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>off</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </enum>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    </mode>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    <mode name='host-model' supported='yes'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model fallback='forbid'>EPYC-Rome</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <vendor>AMD</vendor>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <maxphysaddr mode='passthrough' limit='40'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <feature policy='require' name='x2apic'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <feature policy='require' name='tsc-deadline'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <feature policy='require' name='hypervisor'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <feature policy='require' name='tsc_adjust'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <feature policy='require' name='spec-ctrl'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <feature policy='require' name='stibp'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <feature policy='require' name='arch-capabilities'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <feature policy='require' name='ssbd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <feature policy='require' name='cmp_legacy'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <feature policy='require' name='overflow-recov'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <feature policy='require' name='succor'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <feature policy='require' name='ibrs'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <feature policy='require' name='amd-ssbd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <feature policy='require' name='virt-ssbd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <feature policy='require' name='lbrv'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <feature policy='require' name='tsc-scale'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <feature policy='require' name='vmcb-clean'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <feature policy='require' name='flushbyasid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <feature policy='require' name='pause-filter'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <feature policy='require' name='pfthreshold'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <feature policy='require' name='svme-addr-chk'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <feature policy='require' name='lfence-always-serializing'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <feature policy='require' name='rdctl-no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <feature policy='require' name='mds-no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <feature policy='require' name='pschange-mc-no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <feature policy='require' name='gds-no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <feature policy='require' name='rfds-no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <feature policy='disable' name='xsaves'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    </mode>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    <mode name='custom' supported='yes'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Broadwell'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='hle'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='rtm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Broadwell-IBRS'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='hle'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='rtm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Broadwell-noTSX'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Broadwell-noTSX-IBRS'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Broadwell-v1'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='hle'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='rtm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Broadwell-v2'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Broadwell-v3'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='hle'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='rtm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Broadwell-v4'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Cascadelake-Server'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bw'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512cd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512dq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512f'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vl'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vnni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='hle'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pku'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='rtm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Cascadelake-Server-noTSX'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bw'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512cd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512dq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512f'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vl'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vnni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='ibrs-all'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pku'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Cascadelake-Server-v1'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bw'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512cd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512dq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512f'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vl'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vnni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='hle'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pku'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='rtm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Cascadelake-Server-v2'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bw'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512cd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512dq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512f'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vl'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vnni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='hle'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='ibrs-all'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pku'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='rtm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Cascadelake-Server-v3'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bw'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512cd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512dq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512f'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vl'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vnni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='ibrs-all'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pku'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Cascadelake-Server-v4'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bw'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512cd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512dq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512f'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vl'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vnni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='ibrs-all'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pku'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Cascadelake-Server-v5'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bw'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512cd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512dq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512f'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vl'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vnni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='ibrs-all'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pku'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='xsaves'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Cooperlake'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512-bf16'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bw'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512cd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512dq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512f'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vl'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vnni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='hle'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='ibrs-all'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pku'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='rtm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='taa-no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Cooperlake-v1'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512-bf16'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bw'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512cd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512dq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512f'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vl'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vnni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='hle'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='ibrs-all'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pku'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='rtm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='taa-no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Cooperlake-v2'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512-bf16'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bw'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512cd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512dq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512f'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vl'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vnni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='hle'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='ibrs-all'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pku'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='rtm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='taa-no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='xsaves'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Denverton'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='mpx'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Denverton-v1'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='mpx'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Denverton-v2'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Denverton-v3'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='xsaves'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Dhyana-v2'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='xsaves'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='EPYC-Genoa'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='amd-psfd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='auto-ibrs'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512-bf16'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512-vpopcntdq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bitalg'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bw'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512cd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512dq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512f'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512ifma'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vbmi'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vbmi2'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vl'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vnni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='fsrm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='gfni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='la57'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='no-nested-data-bp'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='null-sel-clr-base'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pku'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='stibp-always-on'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='vaes'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='vpclmulqdq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='xsaves'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='EPYC-Genoa-v1'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='amd-psfd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='auto-ibrs'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512-bf16'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512-vpopcntdq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bitalg'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bw'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512cd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512dq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512f'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512ifma'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vbmi'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vbmi2'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vl'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vnni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='fsrm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='gfni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='la57'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='no-nested-data-bp'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='null-sel-clr-base'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pku'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='stibp-always-on'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='vaes'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='vpclmulqdq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='xsaves'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='EPYC-Milan'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='fsrm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pku'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='xsaves'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='EPYC-Milan-v1'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='fsrm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pku'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='xsaves'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='EPYC-Milan-v2'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='amd-psfd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='fsrm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='no-nested-data-bp'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='null-sel-clr-base'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pku'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='stibp-always-on'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='vaes'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='vpclmulqdq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='xsaves'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='EPYC-Rome'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='xsaves'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='EPYC-Rome-v1'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='xsaves'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='EPYC-Rome-v2'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='xsaves'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='EPYC-Rome-v3'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='xsaves'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='EPYC-v3'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='xsaves'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='EPYC-v4'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='xsaves'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='GraniteRapids'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='amx-bf16'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='amx-fp16'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='amx-int8'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='amx-tile'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx-vnni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512-bf16'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512-fp16'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512-vpopcntdq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bitalg'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bw'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512cd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512dq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512f'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512ifma'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vbmi'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vbmi2'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vl'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vnni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='bus-lock-detect'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='fbsdp-no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='fsrc'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='fsrm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='fsrs'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='fzrm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='gfni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='hle'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='ibrs-all'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='la57'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='mcdt-no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pbrsb-no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pku'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='prefetchiti'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='psdp-no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='rtm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='sbdr-ssdp-no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='serialize'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='taa-no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='tsx-ldtrk'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='vaes'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='vpclmulqdq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='xfd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='xsaves'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='GraniteRapids-v1'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='amx-bf16'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='amx-fp16'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='amx-int8'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='amx-tile'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx-vnni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512-bf16'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512-fp16'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512-vpopcntdq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bitalg'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bw'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512cd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512dq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512f'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512ifma'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vbmi'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vbmi2'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vl'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vnni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='bus-lock-detect'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='fbsdp-no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='fsrc'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='fsrm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='fsrs'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='fzrm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='gfni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='hle'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='ibrs-all'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='la57'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='mcdt-no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pbrsb-no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pku'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='prefetchiti'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='psdp-no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='rtm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='sbdr-ssdp-no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='serialize'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='taa-no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='tsx-ldtrk'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='vaes'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='vpclmulqdq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='xfd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='xsaves'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='GraniteRapids-v2'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='amx-bf16'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='amx-fp16'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='amx-int8'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='amx-tile'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx-vnni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx10'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx10-128'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx10-256'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx10-512'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512-bf16'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512-fp16'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512-vpopcntdq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bitalg'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bw'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512cd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512dq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512f'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512ifma'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vbmi'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vbmi2'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vl'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vnni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='bus-lock-detect'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='cldemote'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='fbsdp-no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='fsrc'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='fsrm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='fsrs'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='fzrm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='gfni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='hle'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='ibrs-all'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='la57'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='mcdt-no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='movdir64b'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='movdiri'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pbrsb-no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pku'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='prefetchiti'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='psdp-no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='rtm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='sbdr-ssdp-no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='serialize'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='ss'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='taa-no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='tsx-ldtrk'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='vaes'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='vpclmulqdq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='xfd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='xsaves'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Haswell'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='hle'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='rtm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Haswell-IBRS'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='hle'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='rtm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Haswell-noTSX'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Haswell-noTSX-IBRS'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Haswell-v1'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='hle'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='rtm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Haswell-v2'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Haswell-v3'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='hle'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='rtm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Haswell-v4'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Icelake-Server'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512-vpopcntdq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bitalg'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bw'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512cd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512dq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512f'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vbmi'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vbmi2'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vl'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vnni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='gfni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='hle'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='la57'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pku'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='rtm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='vaes'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='vpclmulqdq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Icelake-Server-noTSX'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512-vpopcntdq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bitalg'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bw'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512cd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512dq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512f'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vbmi'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vbmi2'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vl'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vnni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='gfni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='la57'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pku'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='vaes'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='vpclmulqdq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Icelake-Server-v1'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512-vpopcntdq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bitalg'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bw'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512cd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512dq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512f'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vbmi'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vbmi2'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vl'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vnni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='gfni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='hle'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='la57'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pku'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='rtm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='vaes'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='vpclmulqdq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Icelake-Server-v2'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512-vpopcntdq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bitalg'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bw'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512cd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512dq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512f'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vbmi'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vbmi2'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vl'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vnni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='gfni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='la57'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pku'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='vaes'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='vpclmulqdq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Icelake-Server-v3'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512-vpopcntdq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bitalg'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bw'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512cd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512dq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512f'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vbmi'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vbmi2'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vl'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vnni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='gfni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='ibrs-all'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='la57'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pku'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='taa-no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='vaes'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='vpclmulqdq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Icelake-Server-v4'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512-vpopcntdq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bitalg'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bw'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512cd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512dq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512f'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512ifma'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vbmi'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vbmi2'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vl'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vnni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='fsrm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='gfni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='ibrs-all'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='la57'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pku'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='taa-no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='vaes'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='vpclmulqdq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Icelake-Server-v5'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512-vpopcntdq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bitalg'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bw'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512cd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512dq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512f'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512ifma'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vbmi'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vbmi2'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vl'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vnni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='fsrm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='gfni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='ibrs-all'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='la57'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pku'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='taa-no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='vaes'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='vpclmulqdq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='xsaves'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Icelake-Server-v6'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512-vpopcntdq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bitalg'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bw'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512cd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512dq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512f'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512ifma'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vbmi'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vbmi2'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vl'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vnni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='fsrm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='gfni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='ibrs-all'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='la57'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pku'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='taa-no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='vaes'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='vpclmulqdq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='xsaves'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Icelake-Server-v7'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512-vpopcntdq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bitalg'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bw'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512cd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512dq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512f'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512ifma'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vbmi'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vbmi2'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vl'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vnni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='fsrm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='gfni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='hle'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='ibrs-all'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='la57'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pku'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='rtm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='taa-no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='vaes'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='vpclmulqdq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='xsaves'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='IvyBridge'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='IvyBridge-IBRS'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='IvyBridge-v1'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='IvyBridge-v2'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='KnightsMill'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512-4fmaps'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512-4vnniw'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512-vpopcntdq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512cd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512er'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512f'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512pf'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='ss'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='KnightsMill-v1'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512-4fmaps'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512-4vnniw'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512-vpopcntdq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512cd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512er'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512f'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512pf'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='ss'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Opteron_G4'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='fma4'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='xop'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Opteron_G4-v1'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='fma4'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='xop'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Opteron_G5'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='fma4'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='tbm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='xop'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Opteron_G5-v1'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='fma4'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='tbm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='xop'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='SapphireRapids'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='amx-bf16'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='amx-int8'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='amx-tile'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx-vnni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512-bf16'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512-fp16'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512-vpopcntdq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bitalg'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bw'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512cd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512dq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512f'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512ifma'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vbmi'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vbmi2'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vl'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vnni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='bus-lock-detect'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='fsrc'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='fsrm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='fsrs'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='fzrm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='gfni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='hle'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='ibrs-all'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='la57'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pku'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='rtm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='serialize'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='taa-no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='tsx-ldtrk'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='vaes'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='vpclmulqdq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='xfd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='xsaves'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='SapphireRapids-v1'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='amx-bf16'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='amx-int8'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='amx-tile'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx-vnni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512-bf16'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512-fp16'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512-vpopcntdq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bitalg'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bw'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512cd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512dq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512f'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512ifma'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vbmi'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vbmi2'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vl'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vnni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='bus-lock-detect'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='fsrc'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='fsrm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='fsrs'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='fzrm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='gfni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='hle'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='ibrs-all'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='la57'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pku'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='rtm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='serialize'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='taa-no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='tsx-ldtrk'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='vaes'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='vpclmulqdq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='xfd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='xsaves'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='SapphireRapids-v2'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='amx-bf16'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='amx-int8'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='amx-tile'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx-vnni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512-bf16'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512-fp16'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512-vpopcntdq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bitalg'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bw'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512cd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512dq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512f'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512ifma'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vbmi'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vbmi2'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vl'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vnni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='bus-lock-detect'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='fbsdp-no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='fsrc'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='fsrm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='fsrs'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='fzrm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='gfni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='hle'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='ibrs-all'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='la57'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pku'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='psdp-no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='rtm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='sbdr-ssdp-no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='serialize'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='taa-no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='tsx-ldtrk'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='vaes'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='vpclmulqdq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='xfd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='xsaves'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='SapphireRapids-v3'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='amx-bf16'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='amx-int8'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='amx-tile'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx-vnni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512-bf16'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512-fp16'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512-vpopcntdq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bitalg'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bw'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512cd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512dq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512f'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512ifma'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vbmi'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vbmi2'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vl'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vnni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='bus-lock-detect'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='cldemote'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='fbsdp-no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='fsrc'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='fsrm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='fsrs'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='fzrm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='gfni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='hle'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='ibrs-all'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='la57'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='movdir64b'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='movdiri'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pku'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='psdp-no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='rtm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='sbdr-ssdp-no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='serialize'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='ss'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='taa-no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='tsx-ldtrk'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='vaes'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='vpclmulqdq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='xfd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='xsaves'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='SierraForest'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx-ifma'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx-ne-convert'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx-vnni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx-vnni-int8'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='bus-lock-detect'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='cmpccxadd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='fbsdp-no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='fsrm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='fsrs'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='gfni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='ibrs-all'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='mcdt-no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pbrsb-no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pku'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='psdp-no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='sbdr-ssdp-no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='serialize'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='vaes'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='vpclmulqdq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='xsaves'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='SierraForest-v1'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx-ifma'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx-ne-convert'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx-vnni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx-vnni-int8'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='bus-lock-detect'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='cmpccxadd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='fbsdp-no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='fsrm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='fsrs'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='gfni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='ibrs-all'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='mcdt-no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pbrsb-no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pku'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='psdp-no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='sbdr-ssdp-no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='serialize'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='vaes'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='vpclmulqdq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='xsaves'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Skylake-Client'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='hle'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='rtm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Skylake-Client-IBRS'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='hle'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='rtm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Skylake-Client-v1'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='hle'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='rtm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Skylake-Client-v2'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='hle'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='rtm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Skylake-Client-v3'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Skylake-Client-v4'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='xsaves'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Skylake-Server'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bw'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512cd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512dq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512f'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vl'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='hle'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pku'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='rtm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Skylake-Server-IBRS'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bw'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512cd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512dq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512f'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vl'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='hle'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pku'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='rtm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bw'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512cd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512dq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512f'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vl'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pku'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Skylake-Server-v1'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bw'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512cd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512dq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512f'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vl'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='hle'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pku'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='rtm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Skylake-Server-v2'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bw'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512cd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512dq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512f'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vl'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='hle'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pku'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='rtm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Skylake-Server-v3'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bw'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512cd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512dq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512f'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vl'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pku'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Skylake-Server-v4'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bw'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512cd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512dq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512f'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vl'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pku'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Skylake-Server-v5'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bw'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512cd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512dq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512f'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vl'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pku'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='xsaves'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Snowridge'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='cldemote'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='core-capability'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='gfni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='movdir64b'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='movdiri'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='mpx'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='split-lock-detect'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Snowridge-v1'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='cldemote'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='core-capability'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='gfni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='movdir64b'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='movdiri'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='mpx'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='split-lock-detect'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Snowridge-v2'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='cldemote'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='core-capability'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='gfni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='movdir64b'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='movdiri'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='split-lock-detect'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Snowridge-v3'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='cldemote'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='core-capability'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='gfni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='movdir64b'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='movdiri'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='split-lock-detect'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='xsaves'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Snowridge-v4'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='cldemote'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='gfni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='movdir64b'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='movdiri'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='xsaves'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='athlon'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='3dnow'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='3dnowext'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='athlon-v1'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='3dnow'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='3dnowext'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='core2duo'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='ss'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='core2duo-v1'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='ss'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='coreduo'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='ss'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='coreduo-v1'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='ss'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='n270'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='ss'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='n270-v1'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='ss'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='phenom'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='3dnow'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='3dnowext'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='phenom-v1'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='3dnow'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='3dnowext'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    </mode>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:  </cpu>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:  <memoryBacking supported='yes'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    <enum name='sourceType'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <value>file</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <value>anonymous</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <value>memfd</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    </enum>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:  </memoryBacking>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:  <devices>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    <disk supported='yes'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <enum name='diskDevice'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>disk</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>cdrom</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>floppy</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>lun</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </enum>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <enum name='bus'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>fdc</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>scsi</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>virtio</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>usb</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>sata</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </enum>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <enum name='model'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>virtio</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>virtio-transitional</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>virtio-non-transitional</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </enum>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    </disk>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    <graphics supported='yes'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <enum name='type'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>vnc</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>egl-headless</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>dbus</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </enum>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    </graphics>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    <video supported='yes'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <enum name='modelType'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>vga</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>cirrus</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>virtio</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>none</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>bochs</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>ramfb</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </enum>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    </video>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    <hostdev supported='yes'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <enum name='mode'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>subsystem</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </enum>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <enum name='startupPolicy'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>default</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>mandatory</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>requisite</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>optional</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </enum>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <enum name='subsysType'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>usb</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>pci</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>scsi</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </enum>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <enum name='capsType'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <enum name='pciBackend'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    </hostdev>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    <rng supported='yes'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <enum name='model'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>virtio</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>virtio-transitional</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>virtio-non-transitional</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </enum>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <enum name='backendModel'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>random</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>egd</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>builtin</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </enum>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    </rng>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    <filesystem supported='yes'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <enum name='driverType'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>path</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>handle</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>virtiofs</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </enum>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    </filesystem>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    <tpm supported='yes'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <enum name='model'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>tpm-tis</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>tpm-crb</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </enum>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <enum name='backendModel'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>emulator</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>external</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </enum>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <enum name='backendVersion'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>2.0</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </enum>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    </tpm>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    <redirdev supported='yes'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <enum name='bus'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>usb</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </enum>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    </redirdev>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    <channel supported='yes'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <enum name='type'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>pty</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>unix</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </enum>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    </channel>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    <crypto supported='yes'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <enum name='model'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <enum name='type'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>qemu</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </enum>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <enum name='backendModel'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>builtin</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </enum>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    </crypto>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    <interface supported='yes'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <enum name='backendType'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>default</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>passt</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </enum>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    </interface>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    <panic supported='yes'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <enum name='model'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>isa</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>hyperv</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </enum>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    </panic>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:  </devices>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:  <features>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    <gic supported='no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    <vmcoreinfo supported='yes'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    <genid supported='yes'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    <backingStoreInput supported='yes'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    <backup supported='yes'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    <async-teardown supported='yes'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    <ps2 supported='yes'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    <sev supported='no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    <sgx supported='no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    <hyperv supported='yes'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <enum name='features'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>relaxed</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>vapic</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>spinlocks</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>vpindex</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>runtime</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>synic</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>stimer</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>reset</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>vendor_id</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>frequencies</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>reenlightenment</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>tlbflush</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>ipi</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>avic</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>emsr_bitmap</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>xmm_input</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </enum>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    </hyperv>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    <launchSecurity supported='no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:  </features>
Oct  8 06:04:35 np0005475493 nova_compute[262220]: </domainCapabilities>
Oct  8 06:04:35 np0005475493 nova_compute[262220]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.642 2 DEBUG nova.virt.libvirt.host [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Oct  8 06:04:35 np0005475493 nova_compute[262220]: <domainCapabilities>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:  <path>/usr/libexec/qemu-kvm</path>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:  <domain>kvm</domain>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:  <machine>pc-i440fx-rhel7.6.0</machine>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:  <arch>i686</arch>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:  <vcpu max='240'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:  <iothreads supported='yes'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:  <os supported='yes'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    <enum name='firmware'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    <loader supported='yes'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <enum name='type'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>rom</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>pflash</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </enum>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <enum name='readonly'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>yes</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>no</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </enum>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <enum name='secure'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>no</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </enum>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    </loader>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:  </os>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:  <cpu>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    <mode name='host-passthrough' supported='yes'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <enum name='hostPassthroughMigratable'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>on</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>off</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </enum>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    </mode>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    <mode name='maximum' supported='yes'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <enum name='maximumMigratable'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>on</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>off</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </enum>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    </mode>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    <mode name='host-model' supported='yes'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model fallback='forbid'>EPYC-Rome</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <vendor>AMD</vendor>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <maxphysaddr mode='passthrough' limit='40'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <feature policy='require' name='x2apic'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <feature policy='require' name='tsc-deadline'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <feature policy='require' name='hypervisor'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <feature policy='require' name='tsc_adjust'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <feature policy='require' name='spec-ctrl'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <feature policy='require' name='stibp'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <feature policy='require' name='arch-capabilities'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <feature policy='require' name='ssbd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <feature policy='require' name='cmp_legacy'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <feature policy='require' name='overflow-recov'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <feature policy='require' name='succor'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <feature policy='require' name='ibrs'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <feature policy='require' name='amd-ssbd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <feature policy='require' name='virt-ssbd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <feature policy='require' name='lbrv'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <feature policy='require' name='tsc-scale'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <feature policy='require' name='vmcb-clean'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <feature policy='require' name='flushbyasid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <feature policy='require' name='pause-filter'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <feature policy='require' name='pfthreshold'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <feature policy='require' name='svme-addr-chk'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <feature policy='require' name='lfence-always-serializing'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <feature policy='require' name='rdctl-no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <feature policy='require' name='mds-no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <feature policy='require' name='pschange-mc-no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <feature policy='require' name='gds-no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <feature policy='require' name='rfds-no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <feature policy='disable' name='xsaves'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    </mode>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    <mode name='custom' supported='yes'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Broadwell'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='hle'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='rtm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Broadwell-IBRS'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='hle'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='rtm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Broadwell-noTSX'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Broadwell-noTSX-IBRS'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Broadwell-v1'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='hle'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='rtm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Broadwell-v2'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Broadwell-v3'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='hle'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='rtm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Broadwell-v4'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Cascadelake-Server'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bw'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512cd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512dq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512f'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vl'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vnni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='hle'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pku'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='rtm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Cascadelake-Server-noTSX'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bw'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512cd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512dq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512f'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vl'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vnni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='ibrs-all'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pku'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Cascadelake-Server-v1'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bw'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512cd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512dq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512f'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vl'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vnni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='hle'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pku'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='rtm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Cascadelake-Server-v2'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bw'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512cd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512dq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512f'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vl'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vnni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='hle'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='ibrs-all'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pku'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='rtm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Cascadelake-Server-v3'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bw'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512cd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512dq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512f'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vl'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vnni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='ibrs-all'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pku'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Cascadelake-Server-v4'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bw'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512cd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512dq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512f'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vl'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vnni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='ibrs-all'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pku'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Cascadelake-Server-v5'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bw'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512cd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512dq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512f'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vl'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vnni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='ibrs-all'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pku'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='xsaves'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Cooperlake'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512-bf16'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bw'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512cd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512dq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512f'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vl'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vnni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='hle'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='ibrs-all'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pku'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='rtm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='taa-no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Cooperlake-v1'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512-bf16'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bw'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512cd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512dq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512f'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vl'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vnni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='hle'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='ibrs-all'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pku'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='rtm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='taa-no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Cooperlake-v2'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512-bf16'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bw'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512cd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512dq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512f'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vl'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vnni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='hle'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='ibrs-all'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pku'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='rtm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='taa-no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='xsaves'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Denverton'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='mpx'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Denverton-v1'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='mpx'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Denverton-v2'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Denverton-v3'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='xsaves'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Dhyana-v2'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='xsaves'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='EPYC-Genoa'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='amd-psfd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='auto-ibrs'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512-bf16'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512-vpopcntdq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bitalg'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bw'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512cd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512dq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512f'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512ifma'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vbmi'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vbmi2'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vl'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vnni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='fsrm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='gfni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='la57'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='no-nested-data-bp'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='null-sel-clr-base'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pku'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='stibp-always-on'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='vaes'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='vpclmulqdq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='xsaves'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='EPYC-Genoa-v1'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='amd-psfd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='auto-ibrs'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512-bf16'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512-vpopcntdq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bitalg'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bw'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512cd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512dq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512f'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512ifma'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vbmi'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vbmi2'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vl'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vnni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='fsrm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='gfni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='la57'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='no-nested-data-bp'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='null-sel-clr-base'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pku'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='stibp-always-on'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='vaes'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='vpclmulqdq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='xsaves'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='EPYC-Milan'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='fsrm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pku'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='xsaves'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='EPYC-Milan-v1'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='fsrm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pku'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='xsaves'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='EPYC-Milan-v2'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='amd-psfd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='fsrm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='no-nested-data-bp'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='null-sel-clr-base'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pku'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='stibp-always-on'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='vaes'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='vpclmulqdq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='xsaves'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='EPYC-Rome'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='xsaves'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='EPYC-Rome-v1'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='xsaves'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='EPYC-Rome-v2'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='xsaves'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='EPYC-Rome-v3'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='xsaves'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='EPYC-v3'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='xsaves'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='EPYC-v4'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='xsaves'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='GraniteRapids'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='amx-bf16'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='amx-fp16'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='amx-int8'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='amx-tile'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx-vnni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512-bf16'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512-fp16'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512-vpopcntdq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bitalg'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bw'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512cd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512dq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512f'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512ifma'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vbmi'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vbmi2'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vl'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vnni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='bus-lock-detect'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='fbsdp-no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='fsrc'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='fsrm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='fsrs'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='fzrm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='gfni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='hle'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='ibrs-all'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='la57'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='mcdt-no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pbrsb-no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pku'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='prefetchiti'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='psdp-no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='rtm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='sbdr-ssdp-no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='serialize'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='taa-no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='tsx-ldtrk'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='vaes'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='vpclmulqdq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='xfd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='xsaves'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='GraniteRapids-v1'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='amx-bf16'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='amx-fp16'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='amx-int8'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='amx-tile'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx-vnni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512-bf16'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512-fp16'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512-vpopcntdq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bitalg'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bw'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512cd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512dq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512f'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512ifma'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vbmi'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vbmi2'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vl'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vnni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='bus-lock-detect'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='fbsdp-no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='fsrc'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='fsrm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='fsrs'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='fzrm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='gfni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='hle'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='ibrs-all'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='la57'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='mcdt-no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pbrsb-no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pku'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='prefetchiti'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='psdp-no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='rtm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='sbdr-ssdp-no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='serialize'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='taa-no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='tsx-ldtrk'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='vaes'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='vpclmulqdq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='xfd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='xsaves'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='GraniteRapids-v2'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='amx-bf16'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='amx-fp16'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='amx-int8'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='amx-tile'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx-vnni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx10'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx10-128'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx10-256'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx10-512'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512-bf16'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512-fp16'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512-vpopcntdq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bitalg'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bw'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512cd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512dq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512f'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512ifma'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vbmi'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vbmi2'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vl'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vnni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='bus-lock-detect'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='cldemote'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='fbsdp-no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='fsrc'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='fsrm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='fsrs'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='fzrm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='gfni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='hle'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='ibrs-all'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='la57'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='mcdt-no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='movdir64b'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='movdiri'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pbrsb-no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pku'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='prefetchiti'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='psdp-no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='rtm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='sbdr-ssdp-no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='serialize'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='ss'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='taa-no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='tsx-ldtrk'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='vaes'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='vpclmulqdq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='xfd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='xsaves'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Haswell'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='hle'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='rtm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Haswell-IBRS'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='hle'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='rtm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Haswell-noTSX'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Haswell-noTSX-IBRS'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Haswell-v1'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='hle'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='rtm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Haswell-v2'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Haswell-v3'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='hle'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='rtm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Haswell-v4'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Icelake-Server'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512-vpopcntdq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bitalg'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bw'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512cd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512dq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512f'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vbmi'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vbmi2'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vl'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vnni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='gfni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='hle'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='la57'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pku'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='rtm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='vaes'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='vpclmulqdq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Icelake-Server-noTSX'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512-vpopcntdq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bitalg'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bw'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512cd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512dq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512f'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vbmi'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vbmi2'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vl'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vnni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='gfni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='la57'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pku'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='vaes'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='vpclmulqdq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Icelake-Server-v1'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512-vpopcntdq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bitalg'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bw'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512cd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512dq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512f'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vbmi'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vbmi2'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vl'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vnni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='gfni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='hle'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='la57'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pku'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='rtm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='vaes'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='vpclmulqdq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Icelake-Server-v2'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512-vpopcntdq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bitalg'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bw'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512cd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512dq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512f'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vbmi'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vbmi2'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vl'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vnni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='gfni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='la57'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pku'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='vaes'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='vpclmulqdq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Icelake-Server-v3'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512-vpopcntdq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bitalg'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bw'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512cd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512dq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512f'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vbmi'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vbmi2'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vl'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vnni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='gfni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='ibrs-all'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='la57'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pku'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='taa-no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='vaes'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='vpclmulqdq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Icelake-Server-v4'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512-vpopcntdq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bitalg'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bw'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512cd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512dq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512f'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512ifma'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vbmi'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vbmi2'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vl'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vnni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='fsrm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='gfni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='ibrs-all'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='la57'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pku'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='taa-no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='vaes'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='vpclmulqdq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Icelake-Server-v5'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512-vpopcntdq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bitalg'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bw'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512cd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512dq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512f'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512ifma'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vbmi'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vbmi2'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vl'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vnni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='fsrm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='gfni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='ibrs-all'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='la57'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pku'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='taa-no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='vaes'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='vpclmulqdq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='xsaves'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Icelake-Server-v6'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512-vpopcntdq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bitalg'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bw'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512cd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512dq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512f'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512ifma'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vbmi'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vbmi2'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vl'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vnni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='fsrm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='gfni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='ibrs-all'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='la57'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pku'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='taa-no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='vaes'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='vpclmulqdq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='xsaves'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Icelake-Server-v7'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512-vpopcntdq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bitalg'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bw'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512cd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512dq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512f'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512ifma'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vbmi'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vbmi2'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vl'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vnni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='fsrm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='gfni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='hle'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='ibrs-all'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='la57'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pku'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='rtm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='taa-no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='vaes'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='vpclmulqdq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='xsaves'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='IvyBridge'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='IvyBridge-IBRS'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='IvyBridge-v1'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='IvyBridge-v2'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='KnightsMill'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512-4fmaps'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512-4vnniw'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512-vpopcntdq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512cd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512er'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512f'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512pf'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='ss'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='KnightsMill-v1'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512-4fmaps'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512-4vnniw'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512-vpopcntdq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512cd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512er'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512f'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512pf'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='ss'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Opteron_G4'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='fma4'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='xop'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Opteron_G4-v1'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='fma4'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='xop'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Opteron_G5'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='fma4'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='tbm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='xop'/>
Oct  8 06:04:35 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:04:35] "GET /metrics HTTP/1.1" 200 48341 "" "Prometheus/2.51.0"
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Opteron_G5-v1'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='fma4'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='tbm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='xop'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='SapphireRapids'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='amx-bf16'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='amx-int8'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='amx-tile'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx-vnni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512-bf16'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512-fp16'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512-vpopcntdq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bitalg'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bw'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512cd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512dq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512f'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512ifma'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vbmi'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vbmi2'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vl'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vnni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='bus-lock-detect'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='fsrc'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='fsrm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='fsrs'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='fzrm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='gfni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='hle'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='ibrs-all'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='la57'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pku'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='rtm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='serialize'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='taa-no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='tsx-ldtrk'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='vaes'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='vpclmulqdq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='xfd'/>
Oct  8 06:04:35 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:04:35] "GET /metrics HTTP/1.1" 200 48341 "" "Prometheus/2.51.0"
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='xsaves'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='SapphireRapids-v1'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='amx-bf16'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='amx-int8'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='amx-tile'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx-vnni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512-bf16'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512-fp16'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512-vpopcntdq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bitalg'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bw'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512cd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512dq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512f'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512ifma'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vbmi'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vbmi2'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vl'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vnni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='bus-lock-detect'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='fsrc'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='fsrm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='fsrs'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='fzrm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='gfni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='hle'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='ibrs-all'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='la57'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pku'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='rtm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='serialize'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='taa-no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='tsx-ldtrk'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='vaes'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='vpclmulqdq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='xfd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='xsaves'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='SapphireRapids-v2'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='amx-bf16'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='amx-int8'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='amx-tile'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx-vnni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512-bf16'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512-fp16'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512-vpopcntdq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bitalg'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bw'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512cd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512dq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512f'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512ifma'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vbmi'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vbmi2'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vl'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vnni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='bus-lock-detect'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='fbsdp-no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='fsrc'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='fsrm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='fsrs'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='fzrm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='gfni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='hle'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='ibrs-all'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='la57'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pku'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='psdp-no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='rtm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='sbdr-ssdp-no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='serialize'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='taa-no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='tsx-ldtrk'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='vaes'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='vpclmulqdq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='xfd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='xsaves'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='SapphireRapids-v3'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='amx-bf16'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='amx-int8'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='amx-tile'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx-vnni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512-bf16'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512-fp16'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512-vpopcntdq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bitalg'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bw'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512cd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512dq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512f'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512ifma'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vbmi'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vbmi2'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vl'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vnni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='bus-lock-detect'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='cldemote'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='fbsdp-no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='fsrc'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='fsrm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='fsrs'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='fzrm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='gfni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='hle'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='ibrs-all'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='la57'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='movdir64b'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='movdiri'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pku'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='psdp-no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='rtm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='sbdr-ssdp-no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='serialize'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='ss'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='taa-no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='tsx-ldtrk'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='vaes'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='vpclmulqdq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='xfd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='xsaves'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='SierraForest'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx-ifma'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx-ne-convert'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx-vnni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx-vnni-int8'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='bus-lock-detect'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='cmpccxadd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='fbsdp-no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='fsrm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='fsrs'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='gfni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='ibrs-all'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='mcdt-no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pbrsb-no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pku'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='psdp-no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='sbdr-ssdp-no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='serialize'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='vaes'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='vpclmulqdq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='xsaves'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='SierraForest-v1'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx-ifma'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx-ne-convert'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx-vnni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx-vnni-int8'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='bus-lock-detect'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='cmpccxadd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='fbsdp-no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='fsrm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='fsrs'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='gfni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='ibrs-all'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='mcdt-no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pbrsb-no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pku'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='psdp-no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='sbdr-ssdp-no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='serialize'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='vaes'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='vpclmulqdq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='xsaves'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Skylake-Client'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='hle'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='rtm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Skylake-Client-IBRS'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='hle'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='rtm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Skylake-Client-v1'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='hle'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='rtm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Skylake-Client-v2'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='hle'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='rtm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Skylake-Client-v3'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Skylake-Client-v4'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='xsaves'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Skylake-Server'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bw'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512cd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512dq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512f'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vl'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='hle'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pku'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='rtm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Skylake-Server-IBRS'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bw'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512cd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512dq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512f'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vl'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='hle'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pku'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='rtm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bw'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512cd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512dq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512f'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vl'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pku'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Skylake-Server-v1'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bw'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512cd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512dq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512f'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vl'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='hle'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pku'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='rtm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Skylake-Server-v2'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bw'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512cd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512dq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512f'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vl'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='hle'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pku'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='rtm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Skylake-Server-v3'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bw'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512cd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512dq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512f'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vl'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pku'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Skylake-Server-v4'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bw'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512cd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512dq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512f'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vl'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pku'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Skylake-Server-v5'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bw'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512cd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512dq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512f'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vl'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pku'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='xsaves'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Snowridge'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='cldemote'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='core-capability'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='gfni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='movdir64b'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='movdiri'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='mpx'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='split-lock-detect'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Snowridge-v1'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='cldemote'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='core-capability'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='gfni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='movdir64b'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='movdiri'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='mpx'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='split-lock-detect'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Snowridge-v2'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='cldemote'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='core-capability'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='gfni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='movdir64b'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='movdiri'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='split-lock-detect'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Snowridge-v3'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='cldemote'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='core-capability'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='gfni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='movdir64b'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='movdiri'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='split-lock-detect'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='xsaves'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Snowridge-v4'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='cldemote'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='gfni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='movdir64b'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='movdiri'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='xsaves'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='athlon'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='3dnow'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='3dnowext'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='athlon-v1'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='3dnow'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='3dnowext'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='core2duo'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='ss'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='core2duo-v1'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='ss'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='coreduo'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='ss'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='coreduo-v1'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='ss'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='n270'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='ss'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='n270-v1'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='ss'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='phenom'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='3dnow'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='3dnowext'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='phenom-v1'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='3dnow'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='3dnowext'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    </mode>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:  </cpu>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:  <memoryBacking supported='yes'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    <enum name='sourceType'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <value>file</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <value>anonymous</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <value>memfd</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    </enum>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:  </memoryBacking>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:  <devices>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    <disk supported='yes'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <enum name='diskDevice'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>disk</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>cdrom</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>floppy</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>lun</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </enum>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <enum name='bus'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>ide</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>fdc</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>scsi</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>virtio</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>usb</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>sata</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </enum>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <enum name='model'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>virtio</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>virtio-transitional</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>virtio-non-transitional</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </enum>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    </disk>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    <graphics supported='yes'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <enum name='type'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>vnc</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>egl-headless</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>dbus</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </enum>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    </graphics>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    <video supported='yes'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <enum name='modelType'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>vga</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>cirrus</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>virtio</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>none</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>bochs</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>ramfb</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </enum>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    </video>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    <hostdev supported='yes'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <enum name='mode'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>subsystem</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </enum>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <enum name='startupPolicy'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>default</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>mandatory</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>requisite</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>optional</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </enum>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <enum name='subsysType'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>usb</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>pci</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>scsi</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </enum>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <enum name='capsType'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <enum name='pciBackend'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    </hostdev>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    <rng supported='yes'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <enum name='model'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>virtio</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>virtio-transitional</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>virtio-non-transitional</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </enum>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <enum name='backendModel'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>random</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>egd</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>builtin</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </enum>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    </rng>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    <filesystem supported='yes'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <enum name='driverType'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>path</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>handle</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>virtiofs</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </enum>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    </filesystem>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    <tpm supported='yes'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <enum name='model'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>tpm-tis</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>tpm-crb</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </enum>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <enum name='backendModel'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>emulator</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>external</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </enum>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <enum name='backendVersion'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>2.0</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </enum>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    </tpm>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    <redirdev supported='yes'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <enum name='bus'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>usb</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </enum>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    </redirdev>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    <channel supported='yes'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <enum name='type'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>pty</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>unix</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </enum>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    </channel>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    <crypto supported='yes'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <enum name='model'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <enum name='type'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>qemu</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </enum>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <enum name='backendModel'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>builtin</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </enum>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    </crypto>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    <interface supported='yes'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <enum name='backendType'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>default</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>passt</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </enum>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    </interface>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    <panic supported='yes'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <enum name='model'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>isa</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>hyperv</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </enum>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    </panic>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:  </devices>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:  <features>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    <gic supported='no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    <vmcoreinfo supported='yes'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    <genid supported='yes'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    <backingStoreInput supported='yes'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    <backup supported='yes'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    <async-teardown supported='yes'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    <ps2 supported='yes'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    <sev supported='no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    <sgx supported='no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    <hyperv supported='yes'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <enum name='features'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>relaxed</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>vapic</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>spinlocks</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>vpindex</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>runtime</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>synic</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>stimer</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>reset</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>vendor_id</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>frequencies</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>reenlightenment</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>tlbflush</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>ipi</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>avic</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>emsr_bitmap</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>xmm_input</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </enum>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    </hyperv>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    <launchSecurity supported='no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:  </features>
Oct  8 06:04:35 np0005475493 nova_compute[262220]: </domainCapabilities>
Oct  8 06:04:35 np0005475493 nova_compute[262220]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.670 2 DEBUG nova.virt.libvirt.host [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.674 2 DEBUG nova.virt.libvirt.host [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Oct  8 06:04:35 np0005475493 nova_compute[262220]: <domainCapabilities>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:  <path>/usr/libexec/qemu-kvm</path>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:  <domain>kvm</domain>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:  <machine>pc-q35-rhel9.6.0</machine>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:  <arch>x86_64</arch>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:  <vcpu max='4096'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:  <iothreads supported='yes'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:  <os supported='yes'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    <enum name='firmware'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <value>efi</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    </enum>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    <loader supported='yes'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <enum name='type'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>rom</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>pflash</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </enum>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <enum name='readonly'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>yes</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>no</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </enum>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <enum name='secure'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>yes</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>no</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </enum>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    </loader>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:  </os>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:  <cpu>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    <mode name='host-passthrough' supported='yes'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <enum name='hostPassthroughMigratable'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>on</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>off</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </enum>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    </mode>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    <mode name='maximum' supported='yes'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <enum name='maximumMigratable'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>on</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>off</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </enum>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    </mode>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    <mode name='host-model' supported='yes'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model fallback='forbid'>EPYC-Rome</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <vendor>AMD</vendor>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <maxphysaddr mode='passthrough' limit='40'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <feature policy='require' name='x2apic'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <feature policy='require' name='tsc-deadline'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <feature policy='require' name='hypervisor'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <feature policy='require' name='tsc_adjust'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <feature policy='require' name='spec-ctrl'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <feature policy='require' name='stibp'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <feature policy='require' name='arch-capabilities'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <feature policy='require' name='ssbd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <feature policy='require' name='cmp_legacy'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <feature policy='require' name='overflow-recov'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <feature policy='require' name='succor'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <feature policy='require' name='ibrs'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <feature policy='require' name='amd-ssbd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <feature policy='require' name='virt-ssbd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <feature policy='require' name='lbrv'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <feature policy='require' name='tsc-scale'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <feature policy='require' name='vmcb-clean'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <feature policy='require' name='flushbyasid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <feature policy='require' name='pause-filter'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <feature policy='require' name='pfthreshold'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <feature policy='require' name='svme-addr-chk'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <feature policy='require' name='lfence-always-serializing'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <feature policy='require' name='rdctl-no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <feature policy='require' name='mds-no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <feature policy='require' name='pschange-mc-no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <feature policy='require' name='gds-no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <feature policy='require' name='rfds-no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <feature policy='disable' name='xsaves'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    </mode>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    <mode name='custom' supported='yes'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Broadwell'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='hle'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='rtm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Broadwell-IBRS'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='hle'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='rtm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Broadwell-noTSX'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Broadwell-noTSX-IBRS'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Broadwell-v1'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='hle'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='rtm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Broadwell-v2'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Broadwell-v3'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='hle'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='rtm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Broadwell-v4'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Cascadelake-Server'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bw'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512cd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512dq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512f'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vl'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vnni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='hle'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pku'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='rtm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Cascadelake-Server-noTSX'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bw'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512cd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512dq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512f'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vl'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vnni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='ibrs-all'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pku'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Cascadelake-Server-v1'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bw'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512cd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512dq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512f'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vl'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vnni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='hle'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pku'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='rtm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Cascadelake-Server-v2'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bw'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512cd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512dq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512f'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vl'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vnni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='hle'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='ibrs-all'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pku'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='rtm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Cascadelake-Server-v3'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bw'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512cd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512dq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512f'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vl'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vnni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='ibrs-all'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pku'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Cascadelake-Server-v4'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bw'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512cd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512dq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512f'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vl'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vnni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='ibrs-all'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pku'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Cascadelake-Server-v5'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bw'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512cd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512dq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512f'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vl'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vnni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='ibrs-all'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pku'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='xsaves'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Cooperlake'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512-bf16'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bw'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512cd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512dq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512f'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vl'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vnni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='hle'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='ibrs-all'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pku'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='rtm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='taa-no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Cooperlake-v1'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512-bf16'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bw'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512cd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512dq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512f'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vl'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vnni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='hle'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='ibrs-all'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pku'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='rtm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='taa-no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Cooperlake-v2'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512-bf16'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bw'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512cd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512dq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512f'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vl'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vnni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='hle'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='ibrs-all'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pku'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='rtm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='taa-no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='xsaves'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Denverton'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='mpx'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Denverton-v1'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='mpx'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Denverton-v2'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Denverton-v3'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='xsaves'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Dhyana-v2'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='xsaves'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='EPYC-Genoa'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='amd-psfd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='auto-ibrs'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512-bf16'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512-vpopcntdq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bitalg'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bw'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512cd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512dq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512f'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512ifma'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vbmi'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vbmi2'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vl'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vnni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='fsrm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='gfni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='la57'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='no-nested-data-bp'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='null-sel-clr-base'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pku'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='stibp-always-on'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='vaes'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='vpclmulqdq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='xsaves'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='EPYC-Genoa-v1'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='amd-psfd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='auto-ibrs'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512-bf16'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512-vpopcntdq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bitalg'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bw'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512cd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512dq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512f'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512ifma'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vbmi'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vbmi2'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vl'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vnni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='fsrm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='gfni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='la57'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='no-nested-data-bp'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='null-sel-clr-base'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pku'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='stibp-always-on'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='vaes'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='vpclmulqdq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='xsaves'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='EPYC-Milan'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='fsrm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pku'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='xsaves'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='EPYC-Milan-v1'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='fsrm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pku'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='xsaves'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='EPYC-Milan-v2'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='amd-psfd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='fsrm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='no-nested-data-bp'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='null-sel-clr-base'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pku'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='stibp-always-on'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='vaes'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='vpclmulqdq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='xsaves'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='EPYC-Rome'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='xsaves'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='EPYC-Rome-v1'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='xsaves'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='EPYC-Rome-v2'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='xsaves'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='EPYC-Rome-v3'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='xsaves'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='EPYC-v3'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='xsaves'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='EPYC-v4'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='xsaves'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='GraniteRapids'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='amx-bf16'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='amx-fp16'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='amx-int8'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='amx-tile'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx-vnni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512-bf16'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512-fp16'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512-vpopcntdq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bitalg'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bw'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512cd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512dq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512f'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512ifma'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vbmi'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vbmi2'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vl'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vnni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='bus-lock-detect'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='fbsdp-no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='fsrc'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='fsrm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='fsrs'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='fzrm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='gfni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='hle'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='ibrs-all'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='la57'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='mcdt-no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pbrsb-no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pku'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='prefetchiti'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='psdp-no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='rtm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='sbdr-ssdp-no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='serialize'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='taa-no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='tsx-ldtrk'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='vaes'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='vpclmulqdq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='xfd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='xsaves'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='GraniteRapids-v1'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='amx-bf16'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='amx-fp16'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='amx-int8'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='amx-tile'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx-vnni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512-bf16'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512-fp16'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512-vpopcntdq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bitalg'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bw'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512cd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512dq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512f'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512ifma'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vbmi'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vbmi2'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vl'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vnni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='bus-lock-detect'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='fbsdp-no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='fsrc'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='fsrm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='fsrs'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='fzrm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='gfni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='hle'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='ibrs-all'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='la57'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='mcdt-no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pbrsb-no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pku'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='prefetchiti'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='psdp-no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='rtm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='sbdr-ssdp-no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='serialize'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='taa-no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='tsx-ldtrk'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='vaes'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='vpclmulqdq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='xfd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='xsaves'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='GraniteRapids-v2'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='amx-bf16'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='amx-fp16'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='amx-int8'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='amx-tile'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx-vnni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx10'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx10-128'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx10-256'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx10-512'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512-bf16'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512-fp16'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512-vpopcntdq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bitalg'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bw'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512cd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512dq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512f'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512ifma'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vbmi'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vbmi2'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vl'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vnni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='bus-lock-detect'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='cldemote'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='fbsdp-no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='fsrc'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='fsrm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='fsrs'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='fzrm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='gfni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='hle'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='ibrs-all'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='la57'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='mcdt-no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='movdir64b'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='movdiri'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pbrsb-no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pku'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='prefetchiti'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='psdp-no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='rtm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='sbdr-ssdp-no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='serialize'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='ss'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='taa-no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='tsx-ldtrk'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='vaes'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='vpclmulqdq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='xfd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='xsaves'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Haswell'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='hle'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='rtm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Haswell-IBRS'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='hle'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='rtm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Haswell-noTSX'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Haswell-noTSX-IBRS'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Haswell-v1'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='hle'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='rtm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Haswell-v2'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Haswell-v3'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='hle'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='rtm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Haswell-v4'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Icelake-Server'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512-vpopcntdq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bitalg'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bw'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512cd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512dq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512f'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vbmi'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vbmi2'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vl'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vnni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='gfni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='hle'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='la57'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pku'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='rtm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='vaes'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='vpclmulqdq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Icelake-Server-noTSX'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512-vpopcntdq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bitalg'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bw'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512cd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512dq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512f'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vbmi'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vbmi2'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vl'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vnni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='gfni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='la57'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pku'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='vaes'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='vpclmulqdq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Icelake-Server-v1'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512-vpopcntdq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bitalg'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bw'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512cd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512dq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512f'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vbmi'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vbmi2'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vl'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vnni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='gfni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='hle'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='la57'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pku'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='rtm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='vaes'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='vpclmulqdq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Icelake-Server-v2'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512-vpopcntdq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bitalg'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bw'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512cd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512dq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512f'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vbmi'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vbmi2'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vl'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vnni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='gfni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='la57'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pku'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='vaes'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='vpclmulqdq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Icelake-Server-v3'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512-vpopcntdq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bitalg'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bw'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512cd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512dq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512f'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vbmi'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vbmi2'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vl'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vnni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='gfni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='ibrs-all'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='la57'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pku'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='taa-no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='vaes'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='vpclmulqdq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Icelake-Server-v4'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512-vpopcntdq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bitalg'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bw'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512cd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512dq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512f'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512ifma'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vbmi'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vbmi2'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vl'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vnni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='fsrm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='gfni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='ibrs-all'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='la57'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pku'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='taa-no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='vaes'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='vpclmulqdq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Icelake-Server-v5'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512-vpopcntdq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bitalg'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bw'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512cd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512dq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512f'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512ifma'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vbmi'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vbmi2'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vl'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vnni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='fsrm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='gfni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='ibrs-all'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='la57'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pku'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='taa-no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='vaes'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='vpclmulqdq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='xsaves'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Icelake-Server-v6'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512-vpopcntdq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bitalg'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bw'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512cd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512dq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512f'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512ifma'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vbmi'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vbmi2'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vl'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vnni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='fsrm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='gfni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='ibrs-all'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='la57'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pku'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='taa-no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='vaes'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='vpclmulqdq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='xsaves'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Icelake-Server-v7'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512-vpopcntdq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bitalg'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bw'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512cd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512dq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512f'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512ifma'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vbmi'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vbmi2'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vl'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vnni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='fsrm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='gfni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='hle'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='ibrs-all'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='la57'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pku'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='rtm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='taa-no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='vaes'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='vpclmulqdq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='xsaves'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='IvyBridge'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='IvyBridge-IBRS'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='IvyBridge-v1'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='IvyBridge-v2'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='KnightsMill'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512-4fmaps'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512-4vnniw'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512-vpopcntdq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512cd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512er'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512f'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512pf'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='ss'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='KnightsMill-v1'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512-4fmaps'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512-4vnniw'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512-vpopcntdq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512cd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512er'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512f'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512pf'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='ss'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Opteron_G4'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='fma4'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='xop'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Opteron_G4-v1'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='fma4'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='xop'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Opteron_G5'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='fma4'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='tbm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='xop'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Opteron_G5-v1'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='fma4'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='tbm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='xop'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='SapphireRapids'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='amx-bf16'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='amx-int8'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='amx-tile'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx-vnni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512-bf16'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512-fp16'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512-vpopcntdq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bitalg'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bw'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512cd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512dq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512f'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512ifma'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vbmi'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vbmi2'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vl'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vnni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='bus-lock-detect'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='fsrc'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='fsrm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='fsrs'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='fzrm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='gfni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='hle'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='ibrs-all'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='la57'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pku'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='rtm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='serialize'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='taa-no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='tsx-ldtrk'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='vaes'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='vpclmulqdq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='xfd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='xsaves'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='SapphireRapids-v1'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='amx-bf16'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='amx-int8'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='amx-tile'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx-vnni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512-bf16'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512-fp16'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512-vpopcntdq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bitalg'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bw'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512cd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512dq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512f'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512ifma'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vbmi'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vbmi2'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vl'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vnni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='bus-lock-detect'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='fsrc'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='fsrm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='fsrs'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='fzrm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='gfni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='hle'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='ibrs-all'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='la57'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pku'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='rtm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='serialize'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='taa-no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='tsx-ldtrk'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='vaes'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='vpclmulqdq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='xfd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='xsaves'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='SapphireRapids-v2'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='amx-bf16'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='amx-int8'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='amx-tile'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx-vnni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512-bf16'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512-fp16'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512-vpopcntdq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bitalg'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bw'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512cd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512dq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512f'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512ifma'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vbmi'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vbmi2'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vl'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vnni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='bus-lock-detect'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='fbsdp-no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='fsrc'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='fsrm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='fsrs'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='fzrm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='gfni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='hle'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='ibrs-all'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='la57'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pku'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='psdp-no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='rtm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='sbdr-ssdp-no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='serialize'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='taa-no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='tsx-ldtrk'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='vaes'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='vpclmulqdq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='xfd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='xsaves'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='SapphireRapids-v3'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='amx-bf16'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='amx-int8'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='amx-tile'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx-vnni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512-bf16'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512-fp16'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512-vpopcntdq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bitalg'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bw'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512cd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512dq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512f'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512ifma'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vbmi'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vbmi2'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vl'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vnni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='bus-lock-detect'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='cldemote'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='fbsdp-no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='fsrc'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='fsrm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='fsrs'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='fzrm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='gfni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='hle'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='ibrs-all'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='la57'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='movdir64b'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='movdiri'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pku'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='psdp-no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='rtm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='sbdr-ssdp-no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='serialize'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='ss'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='taa-no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='tsx-ldtrk'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='vaes'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='vpclmulqdq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='xfd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='xsaves'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='SierraForest'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx-ifma'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx-ne-convert'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx-vnni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx-vnni-int8'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='bus-lock-detect'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='cmpccxadd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='fbsdp-no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='fsrm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='fsrs'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='gfni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='ibrs-all'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='mcdt-no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pbrsb-no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pku'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='psdp-no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='sbdr-ssdp-no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='serialize'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='vaes'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='vpclmulqdq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='xsaves'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='SierraForest-v1'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx-ifma'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx-ne-convert'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx-vnni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx-vnni-int8'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='bus-lock-detect'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='cmpccxadd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='fbsdp-no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='fsrm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='fsrs'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='gfni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='ibrs-all'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='mcdt-no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pbrsb-no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pku'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='psdp-no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='sbdr-ssdp-no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='serialize'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='vaes'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='vpclmulqdq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='xsaves'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Skylake-Client'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='hle'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='rtm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Skylake-Client-IBRS'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='hle'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='rtm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Skylake-Client-v1'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='hle'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='rtm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Skylake-Client-v2'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='hle'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='rtm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Skylake-Client-v3'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Skylake-Client-v4'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='xsaves'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Skylake-Server'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bw'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512cd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512dq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512f'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vl'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='hle'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pku'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='rtm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Skylake-Server-IBRS'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bw'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512cd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512dq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512f'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vl'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='hle'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pku'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='rtm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bw'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512cd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512dq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512f'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vl'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pku'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Skylake-Server-v1'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bw'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512cd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512dq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512f'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vl'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='hle'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pku'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='rtm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Skylake-Server-v2'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bw'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512cd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512dq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512f'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vl'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='hle'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pku'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='rtm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Skylake-Server-v3'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bw'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512cd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512dq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512f'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vl'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pku'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Skylake-Server-v4'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bw'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512cd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512dq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512f'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vl'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pku'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Skylake-Server-v5'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bw'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512cd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512dq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512f'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vl'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pku'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='xsaves'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Snowridge'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='cldemote'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='core-capability'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='gfni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='movdir64b'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='movdiri'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='mpx'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='split-lock-detect'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Snowridge-v1'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='cldemote'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='core-capability'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='gfni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='movdir64b'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='movdiri'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='mpx'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='split-lock-detect'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Snowridge-v2'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='cldemote'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='core-capability'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='gfni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='movdir64b'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='movdiri'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='split-lock-detect'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Snowridge-v3'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='cldemote'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='core-capability'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='gfni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='movdir64b'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='movdiri'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='split-lock-detect'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='xsaves'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Snowridge-v4'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='cldemote'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='gfni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='movdir64b'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='movdiri'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='xsaves'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='athlon'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='3dnow'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='3dnowext'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='athlon-v1'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='3dnow'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='3dnowext'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='core2duo'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='ss'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='core2duo-v1'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='ss'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='coreduo'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='ss'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='coreduo-v1'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='ss'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='n270'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='ss'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='n270-v1'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='ss'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='phenom'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='3dnow'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='3dnowext'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='phenom-v1'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='3dnow'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='3dnowext'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    </mode>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:  </cpu>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:  <memoryBacking supported='yes'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    <enum name='sourceType'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <value>file</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <value>anonymous</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <value>memfd</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    </enum>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:  </memoryBacking>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:  <devices>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    <disk supported='yes'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <enum name='diskDevice'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>disk</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>cdrom</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>floppy</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>lun</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </enum>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <enum name='bus'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>fdc</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>scsi</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>virtio</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>usb</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>sata</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </enum>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <enum name='model'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>virtio</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>virtio-transitional</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>virtio-non-transitional</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </enum>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    </disk>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    <graphics supported='yes'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <enum name='type'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>vnc</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>egl-headless</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>dbus</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </enum>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    </graphics>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    <video supported='yes'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <enum name='modelType'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>vga</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>cirrus</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>virtio</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>none</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>bochs</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>ramfb</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </enum>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    </video>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    <hostdev supported='yes'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <enum name='mode'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>subsystem</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </enum>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <enum name='startupPolicy'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>default</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>mandatory</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>requisite</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>optional</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </enum>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <enum name='subsysType'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>usb</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>pci</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>scsi</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </enum>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <enum name='capsType'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <enum name='pciBackend'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    </hostdev>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    <rng supported='yes'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <enum name='model'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>virtio</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>virtio-transitional</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>virtio-non-transitional</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </enum>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <enum name='backendModel'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>random</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>egd</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>builtin</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </enum>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    </rng>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    <filesystem supported='yes'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <enum name='driverType'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>path</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>handle</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>virtiofs</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </enum>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    </filesystem>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    <tpm supported='yes'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <enum name='model'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>tpm-tis</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>tpm-crb</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </enum>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <enum name='backendModel'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>emulator</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>external</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </enum>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <enum name='backendVersion'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>2.0</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </enum>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    </tpm>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    <redirdev supported='yes'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <enum name='bus'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>usb</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </enum>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    </redirdev>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    <channel supported='yes'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <enum name='type'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>pty</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>unix</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </enum>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    </channel>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    <crypto supported='yes'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <enum name='model'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <enum name='type'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>qemu</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </enum>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <enum name='backendModel'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>builtin</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </enum>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    </crypto>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    <interface supported='yes'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <enum name='backendType'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>default</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>passt</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </enum>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    </interface>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    <panic supported='yes'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <enum name='model'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>isa</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>hyperv</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </enum>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    </panic>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:  </devices>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:  <features>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    <gic supported='no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    <vmcoreinfo supported='yes'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    <genid supported='yes'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    <backingStoreInput supported='yes'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    <backup supported='yes'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    <async-teardown supported='yes'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    <ps2 supported='yes'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    <sev supported='no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    <sgx supported='no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    <hyperv supported='yes'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <enum name='features'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>relaxed</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>vapic</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>spinlocks</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>vpindex</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>runtime</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>synic</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>stimer</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>reset</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>vendor_id</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>frequencies</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>reenlightenment</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>tlbflush</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>ipi</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>avic</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>emsr_bitmap</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>xmm_input</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </enum>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    </hyperv>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    <launchSecurity supported='no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:  </features>
Oct  8 06:04:35 np0005475493 nova_compute[262220]: </domainCapabilities>
Oct  8 06:04:35 np0005475493 nova_compute[262220]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.756 2 DEBUG nova.virt.libvirt.host [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Oct  8 06:04:35 np0005475493 nova_compute[262220]: <domainCapabilities>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:  <path>/usr/libexec/qemu-kvm</path>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:  <domain>kvm</domain>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:  <machine>pc-i440fx-rhel7.6.0</machine>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:  <arch>x86_64</arch>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:  <vcpu max='240'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:  <iothreads supported='yes'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:  <os supported='yes'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    <enum name='firmware'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    <loader supported='yes'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <enum name='type'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>rom</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>pflash</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </enum>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <enum name='readonly'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>yes</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>no</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </enum>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <enum name='secure'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>no</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </enum>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    </loader>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:  </os>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:  <cpu>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    <mode name='host-passthrough' supported='yes'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <enum name='hostPassthroughMigratable'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>on</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>off</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </enum>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    </mode>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    <mode name='maximum' supported='yes'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <enum name='maximumMigratable'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>on</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>off</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </enum>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    </mode>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    <mode name='host-model' supported='yes'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model fallback='forbid'>EPYC-Rome</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <vendor>AMD</vendor>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <maxphysaddr mode='passthrough' limit='40'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <feature policy='require' name='x2apic'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <feature policy='require' name='tsc-deadline'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <feature policy='require' name='hypervisor'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <feature policy='require' name='tsc_adjust'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <feature policy='require' name='spec-ctrl'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <feature policy='require' name='stibp'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <feature policy='require' name='arch-capabilities'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <feature policy='require' name='ssbd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <feature policy='require' name='cmp_legacy'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <feature policy='require' name='overflow-recov'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <feature policy='require' name='succor'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <feature policy='require' name='ibrs'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <feature policy='require' name='amd-ssbd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <feature policy='require' name='virt-ssbd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <feature policy='require' name='lbrv'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <feature policy='require' name='tsc-scale'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <feature policy='require' name='vmcb-clean'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <feature policy='require' name='flushbyasid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <feature policy='require' name='pause-filter'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <feature policy='require' name='pfthreshold'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <feature policy='require' name='svme-addr-chk'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <feature policy='require' name='lfence-always-serializing'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <feature policy='require' name='rdctl-no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <feature policy='require' name='mds-no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <feature policy='require' name='pschange-mc-no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <feature policy='require' name='gds-no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <feature policy='require' name='rfds-no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <feature policy='disable' name='xsaves'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    </mode>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    <mode name='custom' supported='yes'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Broadwell'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='hle'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='rtm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Broadwell-IBRS'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='hle'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='rtm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Broadwell-noTSX'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Broadwell-noTSX-IBRS'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Broadwell-v1'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='hle'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='rtm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Broadwell-v2'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Broadwell-v3'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='hle'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='rtm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Broadwell-v4'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Cascadelake-Server'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bw'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512cd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512dq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512f'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vl'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vnni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='hle'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pku'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='rtm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Cascadelake-Server-noTSX'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bw'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512cd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512dq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512f'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vl'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vnni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='ibrs-all'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pku'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Cascadelake-Server-v1'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bw'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512cd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512dq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512f'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vl'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vnni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='hle'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pku'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='rtm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Cascadelake-Server-v2'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bw'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512cd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512dq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512f'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vl'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vnni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='hle'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='ibrs-all'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pku'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='rtm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Cascadelake-Server-v3'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bw'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512cd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512dq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512f'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vl'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vnni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='ibrs-all'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pku'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Cascadelake-Server-v4'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bw'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512cd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512dq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512f'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vl'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vnni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='ibrs-all'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pku'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Cascadelake-Server-v5'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bw'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512cd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512dq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512f'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vl'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vnni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='ibrs-all'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pku'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='xsaves'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Cooperlake'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512-bf16'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bw'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512cd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512dq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512f'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vl'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vnni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='hle'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='ibrs-all'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pku'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='rtm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='taa-no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Cooperlake-v1'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512-bf16'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bw'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512cd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512dq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512f'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vl'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vnni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='hle'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='ibrs-all'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pku'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='rtm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='taa-no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Cooperlake-v2'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512-bf16'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bw'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512cd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512dq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512f'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vl'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vnni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='hle'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='ibrs-all'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pku'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='rtm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='taa-no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='xsaves'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Denverton'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='mpx'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Denverton-v1'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='mpx'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Denverton-v2'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Denverton-v3'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='xsaves'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Dhyana-v2'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='xsaves'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='EPYC-Genoa'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='amd-psfd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='auto-ibrs'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512-bf16'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512-vpopcntdq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bitalg'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bw'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512cd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512dq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512f'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512ifma'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vbmi'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vbmi2'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vl'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vnni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='fsrm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='gfni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='la57'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='no-nested-data-bp'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='null-sel-clr-base'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pku'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='stibp-always-on'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='vaes'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='vpclmulqdq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='xsaves'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='EPYC-Genoa-v1'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='amd-psfd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='auto-ibrs'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512-bf16'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512-vpopcntdq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bitalg'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bw'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512cd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512dq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512f'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512ifma'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vbmi'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vbmi2'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vl'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vnni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='fsrm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='gfni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='la57'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='no-nested-data-bp'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='null-sel-clr-base'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pku'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='stibp-always-on'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='vaes'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='vpclmulqdq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='xsaves'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='EPYC-Milan'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='fsrm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pku'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='xsaves'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='EPYC-Milan-v1'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='fsrm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pku'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='xsaves'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='EPYC-Milan-v2'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='amd-psfd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='fsrm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='no-nested-data-bp'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='null-sel-clr-base'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pku'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='stibp-always-on'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='vaes'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='vpclmulqdq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='xsaves'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='EPYC-Rome'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='xsaves'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='EPYC-Rome-v1'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='xsaves'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='EPYC-Rome-v2'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='xsaves'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='EPYC-Rome-v3'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='xsaves'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='EPYC-v3'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='xsaves'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='EPYC-v4'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='xsaves'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='GraniteRapids'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='amx-bf16'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='amx-fp16'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='amx-int8'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='amx-tile'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx-vnni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512-bf16'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512-fp16'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512-vpopcntdq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bitalg'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bw'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512cd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512dq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512f'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512ifma'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vbmi'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vbmi2'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vl'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vnni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='bus-lock-detect'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='fbsdp-no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='fsrc'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='fsrm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='fsrs'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='fzrm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='gfni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='hle'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='ibrs-all'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='la57'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='mcdt-no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pbrsb-no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pku'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='prefetchiti'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='psdp-no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='rtm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='sbdr-ssdp-no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='serialize'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='taa-no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='tsx-ldtrk'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='vaes'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='vpclmulqdq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='xfd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='xsaves'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='GraniteRapids-v1'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='amx-bf16'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='amx-fp16'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='amx-int8'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='amx-tile'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx-vnni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512-bf16'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512-fp16'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512-vpopcntdq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bitalg'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bw'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512cd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512dq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512f'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512ifma'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vbmi'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vbmi2'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vl'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vnni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='bus-lock-detect'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='fbsdp-no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='fsrc'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='fsrm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='fsrs'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='fzrm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='gfni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='hle'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='ibrs-all'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='la57'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='mcdt-no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pbrsb-no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pku'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='prefetchiti'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='psdp-no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='rtm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='sbdr-ssdp-no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='serialize'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='taa-no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='tsx-ldtrk'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='vaes'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='vpclmulqdq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='xfd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='xsaves'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='GraniteRapids-v2'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='amx-bf16'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='amx-fp16'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='amx-int8'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='amx-tile'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx-vnni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx10'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx10-128'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx10-256'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx10-512'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512-bf16'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512-fp16'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512-vpopcntdq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bitalg'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bw'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512cd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512dq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512f'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512ifma'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vbmi'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vbmi2'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vl'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vnni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='bus-lock-detect'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='cldemote'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='fbsdp-no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='fsrc'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='fsrm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='fsrs'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='fzrm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='gfni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='hle'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='ibrs-all'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='la57'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='mcdt-no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='movdir64b'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='movdiri'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pbrsb-no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pku'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='prefetchiti'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='psdp-no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='rtm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='sbdr-ssdp-no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='serialize'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='ss'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='taa-no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='tsx-ldtrk'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='vaes'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='vpclmulqdq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='xfd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='xsaves'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Haswell'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='hle'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='rtm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Haswell-IBRS'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='hle'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='rtm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Haswell-noTSX'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Haswell-noTSX-IBRS'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Haswell-v1'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='hle'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='rtm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Haswell-v2'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Haswell-v3'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='hle'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='rtm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Haswell-v4'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Icelake-Server'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512-vpopcntdq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bitalg'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bw'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512cd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512dq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512f'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vbmi'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vbmi2'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vl'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vnni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='gfni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='hle'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='la57'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pku'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='rtm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='vaes'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='vpclmulqdq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Icelake-Server-noTSX'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512-vpopcntdq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bitalg'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bw'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512cd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512dq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512f'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vbmi'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vbmi2'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vl'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vnni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='gfni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='la57'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pku'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='vaes'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='vpclmulqdq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Icelake-Server-v1'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512-vpopcntdq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bitalg'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bw'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512cd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512dq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512f'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vbmi'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vbmi2'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vl'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vnni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='gfni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='hle'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='la57'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pku'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='rtm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='vaes'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='vpclmulqdq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Icelake-Server-v2'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512-vpopcntdq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bitalg'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bw'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512cd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512dq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512f'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vbmi'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vbmi2'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vl'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vnni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='gfni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='la57'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pku'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='vaes'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='vpclmulqdq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Icelake-Server-v3'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512-vpopcntdq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bitalg'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bw'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512cd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512dq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512f'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vbmi'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vbmi2'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vl'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vnni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='gfni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='ibrs-all'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='la57'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pku'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='taa-no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='vaes'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='vpclmulqdq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Icelake-Server-v4'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512-vpopcntdq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bitalg'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bw'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512cd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512dq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512f'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512ifma'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vbmi'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vbmi2'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vl'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vnni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='fsrm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='gfni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='ibrs-all'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='la57'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pku'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='taa-no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='vaes'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='vpclmulqdq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Icelake-Server-v5'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512-vpopcntdq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bitalg'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bw'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512cd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512dq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512f'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512ifma'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vbmi'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vbmi2'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vl'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vnni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='fsrm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='gfni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='ibrs-all'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='la57'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pku'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='taa-no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='vaes'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='vpclmulqdq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='xsaves'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Icelake-Server-v6'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512-vpopcntdq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bitalg'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bw'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512cd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512dq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512f'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512ifma'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vbmi'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vbmi2'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vl'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vnni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='fsrm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='gfni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='ibrs-all'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='la57'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pku'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='taa-no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='vaes'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='vpclmulqdq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='xsaves'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Icelake-Server-v7'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512-vpopcntdq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bitalg'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bw'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512cd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512dq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512f'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512ifma'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vbmi'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vbmi2'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vl'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vnni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='fsrm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='gfni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='hle'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='ibrs-all'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='la57'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pku'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='rtm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='taa-no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='vaes'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='vpclmulqdq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='xsaves'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='IvyBridge'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='IvyBridge-IBRS'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='IvyBridge-v1'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='IvyBridge-v2'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='KnightsMill'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512-4fmaps'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512-4vnniw'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512-vpopcntdq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512cd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512er'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512f'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512pf'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='ss'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='KnightsMill-v1'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512-4fmaps'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512-4vnniw'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512-vpopcntdq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512cd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512er'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512f'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512pf'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='ss'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Opteron_G4'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='fma4'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='xop'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Opteron_G4-v1'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='fma4'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='xop'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Opteron_G5'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='fma4'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='tbm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='xop'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Opteron_G5-v1'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='fma4'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='tbm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='xop'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='SapphireRapids'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='amx-bf16'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='amx-int8'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='amx-tile'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx-vnni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512-bf16'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512-fp16'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512-vpopcntdq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bitalg'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bw'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512cd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512dq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512f'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512ifma'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vbmi'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vbmi2'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vl'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vnni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='bus-lock-detect'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='fsrc'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='fsrm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='fsrs'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='fzrm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='gfni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='hle'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='ibrs-all'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='la57'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pku'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='rtm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='serialize'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='taa-no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='tsx-ldtrk'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='vaes'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='vpclmulqdq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='xfd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='xsaves'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='SapphireRapids-v1'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='amx-bf16'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='amx-int8'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='amx-tile'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx-vnni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512-bf16'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512-fp16'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512-vpopcntdq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bitalg'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bw'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512cd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512dq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512f'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512ifma'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vbmi'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vbmi2'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vl'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vnni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='bus-lock-detect'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='fsrc'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='fsrm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='fsrs'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='fzrm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='gfni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='hle'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='ibrs-all'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='la57'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pku'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='rtm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='serialize'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='taa-no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='tsx-ldtrk'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='vaes'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='vpclmulqdq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='xfd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='xsaves'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='SapphireRapids-v2'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='amx-bf16'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='amx-int8'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='amx-tile'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx-vnni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512-bf16'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512-fp16'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512-vpopcntdq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bitalg'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bw'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512cd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512dq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512f'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512ifma'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vbmi'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vbmi2'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vl'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vnni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='bus-lock-detect'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='fbsdp-no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='fsrc'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='fsrm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='fsrs'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='fzrm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='gfni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='hle'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='ibrs-all'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='la57'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pku'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='psdp-no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='rtm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='sbdr-ssdp-no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='serialize'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='taa-no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='tsx-ldtrk'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='vaes'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='vpclmulqdq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='xfd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='xsaves'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='SapphireRapids-v3'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='amx-bf16'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='amx-int8'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='amx-tile'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx-vnni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512-bf16'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512-fp16'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512-vpopcntdq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bitalg'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bw'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512cd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512dq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512f'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512ifma'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vbmi'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vbmi2'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vl'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vnni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='bus-lock-detect'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='cldemote'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='fbsdp-no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='fsrc'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='fsrm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='fsrs'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='fzrm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='gfni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='hle'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='ibrs-all'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='la57'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='movdir64b'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='movdiri'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pku'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='psdp-no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='rtm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='sbdr-ssdp-no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='serialize'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='ss'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='taa-no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='tsx-ldtrk'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='vaes'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='vpclmulqdq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='xfd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='xsaves'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='SierraForest'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx-ifma'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx-ne-convert'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx-vnni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx-vnni-int8'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='bus-lock-detect'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='cmpccxadd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='fbsdp-no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='fsrm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='fsrs'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='gfni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='ibrs-all'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='mcdt-no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pbrsb-no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pku'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='psdp-no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='sbdr-ssdp-no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='serialize'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='vaes'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='vpclmulqdq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='xsaves'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='SierraForest-v1'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx-ifma'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx-ne-convert'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx-vnni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx-vnni-int8'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='bus-lock-detect'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='cmpccxadd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='fbsdp-no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='fsrm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='fsrs'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='gfni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='ibrs-all'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='mcdt-no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pbrsb-no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pku'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='psdp-no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='sbdr-ssdp-no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='serialize'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='vaes'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='vpclmulqdq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='xsaves'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Skylake-Client'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='hle'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='rtm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Skylake-Client-IBRS'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='hle'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='rtm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Skylake-Client-v1'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='hle'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='rtm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Skylake-Client-v2'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='hle'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='rtm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Skylake-Client-v3'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Skylake-Client-v4'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='xsaves'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Skylake-Server'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bw'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512cd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512dq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512f'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vl'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='hle'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pku'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='rtm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Skylake-Server-IBRS'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bw'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512cd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512dq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512f'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vl'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='hle'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pku'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='rtm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bw'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512cd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512dq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512f'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vl'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pku'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Skylake-Server-v1'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bw'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512cd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512dq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512f'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vl'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='hle'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pku'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='rtm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Skylake-Server-v2'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bw'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512cd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512dq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512f'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vl'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='hle'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pku'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='rtm'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Skylake-Server-v3'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bw'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512cd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512dq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512f'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vl'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pku'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Skylake-Server-v4'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bw'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512cd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512dq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512f'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vl'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pku'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Skylake-Server-v5'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512bw'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512cd'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512dq'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512f'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='avx512vl'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='invpcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pcid'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='pku'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='xsaves'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Snowridge'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='cldemote'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='core-capability'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='gfni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='movdir64b'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='movdiri'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='mpx'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='split-lock-detect'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Snowridge-v1'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='cldemote'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='core-capability'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='gfni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='movdir64b'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='movdiri'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='mpx'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='split-lock-detect'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Snowridge-v2'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='cldemote'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='core-capability'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='gfni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='movdir64b'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='movdiri'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='split-lock-detect'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Snowridge-v3'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='cldemote'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='core-capability'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='gfni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='movdir64b'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='movdiri'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='split-lock-detect'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='xsaves'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='Snowridge-v4'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='cldemote'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='erms'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='gfni'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='movdir64b'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='movdiri'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='xsaves'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='athlon'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='3dnow'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='3dnowext'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='athlon-v1'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='3dnow'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='3dnowext'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='core2duo'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='ss'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='core2duo-v1'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='ss'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='coreduo'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='ss'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='coreduo-v1'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='ss'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='n270'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='ss'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='n270-v1'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='ss'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='phenom'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='3dnow'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='3dnowext'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <blockers model='phenom-v1'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='3dnow'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <feature name='3dnowext'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </blockers>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    </mode>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:  </cpu>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:  <memoryBacking supported='yes'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    <enum name='sourceType'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <value>file</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <value>anonymous</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <value>memfd</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    </enum>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:  </memoryBacking>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:  <devices>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    <disk supported='yes'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <enum name='diskDevice'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>disk</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>cdrom</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>floppy</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>lun</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </enum>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <enum name='bus'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>ide</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>fdc</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>scsi</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>virtio</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>usb</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>sata</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </enum>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <enum name='model'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>virtio</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>virtio-transitional</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>virtio-non-transitional</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </enum>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    </disk>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    <graphics supported='yes'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <enum name='type'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>vnc</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>egl-headless</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>dbus</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </enum>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    </graphics>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    <video supported='yes'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <enum name='modelType'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>vga</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>cirrus</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>virtio</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>none</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>bochs</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>ramfb</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </enum>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    </video>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    <hostdev supported='yes'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <enum name='mode'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>subsystem</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </enum>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <enum name='startupPolicy'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>default</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>mandatory</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>requisite</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>optional</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </enum>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <enum name='subsysType'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>usb</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>pci</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>scsi</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </enum>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <enum name='capsType'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <enum name='pciBackend'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    </hostdev>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    <rng supported='yes'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <enum name='model'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>virtio</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>virtio-transitional</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>virtio-non-transitional</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </enum>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <enum name='backendModel'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>random</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>egd</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>builtin</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </enum>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    </rng>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    <filesystem supported='yes'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <enum name='driverType'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>path</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>handle</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>virtiofs</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </enum>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    </filesystem>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    <tpm supported='yes'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <enum name='model'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>tpm-tis</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>tpm-crb</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </enum>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <enum name='backendModel'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>emulator</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>external</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </enum>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <enum name='backendVersion'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>2.0</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </enum>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    </tpm>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    <redirdev supported='yes'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <enum name='bus'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>usb</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </enum>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    </redirdev>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    <channel supported='yes'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <enum name='type'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>pty</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>unix</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </enum>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    </channel>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    <crypto supported='yes'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <enum name='model'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <enum name='type'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>qemu</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </enum>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <enum name='backendModel'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>builtin</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </enum>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    </crypto>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    <interface supported='yes'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <enum name='backendType'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>default</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>passt</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </enum>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    </interface>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    <panic supported='yes'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <enum name='model'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>isa</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>hyperv</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </enum>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    </panic>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:  </devices>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:  <features>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    <gic supported='no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    <vmcoreinfo supported='yes'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    <genid supported='yes'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    <backingStoreInput supported='yes'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    <backup supported='yes'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    <async-teardown supported='yes'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    <ps2 supported='yes'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    <sev supported='no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    <sgx supported='no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    <hyperv supported='yes'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      <enum name='features'>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>relaxed</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>vapic</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>spinlocks</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>vpindex</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>runtime</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>synic</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>stimer</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>reset</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>vendor_id</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>frequencies</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>reenlightenment</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>tlbflush</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>ipi</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>avic</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>emsr_bitmap</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:        <value>xmm_input</value>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:      </enum>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    </hyperv>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:    <launchSecurity supported='no'/>
Oct  8 06:04:35 np0005475493 nova_compute[262220]:  </features>
Oct  8 06:04:35 np0005475493 nova_compute[262220]: </domainCapabilities>
Oct  8 06:04:35 np0005475493 nova_compute[262220]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.820 2 DEBUG nova.virt.libvirt.host [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.820 2 INFO nova.virt.libvirt.host [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] Secure Boot support detected#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.822 2 INFO nova.virt.libvirt.driver [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.822 2 INFO nova.virt.libvirt.driver [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.832 2 DEBUG nova.virt.libvirt.driver [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.870 2 INFO nova.virt.node [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] Determined node identity 62e4b021-d3ae-43f9-883d-805e2c7d21a2 from /var/lib/nova/compute_id#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.886 2 WARNING nova.compute.manager [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] Compute nodes ['62e4b021-d3ae-43f9-883d-805e2c7d21a2'] for host compute-0.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.924 2 INFO nova.compute.manager [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host#033[00m
Oct  8 06:04:35 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v592: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.989 2 WARNING nova.compute.manager [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] No compute node record found for host compute-0.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.990 2 DEBUG oslo_concurrency.lockutils [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.990 2 DEBUG oslo_concurrency.lockutils [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.990 2 DEBUG oslo_concurrency.lockutils [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.990 2 DEBUG nova.compute.resource_tracker [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  8 06:04:35 np0005475493 nova_compute[262220]: 2025-10-08 10:04:35.991 2 DEBUG oslo_concurrency.processutils [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  8 06:04:36 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:04:36 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:04:36 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:04:36.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:04:36 np0005475493 systemd[1]: session-55.scope: Deactivated successfully.
Oct  8 06:04:36 np0005475493 systemd[1]: session-55.scope: Consumed 2min 40.010s CPU time.
Oct  8 06:04:36 np0005475493 systemd-logind[798]: Session 55 logged out. Waiting for processes to exit.
Oct  8 06:04:36 np0005475493 systemd-logind[798]: Removed session 55.
Oct  8 06:04:36 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct  8 06:04:36 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1325292707' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  8 06:04:36 np0005475493 nova_compute[262220]: 2025-10-08 10:04:36.475 2 DEBUG oslo_concurrency.processutils [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  8 06:04:36 np0005475493 systemd[1]: Starting libvirt nodedev daemon...
Oct  8 06:04:36 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:04:36 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:04:36 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:04:36.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:04:36 np0005475493 systemd[1]: Started libvirt nodedev daemon.
Oct  8 06:04:36 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:04:36 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  8 06:04:36 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:04:36 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 06:04:36 np0005475493 nova_compute[262220]: 2025-10-08 10:04:36.801 2 WARNING nova.virt.libvirt.driver [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct  8 06:04:36 np0005475493 nova_compute[262220]: 2025-10-08 10:04:36.802 2 DEBUG nova.compute.resource_tracker [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4925MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct  8 06:04:36 np0005475493 nova_compute[262220]: 2025-10-08 10:04:36.802 2 DEBUG oslo_concurrency.lockutils [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  8 06:04:36 np0005475493 nova_compute[262220]: 2025-10-08 10:04:36.803 2 DEBUG oslo_concurrency.lockutils [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  8 06:04:36 np0005475493 nova_compute[262220]: 2025-10-08 10:04:36.822 2 WARNING nova.compute.resource_tracker [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] No compute node record for compute-0.ctlplane.example.com:62e4b021-d3ae-43f9-883d-805e2c7d21a2: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host 62e4b021-d3ae-43f9-883d-805e2c7d21a2 could not be found.
Oct  8 06:04:36 np0005475493 nova_compute[262220]: 2025-10-08 10:04:36.856 2 INFO nova.compute.resource_tracker [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] Compute node record created for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com with uuid: 62e4b021-d3ae-43f9-883d-805e2c7d21a2
Oct  8 06:04:36 np0005475493 nova_compute[262220]: 2025-10-08 10:04:36.920 2 DEBUG nova.compute.resource_tracker [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct  8 06:04:36 np0005475493 nova_compute[262220]: 2025-10-08 10:04:36.920 2 DEBUG nova.compute.resource_tracker [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct  8 06:04:37 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:04:37.084Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:04:37 np0005475493 nova_compute[262220]: 2025-10-08 10:04:37.754 2 INFO nova.scheduler.client.report [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] [req-e54a4d4b-04f5-4d0b-9635-3bda654eb34d] Created resource provider record via placement API for resource provider with UUID 62e4b021-d3ae-43f9-883d-805e2c7d21a2 and name compute-0.ctlplane.example.com.
Oct  8 06:04:37 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v593: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s
Oct  8 06:04:38 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:04:38 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:04:38 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:04:38.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:04:38 np0005475493 nova_compute[262220]: 2025-10-08 10:04:38.168 2 DEBUG oslo_concurrency.processutils [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  8 06:04:38 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:04:38 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:04:38 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:04:38.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:04:38 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct  8 06:04:38 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/634133654' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  8 06:04:38 np0005475493 nova_compute[262220]: 2025-10-08 10:04:38.633 2 DEBUG oslo_concurrency.processutils [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  8 06:04:38 np0005475493 nova_compute[262220]: 2025-10-08 10:04:38.638 2 DEBUG nova.virt.libvirt.host [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N
Oct  8 06:04:38 np0005475493 nova_compute[262220]: ] _kernel_supports_amd_sev /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1803
Oct  8 06:04:38 np0005475493 nova_compute[262220]: 2025-10-08 10:04:38.638 2 INFO nova.virt.libvirt.host [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] kernel doesn't support AMD SEV
Oct  8 06:04:38 np0005475493 nova_compute[262220]: 2025-10-08 10:04:38.639 2 DEBUG nova.compute.provider_tree [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] Updating inventory in ProviderTree for provider 62e4b021-d3ae-43f9-883d-805e2c7d21a2 with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct  8 06:04:38 np0005475493 nova_compute[262220]: 2025-10-08 10:04:38.639 2 DEBUG nova.virt.libvirt.driver [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct  8 06:04:38 np0005475493 nova_compute[262220]: 2025-10-08 10:04:38.686 2 DEBUG nova.scheduler.client.report [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] Updated inventory for provider 62e4b021-d3ae-43f9-883d-805e2c7d21a2 with generation 0 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957
Oct  8 06:04:38 np0005475493 nova_compute[262220]: 2025-10-08 10:04:38.686 2 DEBUG nova.compute.provider_tree [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] Updating resource provider 62e4b021-d3ae-43f9-883d-805e2c7d21a2 generation from 0 to 1 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Oct  8 06:04:38 np0005475493 nova_compute[262220]: 2025-10-08 10:04:38.686 2 DEBUG nova.compute.provider_tree [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] Updating inventory in ProviderTree for provider 62e4b021-d3ae-43f9-883d-805e2c7d21a2 with inventory: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct  8 06:04:38 np0005475493 nova_compute[262220]: 2025-10-08 10:04:38.774 2 DEBUG nova.compute.provider_tree [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] Updating resource provider 62e4b021-d3ae-43f9-883d-805e2c7d21a2 generation from 1 to 2 during operation: update_traits _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Oct  8 06:04:38 np0005475493 nova_compute[262220]: 2025-10-08 10:04:38.799 2 DEBUG nova.compute.resource_tracker [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct  8 06:04:38 np0005475493 nova_compute[262220]: 2025-10-08 10:04:38.799 2 DEBUG oslo_concurrency.lockutils [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.996s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  8 06:04:38 np0005475493 nova_compute[262220]: 2025-10-08 10:04:38.799 2 DEBUG nova.service [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] Creating RPC server for service compute start /usr/lib/python3.9/site-packages/nova/service.py:182
Oct  8 06:04:38 np0005475493 nova_compute[262220]: 2025-10-08 10:04:38.916 2 DEBUG nova.service [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python3.9/site-packages/nova/service.py:199
Oct  8 06:04:38 np0005475493 nova_compute[262220]: 2025-10-08 10:04:38.916 2 DEBUG nova.servicegroup.drivers.db [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] DB_Driver: join new ServiceGroup member compute-0.ctlplane.example.com to the compute group, service = <Service: host=compute-0.ctlplane.example.com, binary=nova-compute, manager_class_name=nova.compute.manager.ComputeManager> join /usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py:44
Oct  8 06:04:38 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:04:39 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v594: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Oct  8 06:04:40 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:04:40 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:04:40 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:04:40.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:04:40 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:04:40 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:04:40 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:04:40.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:04:41 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v595: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 938 B/s wr, 3 op/s
Oct  8 06:04:42 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:04:42 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 06:04:42 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:04:42.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 06:04:42 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:04:42 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:04:42 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:04:42.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:04:42 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:04:42 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct  8 06:04:42 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:04:42 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Oct  8 06:04:42 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:04:42 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Oct  8 06:04:42 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:04:42 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Oct  8 06:04:42 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:04:42 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Oct  8 06:04:42 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:04:42 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Oct  8 06:04:42 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:04:42 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Oct  8 06:04:42 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:04:42 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct  8 06:04:42 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:04:42 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct  8 06:04:42 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:04:42 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct  8 06:04:42 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:04:42 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Oct  8 06:04:42 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:04:42 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct  8 06:04:42 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:04:42 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Oct  8 06:04:42 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:04:42 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Oct  8 06:04:42 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:04:42 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Oct  8 06:04:42 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:04:42 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Oct  8 06:04:42 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:04:42 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Oct  8 06:04:42 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:04:42 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Oct  8 06:04:42 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:04:42 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Oct  8 06:04:42 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:04:42 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Oct  8 06:04:42 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:04:42 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Oct  8 06:04:42 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:04:42 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Oct  8 06:04:42 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:04:42 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Oct  8 06:04:42 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:04:42 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Oct  8 06:04:42 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:04:42 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Oct  8 06:04:42 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:04:42 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Oct  8 06:04:42 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:04:42 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Oct  8 06:04:42 np0005475493 podman[262630]: 2025-10-08 10:04:42.907785417 +0000 UTC m=+0.062524287 container health_status 2ac58bf9d2c7421f24478941b0f23af12cc30298ac84467db16bf52ecb1157c3 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct  8 06:04:43 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:04:43 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f064c000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:04:43 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v596: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Oct  8 06:04:43 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:04:44 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:04:44 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f06380016e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:04:44 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:04:44 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:04:44 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:04:44.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:04:44 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:04:44 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:04:44 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:04:44.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:04:44 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:04:44 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0628000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:04:45 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:04:45] "GET /metrics HTTP/1.1" 200 48341 "" "Prometheus/2.51.0"
Oct  8 06:04:45 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:04:45] "GET /metrics HTTP/1.1" 200 48341 "" "Prometheus/2.51.0"
Oct  8 06:04:45 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-haproxy-nfs-cephfs-compute-0-cwhopp[96588]: [WARNING] 280/100445 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct  8 06:04:45 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:04:45 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f064c000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:04:45 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v597: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s
Oct  8 06:04:46 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:04:46 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f062c000fa0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:04:46 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:04:46 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:04:46 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:04:46.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:04:46 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:04:46 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 06:04:46 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:04:46.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 06:04:46 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:04:46 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f06380016e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:04:47 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:04:47.085Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:04:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] Optimize plan auto_2025-10-08_10:04:47
Oct  8 06:04:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  8 06:04:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] do_upmap
Oct  8 06:04:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] pools ['vms', 'default.rgw.control', 'volumes', 'backups', 'default.rgw.log', '.mgr', '.nfs', '.rgw.root', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'default.rgw.meta', 'images']
Oct  8 06:04:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] prepared 0/10 upmap changes
Oct  8 06:04:47 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:04:47 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f06280016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:04:47 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  8 06:04:47 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  8 06:04:47 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 06:04:47 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 06:04:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] _maybe_adjust
Oct  8 06:04:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:04:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  8 06:04:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:04:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 06:04:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:04:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 06:04:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:04:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 06:04:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:04:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 06:04:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:04:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  8 06:04:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:04:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 06:04:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:04:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct  8 06:04:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:04:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  8 06:04:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:04:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 06:04:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:04:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  8 06:04:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:04:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  8 06:04:47 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v598: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s
Oct  8 06:04:47 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  8 06:04:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  8 06:04:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  8 06:04:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  8 06:04:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  8 06:04:48 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:04:48 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f064c000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:04:48 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:04:48 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:04:48 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:04:48.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:04:48 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 06:04:48 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 06:04:48 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 06:04:48 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 06:04:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  8 06:04:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  8 06:04:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  8 06:04:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  8 06:04:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  8 06:04:48 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:04:48 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:04:48 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:04:48.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:04:48 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:04:48 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f062c001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:04:48 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:04:49 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:04:49 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f06380016e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:04:49 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v599: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s
Oct  8 06:04:50 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:04:50 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f06280016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:04:50 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:04:50 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 06:04:50 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:04:50.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 06:04:50 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:04:50 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:04:50 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:04:50.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:04:50 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:04:50 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f064c000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:04:51 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:04:51 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f062c001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:04:51 np0005475493 nova_compute[262220]: 2025-10-08 10:04:51.918 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:04:51 np0005475493 nova_compute[262220]: 2025-10-08 10:04:51.944 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:04:51 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v600: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Oct  8 06:04:52 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:04:52 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f06380016e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:04:52 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:04:52 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:04:52 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:04:52.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:04:52 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:04:52 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:04:52 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:04:52.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:04:52 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:04:52 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f06280016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:04:53 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:04:53 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f064c0091b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:04:53 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:04:53 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v601: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Oct  8 06:04:54 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:04:54 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f062c001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:04:54 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:04:54 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:04:54 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:04:54.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:04:54 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:04:54 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:04:54 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:04:54.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:04:54 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:04:54 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f06380016e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:04:55 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:04:55] "GET /metrics HTTP/1.1" 200 48347 "" "Prometheus/2.51.0"
Oct  8 06:04:55 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:04:55] "GET /metrics HTTP/1.1" 200 48347 "" "Prometheus/2.51.0"
Oct  8 06:04:55 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:04:55 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0628002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:04:55 np0005475493 podman[262692]: 2025-10-08 10:04:55.929559087 +0000 UTC m=+0.090329637 container health_status 750e81e9592502f2fa35d047c8ae9bd75eb38ed415148db558454f5281397e7f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller)
Oct  8 06:04:55 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v602: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct  8 06:04:56 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:04:56 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f064c0091b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:04:56 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:04:56 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:04:56 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:04:56.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:04:56 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:04:56 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:04:56 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:04:56.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:04:56 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:04:56 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f062c002f50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:04:57 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:04:57.086Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct  8 06:04:57 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:04:57.086Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:04:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:04:57.402 163175 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  8 06:04:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:04:57.403 163175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  8 06:04:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:04:57.403 163175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  8 06:04:57 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:04:57 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f06380016e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:04:57 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v603: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct  8 06:04:58 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:04:58 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0628002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:04:58 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:04:58 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:04:58 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:04:58.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:04:58 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:04:58 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:04:58 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:04:58.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:04:58 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:04:58 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f064c009ec0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:04:58 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:04:59 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:04:59 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f062c002f50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:04:59 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v604: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct  8 06:05:00 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:05:00 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f062c002f50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:05:00 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:05:00 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 06:05:00 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:05:00.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 06:05:00 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:05:00 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:05:00 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:05:00.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:05:00 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:05:00 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0628002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:05:01 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:05:01 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f064c009ec0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:05:01 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v605: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct  8 06:05:02 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:05:02 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f06380016e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:05:02 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:05:02 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:05:02 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:05:02.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:05:02 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:05:02 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:05:02 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:05:02.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:05:02 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:05:02 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f062c004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:05:02 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  8 06:05:02 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  8 06:05:02 np0005475493 podman[262726]: 2025-10-08 10:05:02.897815306 +0000 UTC m=+0.053916956 container health_status 96c17e36c3e57b043a764fc4d64e20610b8683e4881cc1857643eca7aeb37784 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct  8 06:05:03 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:05:03 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f062c004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:05:03 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:05:03 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v606: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Oct  8 06:05:04 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:05:04 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f064c009ec0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:05:04 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:05:04 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 06:05:04 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:05:04.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 06:05:04 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:05:04 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 06:05:04 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:05:04.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 06:05:04 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:05:04 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f06380016e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:05:04 np0005475493 podman[262749]: 2025-10-08 10:05:04.893087161 +0000 UTC m=+0.054590580 container health_status 1ece418ccdf19a8019e8be8e665c938dc3c333817a211973536e2416f095c311 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Oct  8 06:05:05 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Oct  8 06:05:05 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/398418135' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  8 06:05:05 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Oct  8 06:05:05 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/398418135' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  8 06:05:05 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:05:05] "GET /metrics HTTP/1.1" 200 48343 "" "Prometheus/2.51.0"
Oct  8 06:05:05 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:05:05] "GET /metrics HTTP/1.1" 200 48343 "" "Prometheus/2.51.0"
Oct  8 06:05:05 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:05:05 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0628003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:05:05 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v607: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct  8 06:05:06 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:05:06 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f062c004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:05:06 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:05:06 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:05:06 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:05:06.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:05:06 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Oct  8 06:05:06 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1345408182' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  8 06:05:06 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Oct  8 06:05:06 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1345408182' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  8 06:05:06 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:05:06 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:05:06 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:05:06.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:05:06 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:05:06 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f064c009ec0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:05:07 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:05:07.087Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:05:07 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:05:07 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f06380016e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:05:07 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v608: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct  8 06:05:08 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:05:08 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0628003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:05:08 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:05:08 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:05:08 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:05:08.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:05:08 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:05:08 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:05:08 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:05:08.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:05:08 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:05:08 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f062c004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:05:08 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:05:09 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:05:09 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f064c009ec0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:05:09 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v609: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct  8 06:05:10 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:05:10 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f06380037a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:05:10 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:05:10 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:05:10 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:05:10.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:05:10 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:05:10 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:05:10 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:05:10.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:05:10 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:05:10 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0628003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:05:11 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:05:11 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f062c004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:05:11 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v610: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct  8 06:05:12 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:05:12 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f064c009ec0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:05:12 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:05:12 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:05:12 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:05:12.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:05:12 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:05:12 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:05:12 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:05:12.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:05:12 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:05:12 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f06380037a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:05:13 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:05:13 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0628003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:05:13 np0005475493 podman[262805]: 2025-10-08 10:05:13.904576843 +0000 UTC m=+0.057935060 container health_status 2ac58bf9d2c7421f24478941b0f23af12cc30298ac84467db16bf52ecb1157c3 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=iscsid, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  8 06:05:13 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:05:13 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v611: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Oct  8 06:05:14 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:05:14 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f062c004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:05:14 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:05:14 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:05:14 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:05:14.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:05:14 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:05:14 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:05:14 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:05:14.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:05:14 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:05:14 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f064c009ec0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:05:15 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:05:15] "GET /metrics HTTP/1.1" 200 48343 "" "Prometheus/2.51.0"
Oct  8 06:05:15 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:05:15] "GET /metrics HTTP/1.1" 200 48343 "" "Prometheus/2.51.0"
Oct  8 06:05:15 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:05:15 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f06380037a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:05:15 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v612: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct  8 06:05:16 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:05:16 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0628003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:05:16 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:05:16 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:05:16 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:05:16.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:05:16 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:05:16 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:05:16 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:05:16.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:05:16 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:05:16 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f061c000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:05:17 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:05:17.088Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct  8 06:05:17 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:05:17.088Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:05:17 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  8 06:05:17 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  8 06:05:17 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:05:17 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f064c009ec0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:05:17 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 06:05:17 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 06:05:17 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v613: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct  8 06:05:18 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:05:18 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f06380037a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:05:18 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:05:18 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:05:18 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:05:18.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:05:18 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 06:05:18 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 06:05:18 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 06:05:18 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 06:05:18 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:05:18 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:05:18 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:05:18.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:05:18 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:05:18 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0628003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:05:18 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-haproxy-nfs-cephfs-compute-0-cwhopp[96588]: [WARNING] 280/100518 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct  8 06:05:18 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:05:19 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct  8 06:05:19 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:05:19 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f061c0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:05:19 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:05:19 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct  8 06:05:19 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:05:19 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v614: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct  8 06:05:20 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:05:20 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f064c009ec0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:05:20 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:05:20 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:05:20 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:05:20.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:05:20 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  8 06:05:20 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  8 06:05:20 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct  8 06:05:20 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  8 06:05:20 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct  8 06:05:20 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:05:20 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:05:20 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:05:20.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:05:20 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:05:20 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct  8 06:05:20 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:05:20 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f064c009ec0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:05:20 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:05:20 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct  8 06:05:20 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  8 06:05:20 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct  8 06:05:20 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  8 06:05:20 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  8 06:05:20 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  8 06:05:20 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:05:20 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:05:20 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  8 06:05:20 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:05:20 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:05:20 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  8 06:05:21 np0005475493 podman[263012]: 2025-10-08 10:05:21.257346943 +0000 UTC m=+0.045183810 container create 29530d826fa666ff5873b16e1e87c8e8b2d6e3c9b3df93c5122f2ff3d4998402 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_ramanujan, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  8 06:05:21 np0005475493 systemd[1]: Started libpod-conmon-29530d826fa666ff5873b16e1e87c8e8b2d6e3c9b3df93c5122f2ff3d4998402.scope.
Oct  8 06:05:21 np0005475493 systemd[1]: Started libcrun container.
Oct  8 06:05:21 np0005475493 podman[263012]: 2025-10-08 10:05:21.236545716 +0000 UTC m=+0.024382623 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 06:05:21 np0005475493 podman[263012]: 2025-10-08 10:05:21.375540175 +0000 UTC m=+0.163377072 container init 29530d826fa666ff5873b16e1e87c8e8b2d6e3c9b3df93c5122f2ff3d4998402 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_ramanujan, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  8 06:05:21 np0005475493 podman[263012]: 2025-10-08 10:05:21.38379426 +0000 UTC m=+0.171631137 container start 29530d826fa666ff5873b16e1e87c8e8b2d6e3c9b3df93c5122f2ff3d4998402 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_ramanujan, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Oct  8 06:05:21 np0005475493 interesting_ramanujan[263028]: 167 167
Oct  8 06:05:21 np0005475493 systemd[1]: libpod-29530d826fa666ff5873b16e1e87c8e8b2d6e3c9b3df93c5122f2ff3d4998402.scope: Deactivated successfully.
Oct  8 06:05:21 np0005475493 conmon[263028]: conmon 29530d826fa666ff5873 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-29530d826fa666ff5873b16e1e87c8e8b2d6e3c9b3df93c5122f2ff3d4998402.scope/container/memory.events
Oct  8 06:05:21 np0005475493 podman[263012]: 2025-10-08 10:05:21.417877034 +0000 UTC m=+0.205713921 container attach 29530d826fa666ff5873b16e1e87c8e8b2d6e3c9b3df93c5122f2ff3d4998402 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_ramanujan, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  8 06:05:21 np0005475493 podman[263012]: 2025-10-08 10:05:21.419483815 +0000 UTC m=+0.207320692 container died 29530d826fa666ff5873b16e1e87c8e8b2d6e3c9b3df93c5122f2ff3d4998402 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_ramanujan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  8 06:05:21 np0005475493 systemd[1]: var-lib-containers-storage-overlay-027552be38688e8e7cb393c5f547fe7d9044305a9f0d9fb2820f6e65bc1f93b5-merged.mount: Deactivated successfully.
Oct  8 06:05:21 np0005475493 podman[263012]: 2025-10-08 10:05:21.467564797 +0000 UTC m=+0.255401674 container remove 29530d826fa666ff5873b16e1e87c8e8b2d6e3c9b3df93c5122f2ff3d4998402 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_ramanujan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Oct  8 06:05:21 np0005475493 systemd[1]: libpod-conmon-29530d826fa666ff5873b16e1e87c8e8b2d6e3c9b3df93c5122f2ff3d4998402.scope: Deactivated successfully.
Oct  8 06:05:21 np0005475493 podman[263052]: 2025-10-08 10:05:21.641975313 +0000 UTC m=+0.040350176 container create 45bb1fe3f9bc62bd937220b2b3c236294f36ba58bb54867bf1b9c0a064e89a5f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_mayer, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Oct  8 06:05:21 np0005475493 systemd[1]: Started libpod-conmon-45bb1fe3f9bc62bd937220b2b3c236294f36ba58bb54867bf1b9c0a064e89a5f.scope.
Oct  8 06:05:21 np0005475493 systemd[1]: Started libcrun container.
Oct  8 06:05:21 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49ad66571a94c4e710a694a4d52277b0100e7609c22b81d9b134c013ce159a6d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  8 06:05:21 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49ad66571a94c4e710a694a4d52277b0100e7609c22b81d9b134c013ce159a6d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 06:05:21 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49ad66571a94c4e710a694a4d52277b0100e7609c22b81d9b134c013ce159a6d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 06:05:21 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49ad66571a94c4e710a694a4d52277b0100e7609c22b81d9b134c013ce159a6d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  8 06:05:21 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49ad66571a94c4e710a694a4d52277b0100e7609c22b81d9b134c013ce159a6d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  8 06:05:21 np0005475493 podman[263052]: 2025-10-08 10:05:21.62505425 +0000 UTC m=+0.023429143 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 06:05:21 np0005475493 podman[263052]: 2025-10-08 10:05:21.722761095 +0000 UTC m=+0.121135988 container init 45bb1fe3f9bc62bd937220b2b3c236294f36ba58bb54867bf1b9c0a064e89a5f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_mayer, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Oct  8 06:05:21 np0005475493 podman[263052]: 2025-10-08 10:05:21.729233803 +0000 UTC m=+0.127608686 container start 45bb1fe3f9bc62bd937220b2b3c236294f36ba58bb54867bf1b9c0a064e89a5f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_mayer, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  8 06:05:21 np0005475493 podman[263052]: 2025-10-08 10:05:21.733574942 +0000 UTC m=+0.131949815 container attach 45bb1fe3f9bc62bd937220b2b3c236294f36ba58bb54867bf1b9c0a064e89a5f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_mayer, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True)
Oct  8 06:05:21 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:05:21 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0628003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:05:21 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v615: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct  8 06:05:22 np0005475493 romantic_mayer[263069]: --> passed data devices: 0 physical, 1 LVM
Oct  8 06:05:22 np0005475493 romantic_mayer[263069]: --> All data devices are unavailable
Oct  8 06:05:22 np0005475493 systemd[1]: libpod-45bb1fe3f9bc62bd937220b2b3c236294f36ba58bb54867bf1b9c0a064e89a5f.scope: Deactivated successfully.
Oct  8 06:05:22 np0005475493 podman[263052]: 2025-10-08 10:05:22.056698318 +0000 UTC m=+0.455073201 container died 45bb1fe3f9bc62bd937220b2b3c236294f36ba58bb54867bf1b9c0a064e89a5f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_mayer, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True)
Oct  8 06:05:22 np0005475493 systemd[1]: var-lib-containers-storage-overlay-49ad66571a94c4e710a694a4d52277b0100e7609c22b81d9b134c013ce159a6d-merged.mount: Deactivated successfully.
Oct  8 06:05:22 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:05:22 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f061c0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:05:22 np0005475493 podman[263052]: 2025-10-08 10:05:22.10291132 +0000 UTC m=+0.501286184 container remove 45bb1fe3f9bc62bd937220b2b3c236294f36ba58bb54867bf1b9c0a064e89a5f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_mayer, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct  8 06:05:22 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:05:22 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:05:22 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:05:22.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:05:22 np0005475493 systemd[1]: libpod-conmon-45bb1fe3f9bc62bd937220b2b3c236294f36ba58bb54867bf1b9c0a064e89a5f.scope: Deactivated successfully.
Oct  8 06:05:22 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:05:22 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:05:22 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:05:22.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:05:22 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:05:22 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f06380037a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:05:22 np0005475493 podman[263187]: 2025-10-08 10:05:22.698579841 +0000 UTC m=+0.041380649 container create e435fcbdb68719d60482abe71720b037aebdf35556dd40a926cb0955cc369a1b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_pare, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0)
Oct  8 06:05:22 np0005475493 systemd[1]: Started libpod-conmon-e435fcbdb68719d60482abe71720b037aebdf35556dd40a926cb0955cc369a1b.scope.
Oct  8 06:05:22 np0005475493 systemd[1]: Started libcrun container.
Oct  8 06:05:22 np0005475493 podman[263187]: 2025-10-08 10:05:22.679825379 +0000 UTC m=+0.022626217 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 06:05:22 np0005475493 podman[263187]: 2025-10-08 10:05:22.786484351 +0000 UTC m=+0.129285179 container init e435fcbdb68719d60482abe71720b037aebdf35556dd40a926cb0955cc369a1b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_pare, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct  8 06:05:22 np0005475493 podman[263187]: 2025-10-08 10:05:22.794174167 +0000 UTC m=+0.136974975 container start e435fcbdb68719d60482abe71720b037aebdf35556dd40a926cb0955cc369a1b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_pare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Oct  8 06:05:22 np0005475493 practical_pare[263203]: 167 167
Oct  8 06:05:22 np0005475493 systemd[1]: libpod-e435fcbdb68719d60482abe71720b037aebdf35556dd40a926cb0955cc369a1b.scope: Deactivated successfully.
Oct  8 06:05:22 np0005475493 conmon[263203]: conmon e435fcbdb68719d60482 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e435fcbdb68719d60482abe71720b037aebdf35556dd40a926cb0955cc369a1b.scope/container/memory.events
Oct  8 06:05:22 np0005475493 podman[263187]: 2025-10-08 10:05:22.828823499 +0000 UTC m=+0.171624307 container attach e435fcbdb68719d60482abe71720b037aebdf35556dd40a926cb0955cc369a1b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_pare, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Oct  8 06:05:22 np0005475493 podman[263187]: 2025-10-08 10:05:22.829531232 +0000 UTC m=+0.172332040 container died e435fcbdb68719d60482abe71720b037aebdf35556dd40a926cb0955cc369a1b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_pare, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct  8 06:05:22 np0005475493 systemd[1]: var-lib-containers-storage-overlay-d3f0e4b9107df44b384e6d24ab0c8a7ee0ac3671e202a808b9cd1ee4fe674b52-merged.mount: Deactivated successfully.
Oct  8 06:05:23 np0005475493 podman[263187]: 2025-10-08 10:05:23.051992639 +0000 UTC m=+0.394793447 container remove e435fcbdb68719d60482abe71720b037aebdf35556dd40a926cb0955cc369a1b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_pare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  8 06:05:23 np0005475493 systemd[1]: libpod-conmon-e435fcbdb68719d60482abe71720b037aebdf35556dd40a926cb0955cc369a1b.scope: Deactivated successfully.
Oct  8 06:05:23 np0005475493 podman[263228]: 2025-10-08 10:05:23.289811139 +0000 UTC m=+0.083316654 container create 6b90c314c3fa81888e67f21de36cb5514f26d9943dcc1ebcfe69bab1c76018e0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_babbage, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct  8 06:05:23 np0005475493 podman[263228]: 2025-10-08 10:05:23.227923583 +0000 UTC m=+0.021429108 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 06:05:23 np0005475493 systemd[1]: Started libpod-conmon-6b90c314c3fa81888e67f21de36cb5514f26d9943dcc1ebcfe69bab1c76018e0.scope.
Oct  8 06:05:23 np0005475493 systemd[1]: Started libcrun container.
Oct  8 06:05:23 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c78adcf053e0248b209e9933261764a7cf694a3ef6c1e2d960dc5691b534a8c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  8 06:05:23 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c78adcf053e0248b209e9933261764a7cf694a3ef6c1e2d960dc5691b534a8c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 06:05:23 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c78adcf053e0248b209e9933261764a7cf694a3ef6c1e2d960dc5691b534a8c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 06:05:23 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c78adcf053e0248b209e9933261764a7cf694a3ef6c1e2d960dc5691b534a8c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  8 06:05:23 np0005475493 podman[263228]: 2025-10-08 10:05:23.369525807 +0000 UTC m=+0.163031342 container init 6b90c314c3fa81888e67f21de36cb5514f26d9943dcc1ebcfe69bab1c76018e0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_babbage, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct  8 06:05:23 np0005475493 podman[263228]: 2025-10-08 10:05:23.377584385 +0000 UTC m=+0.171089900 container start 6b90c314c3fa81888e67f21de36cb5514f26d9943dcc1ebcfe69bab1c76018e0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_babbage, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  8 06:05:23 np0005475493 podman[263228]: 2025-10-08 10:05:23.381746359 +0000 UTC m=+0.175251904 container attach 6b90c314c3fa81888e67f21de36cb5514f26d9943dcc1ebcfe69bab1c76018e0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_babbage, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  8 06:05:23 np0005475493 eloquent_babbage[263244]: {
Oct  8 06:05:23 np0005475493 eloquent_babbage[263244]:    "1": [
Oct  8 06:05:23 np0005475493 eloquent_babbage[263244]:        {
Oct  8 06:05:23 np0005475493 eloquent_babbage[263244]:            "devices": [
Oct  8 06:05:23 np0005475493 eloquent_babbage[263244]:                "/dev/loop3"
Oct  8 06:05:23 np0005475493 eloquent_babbage[263244]:            ],
Oct  8 06:05:23 np0005475493 eloquent_babbage[263244]:            "lv_name": "ceph_lv0",
Oct  8 06:05:23 np0005475493 eloquent_babbage[263244]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  8 06:05:23 np0005475493 eloquent_babbage[263244]:            "lv_size": "21470642176",
Oct  8 06:05:23 np0005475493 eloquent_babbage[263244]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=787292cc-8154-50c4-9e00-e9be3e817149,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=85fe3e7b-5e0f-4a19-934c-310215b2e933,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct  8 06:05:23 np0005475493 eloquent_babbage[263244]:            "lv_uuid": "znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj",
Oct  8 06:05:23 np0005475493 eloquent_babbage[263244]:            "name": "ceph_lv0",
Oct  8 06:05:23 np0005475493 eloquent_babbage[263244]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  8 06:05:23 np0005475493 eloquent_babbage[263244]:            "tags": {
Oct  8 06:05:23 np0005475493 eloquent_babbage[263244]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  8 06:05:23 np0005475493 eloquent_babbage[263244]:                "ceph.block_uuid": "znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj",
Oct  8 06:05:23 np0005475493 eloquent_babbage[263244]:                "ceph.cephx_lockbox_secret": "",
Oct  8 06:05:23 np0005475493 eloquent_babbage[263244]:                "ceph.cluster_fsid": "787292cc-8154-50c4-9e00-e9be3e817149",
Oct  8 06:05:23 np0005475493 eloquent_babbage[263244]:                "ceph.cluster_name": "ceph",
Oct  8 06:05:23 np0005475493 eloquent_babbage[263244]:                "ceph.crush_device_class": "",
Oct  8 06:05:23 np0005475493 eloquent_babbage[263244]:                "ceph.encrypted": "0",
Oct  8 06:05:23 np0005475493 eloquent_babbage[263244]:                "ceph.osd_fsid": "85fe3e7b-5e0f-4a19-934c-310215b2e933",
Oct  8 06:05:23 np0005475493 eloquent_babbage[263244]:                "ceph.osd_id": "1",
Oct  8 06:05:23 np0005475493 eloquent_babbage[263244]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  8 06:05:23 np0005475493 eloquent_babbage[263244]:                "ceph.type": "block",
Oct  8 06:05:23 np0005475493 eloquent_babbage[263244]:                "ceph.vdo": "0",
Oct  8 06:05:23 np0005475493 eloquent_babbage[263244]:                "ceph.with_tpm": "0"
Oct  8 06:05:23 np0005475493 eloquent_babbage[263244]:            },
Oct  8 06:05:23 np0005475493 eloquent_babbage[263244]:            "type": "block",
Oct  8 06:05:23 np0005475493 eloquent_babbage[263244]:            "vg_name": "ceph_vg0"
Oct  8 06:05:23 np0005475493 eloquent_babbage[263244]:        }
Oct  8 06:05:23 np0005475493 eloquent_babbage[263244]:    ]
Oct  8 06:05:23 np0005475493 eloquent_babbage[263244]: }
Oct  8 06:05:23 np0005475493 systemd[1]: libpod-6b90c314c3fa81888e67f21de36cb5514f26d9943dcc1ebcfe69bab1c76018e0.scope: Deactivated successfully.
Oct  8 06:05:23 np0005475493 podman[263228]: 2025-10-08 10:05:23.688548821 +0000 UTC m=+0.482054356 container died 6b90c314c3fa81888e67f21de36cb5514f26d9943dcc1ebcfe69bab1c76018e0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_babbage, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2)
Oct  8 06:05:23 np0005475493 systemd[1]: var-lib-containers-storage-overlay-6c78adcf053e0248b209e9933261764a7cf694a3ef6c1e2d960dc5691b534a8c-merged.mount: Deactivated successfully.
Oct  8 06:05:23 np0005475493 podman[263228]: 2025-10-08 10:05:23.816601819 +0000 UTC m=+0.610107334 container remove 6b90c314c3fa81888e67f21de36cb5514f26d9943dcc1ebcfe69bab1c76018e0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_babbage, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  8 06:05:23 np0005475493 systemd[1]: libpod-conmon-6b90c314c3fa81888e67f21de36cb5514f26d9943dcc1ebcfe69bab1c76018e0.scope: Deactivated successfully.
Oct  8 06:05:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:05:23 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f064c009ec0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:05:23 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:05:23 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v616: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Oct  8 06:05:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:05:24 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f064c009ec0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:05:24 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:05:24 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:05:24 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:05:24.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:05:24 np0005475493 podman[263360]: 2025-10-08 10:05:24.38952668 +0000 UTC m=+0.038360612 container create 898a24fb116e2f39205a990e5aae3f5d1b6616e40b665eb507c0df15a9e8bd0e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_mccarthy, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  8 06:05:24 np0005475493 systemd[1]: Started libpod-conmon-898a24fb116e2f39205a990e5aae3f5d1b6616e40b665eb507c0df15a9e8bd0e.scope.
Oct  8 06:05:24 np0005475493 systemd[1]: Started libcrun container.
Oct  8 06:05:24 np0005475493 podman[263360]: 2025-10-08 10:05:24.375640485 +0000 UTC m=+0.024474437 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 06:05:24 np0005475493 podman[263360]: 2025-10-08 10:05:24.474645651 +0000 UTC m=+0.123479593 container init 898a24fb116e2f39205a990e5aae3f5d1b6616e40b665eb507c0df15a9e8bd0e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_mccarthy, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct  8 06:05:24 np0005475493 podman[263360]: 2025-10-08 10:05:24.480419436 +0000 UTC m=+0.129253368 container start 898a24fb116e2f39205a990e5aae3f5d1b6616e40b665eb507c0df15a9e8bd0e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_mccarthy, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Oct  8 06:05:24 np0005475493 heuristic_mccarthy[263377]: 167 167
Oct  8 06:05:24 np0005475493 systemd[1]: libpod-898a24fb116e2f39205a990e5aae3f5d1b6616e40b665eb507c0df15a9e8bd0e.scope: Deactivated successfully.
Oct  8 06:05:24 np0005475493 podman[263360]: 2025-10-08 10:05:24.485641453 +0000 UTC m=+0.134475405 container attach 898a24fb116e2f39205a990e5aae3f5d1b6616e40b665eb507c0df15a9e8bd0e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_mccarthy, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  8 06:05:24 np0005475493 podman[263360]: 2025-10-08 10:05:24.48615103 +0000 UTC m=+0.134984952 container died 898a24fb116e2f39205a990e5aae3f5d1b6616e40b665eb507c0df15a9e8bd0e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_mccarthy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  8 06:05:24 np0005475493 systemd[1]: var-lib-containers-storage-overlay-3e78c3a02a23b2d14d420dda26df807ec1b7fce999a2de93408a5b79c66c5ced-merged.mount: Deactivated successfully.
Oct  8 06:05:24 np0005475493 podman[263360]: 2025-10-08 10:05:24.560370342 +0000 UTC m=+0.209204264 container remove 898a24fb116e2f39205a990e5aae3f5d1b6616e40b665eb507c0df15a9e8bd0e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_mccarthy, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Oct  8 06:05:24 np0005475493 systemd[1]: libpod-conmon-898a24fb116e2f39205a990e5aae3f5d1b6616e40b665eb507c0df15a9e8bd0e.scope: Deactivated successfully.
Oct  8 06:05:24 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:05:24 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:05:24 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:05:24.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:05:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:05:24 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f061c0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:05:24 np0005475493 podman[263401]: 2025-10-08 10:05:24.731294304 +0000 UTC m=+0.042802664 container create 1f6046a2bc1c69551b5156a6f2222e4ecb6803a7528f658e1457777a7f32c610 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_lamport, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  8 06:05:24 np0005475493 systemd[1]: Started libpod-conmon-1f6046a2bc1c69551b5156a6f2222e4ecb6803a7528f658e1457777a7f32c610.scope.
Oct  8 06:05:24 np0005475493 systemd[1]: Started libcrun container.
Oct  8 06:05:24 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7d59a5a6116a1f1546008538956a5d0f0548576878256d41a15d83db7d59a59/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  8 06:05:24 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7d59a5a6116a1f1546008538956a5d0f0548576878256d41a15d83db7d59a59/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 06:05:24 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7d59a5a6116a1f1546008538956a5d0f0548576878256d41a15d83db7d59a59/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 06:05:24 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7d59a5a6116a1f1546008538956a5d0f0548576878256d41a15d83db7d59a59/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  8 06:05:24 np0005475493 podman[263401]: 2025-10-08 10:05:24.807191919 +0000 UTC m=+0.118700289 container init 1f6046a2bc1c69551b5156a6f2222e4ecb6803a7528f658e1457777a7f32c610 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_lamport, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  8 06:05:24 np0005475493 podman[263401]: 2025-10-08 10:05:24.714820406 +0000 UTC m=+0.026328786 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 06:05:24 np0005475493 podman[263401]: 2025-10-08 10:05:24.817660056 +0000 UTC m=+0.129168416 container start 1f6046a2bc1c69551b5156a6f2222e4ecb6803a7528f658e1457777a7f32c610 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_lamport, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  8 06:05:24 np0005475493 podman[263401]: 2025-10-08 10:05:24.822680797 +0000 UTC m=+0.134189177 container attach 1f6046a2bc1c69551b5156a6f2222e4ecb6803a7528f658e1457777a7f32c610 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_lamport, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  8 06:05:25 np0005475493 lvm[263518]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct  8 06:05:25 np0005475493 lvm[263518]: VG ceph_vg0 finished
Oct  8 06:05:25 np0005475493 sweet_lamport[263417]: {}
Oct  8 06:05:25 np0005475493 systemd[1]: libpod-1f6046a2bc1c69551b5156a6f2222e4ecb6803a7528f658e1457777a7f32c610.scope: Deactivated successfully.
Oct  8 06:05:25 np0005475493 podman[263401]: 2025-10-08 10:05:25.538659816 +0000 UTC m=+0.850168176 container died 1f6046a2bc1c69551b5156a6f2222e4ecb6803a7528f658e1457777a7f32c610 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_lamport, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  8 06:05:25 np0005475493 systemd[1]: libpod-1f6046a2bc1c69551b5156a6f2222e4ecb6803a7528f658e1457777a7f32c610.scope: Consumed 1.142s CPU time.
Oct  8 06:05:25 np0005475493 systemd[1]: var-lib-containers-storage-overlay-b7d59a5a6116a1f1546008538956a5d0f0548576878256d41a15d83db7d59a59-merged.mount: Deactivated successfully.
Oct  8 06:05:25 np0005475493 podman[263401]: 2025-10-08 10:05:25.581130999 +0000 UTC m=+0.892639359 container remove 1f6046a2bc1c69551b5156a6f2222e4ecb6803a7528f658e1457777a7f32c610 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_lamport, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Oct  8 06:05:25 np0005475493 systemd[1]: libpod-conmon-1f6046a2bc1c69551b5156a6f2222e4ecb6803a7528f658e1457777a7f32c610.scope: Deactivated successfully.
Oct  8 06:05:25 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  8 06:05:25 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:05:25 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  8 06:05:25 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:05:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:05:25] "GET /metrics HTTP/1.1" 200 48342 "" "Prometheus/2.51.0"
Oct  8 06:05:25 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:05:25] "GET /metrics HTTP/1.1" 200 48342 "" "Prometheus/2.51.0"
Oct  8 06:05:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:05:25 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f06380037a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:05:25 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v617: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct  8 06:05:26 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:05:26 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0628003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:05:26 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:05:26 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:05:26 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:05:26.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:05:26 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:05:26 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:05:26 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:05:26.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:05:26 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:05:26 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:05:26 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:05:26 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f064c00a7e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:05:26 np0005475493 podman[263559]: 2025-10-08 10:05:26.989923406 +0000 UTC m=+0.139589440 container health_status 750e81e9592502f2fa35d047c8ae9bd75eb38ed415148db558454f5281397e7f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct  8 06:05:27 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:05:27.090Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:05:27 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:05:27 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f061c002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:05:27 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v618: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct  8 06:05:28 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:05:28 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f06380037a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:05:28 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:05:28 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:05:28 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:05:28.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:05:28 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:05:28 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  8 06:05:28 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:05:28 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:05:28 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:05:28.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:05:28 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:05:28 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0628003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:05:28 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:05:29 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:05:29 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f064c00a7e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:05:29 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v619: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Oct  8 06:05:30 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:05:30 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f061c002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:05:30 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:05:30 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:05:30 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:05:30.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:05:30 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:05:30 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:05:30 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:05:30.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:05:30 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:05:30 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f06380037a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:05:31 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:05:31 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  8 06:05:31 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:05:31 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 06:05:31 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:05:31 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 06:05:31 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:05:31 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0628003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:05:31 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v620: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Oct  8 06:05:32 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:05:32 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f064c00a7e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:05:32 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:05:32 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:05:32 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:05:32.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:05:32 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:05:32 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:05:32 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:05:32.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:05:32 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:05:32 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f061c002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:05:32 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  8 06:05:32 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  8 06:05:33 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:05:33 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f06380037a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:05:33 np0005475493 podman[263593]: 2025-10-08 10:05:33.894850468 +0000 UTC m=+0.052269898 container health_status 96c17e36c3e57b043a764fc4d64e20610b8683e4881cc1857643eca7aeb37784 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, managed_by=edpm_ansible)
Oct  8 06:05:33 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:05:33 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v621: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 938 B/s wr, 3 op/s
Oct  8 06:05:34 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:05:34 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0628003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:05:34 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:05:34 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:05:34 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:05:34.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:05:34 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:05:34 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct  8 06:05:34 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:05:34 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:05:34 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:05:34.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:05:34 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:05:34 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f064c00a7e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:05:34 np0005475493 nova_compute[262220]: 2025-10-08 10:05:34.887 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:05:34 np0005475493 nova_compute[262220]: 2025-10-08 10:05:34.888 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:05:34 np0005475493 nova_compute[262220]: 2025-10-08 10:05:34.888 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  8 06:05:34 np0005475493 nova_compute[262220]: 2025-10-08 10:05:34.888 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  8 06:05:34 np0005475493 nova_compute[262220]: 2025-10-08 10:05:34.907 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  8 06:05:34 np0005475493 nova_compute[262220]: 2025-10-08 10:05:34.907 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:05:34 np0005475493 nova_compute[262220]: 2025-10-08 10:05:34.907 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:05:34 np0005475493 nova_compute[262220]: 2025-10-08 10:05:34.907 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:05:34 np0005475493 nova_compute[262220]: 2025-10-08 10:05:34.908 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:05:34 np0005475493 nova_compute[262220]: 2025-10-08 10:05:34.908 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:05:34 np0005475493 nova_compute[262220]: 2025-10-08 10:05:34.908 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:05:34 np0005475493 nova_compute[262220]: 2025-10-08 10:05:34.908 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  8 06:05:34 np0005475493 nova_compute[262220]: 2025-10-08 10:05:34.908 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:05:34 np0005475493 nova_compute[262220]: 2025-10-08 10:05:34.930 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  8 06:05:34 np0005475493 nova_compute[262220]: 2025-10-08 10:05:34.931 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  8 06:05:34 np0005475493 nova_compute[262220]: 2025-10-08 10:05:34.933 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  8 06:05:34 np0005475493 nova_compute[262220]: 2025-10-08 10:05:34.933 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  8 06:05:34 np0005475493 nova_compute[262220]: 2025-10-08 10:05:34.933 2 DEBUG oslo_concurrency.processutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  8 06:05:35 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct  8 06:05:35 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3837090167' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  8 06:05:35 np0005475493 nova_compute[262220]: 2025-10-08 10:05:35.411 2 DEBUG oslo_concurrency.processutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  8 06:05:35 np0005475493 nova_compute[262220]: 2025-10-08 10:05:35.580 2 WARNING nova.virt.libvirt.driver [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  8 06:05:35 np0005475493 nova_compute[262220]: 2025-10-08 10:05:35.581 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4892MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  8 06:05:35 np0005475493 nova_compute[262220]: 2025-10-08 10:05:35.581 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  8 06:05:35 np0005475493 nova_compute[262220]: 2025-10-08 10:05:35.581 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  8 06:05:35 np0005475493 nova_compute[262220]: 2025-10-08 10:05:35.697 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  8 06:05:35 np0005475493 nova_compute[262220]: 2025-10-08 10:05:35.697 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  8 06:05:35 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:05:35] "GET /metrics HTTP/1.1" 200 48342 "" "Prometheus/2.51.0"
Oct  8 06:05:35 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:05:35] "GET /metrics HTTP/1.1" 200 48342 "" "Prometheus/2.51.0"
Oct  8 06:05:35 np0005475493 nova_compute[262220]: 2025-10-08 10:05:35.740 2 DEBUG oslo_concurrency.processutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  8 06:05:35 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:05:35 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f061c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:05:35 np0005475493 podman[263638]: 2025-10-08 10:05:35.895920606 +0000 UTC m=+0.056118731 container health_status 1ece418ccdf19a8019e8be8e665c938dc3c333817a211973536e2416f095c311 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, container_name=multipathd, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Oct  8 06:05:35 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v622: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 938 B/s wr, 3 op/s
Oct  8 06:05:36 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:05:36 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f064c00a7e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:05:36 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:05:36 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:05:36 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:05:36.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:05:36 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct  8 06:05:36 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2055616685' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  8 06:05:36 np0005475493 nova_compute[262220]: 2025-10-08 10:05:36.248 2 DEBUG oslo_concurrency.processutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.508s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  8 06:05:36 np0005475493 nova_compute[262220]: 2025-10-08 10:05:36.253 2 DEBUG nova.compute.provider_tree [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Inventory has not changed in ProviderTree for provider: 62e4b021-d3ae-43f9-883d-805e2c7d21a2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  8 06:05:36 np0005475493 nova_compute[262220]: 2025-10-08 10:05:36.279 2 DEBUG nova.scheduler.client.report [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Inventory has not changed for provider 62e4b021-d3ae-43f9-883d-805e2c7d21a2 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  8 06:05:36 np0005475493 nova_compute[262220]: 2025-10-08 10:05:36.280 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  8 06:05:36 np0005475493 nova_compute[262220]: 2025-10-08 10:05:36.281 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.699s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  8 06:05:36 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:05:36 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:05:36 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:05:36.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:05:36 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:05:36 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0620000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:05:37 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:05:37.090Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct  8 06:05:37 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:05:37.091Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct  8 06:05:37 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:05:37.091Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:05:37 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:05:37 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0610000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:05:37 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v623: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 938 B/s wr, 3 op/s
Oct  8 06:05:38 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:05:38 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f061c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:05:38 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:05:38 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 06:05:38 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:05:38.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 06:05:38 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:05:38 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:05:38 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:05:38.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:05:38 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:05:38 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f064c00a7e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:05:38 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:05:39 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:05:39 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f06200016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:05:39 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v624: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.6 KiB/s rd, 1023 B/s wr, 3 op/s
Oct  8 06:05:40 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:05:40 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f06100016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:05:40 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:05:40 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:05:40 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:05:40.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:05:40 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:05:40 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:05:40 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:05:40.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:05:40 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:05:40 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f061c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:05:40 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-haproxy-nfs-cephfs-compute-0-cwhopp[96588]: [WARNING] 280/100540 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 1ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct  8 06:05:41 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:05:41 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f064c00a7e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:05:41 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v625: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Oct  8 06:05:42 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:05:42 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f06200016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:05:42 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:05:42 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:05:42 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:05:42.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:05:42 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:05:42 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:05:42 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:05:42.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:05:42 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:05:42 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f06100016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:05:43 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:05:43 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f061c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:05:43 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:05:43 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v626: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Oct  8 06:05:44 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:05:44 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f064c00a7e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:05:44 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:05:44 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:05:44 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:05:44.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:05:44 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:05:44 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:05:44 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:05:44.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:05:44 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:05:44 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f06200016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:05:44 np0005475493 podman[263689]: 2025-10-08 10:05:44.905215401 +0000 UTC m=+0.061809504 container health_status 2ac58bf9d2c7421f24478941b0f23af12cc30298ac84467db16bf52ecb1157c3 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct  8 06:05:45 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:05:45] "GET /metrics HTTP/1.1" 200 48342 "" "Prometheus/2.51.0"
Oct  8 06:05:45 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:05:45] "GET /metrics HTTP/1.1" 200 48342 "" "Prometheus/2.51.0"
Oct  8 06:05:45 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:05:45 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f06100016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:05:45 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v627: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Oct  8 06:05:46 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:05:46 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f061c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:05:46 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:05:46 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:05:46 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:05:46.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:05:46 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:05:46 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 06:05:46 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:05:46.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 06:05:46 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:05:46 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f064c00a7e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:05:47 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:05:47.092Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:05:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] Optimize plan auto_2025-10-08_10:05:47
Oct  8 06:05:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  8 06:05:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] do_upmap
Oct  8 06:05:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] pools ['images', '.nfs', 'vms', 'cephfs.cephfs.data', 'default.rgw.log', '.rgw.root', 'default.rgw.control', 'cephfs.cephfs.meta', 'backups', 'default.rgw.meta', 'volumes', '.mgr']
Oct  8 06:05:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] prepared 0/10 upmap changes
Oct  8 06:05:47 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  8 06:05:47 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  8 06:05:47 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:05:47 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0620002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:05:47 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 06:05:47 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 06:05:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] _maybe_adjust
Oct  8 06:05:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:05:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  8 06:05:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:05:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 06:05:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:05:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 06:05:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:05:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 06:05:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:05:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 06:05:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:05:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  8 06:05:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:05:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 06:05:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:05:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct  8 06:05:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:05:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  8 06:05:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:05:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 06:05:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:05:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  8 06:05:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:05:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  8 06:05:47 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v628: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Oct  8 06:05:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  8 06:05:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  8 06:05:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  8 06:05:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  8 06:05:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  8 06:05:48 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:05:48 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0610002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:05:48 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:05:48 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:05:48 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:05:48.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:05:48 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 06:05:48 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 06:05:48 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 06:05:48 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 06:05:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  8 06:05:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  8 06:05:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  8 06:05:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  8 06:05:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  8 06:05:48 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:05:48 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:05:48 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:05:48.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:05:48 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:05:48 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f061c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:05:48 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:05:49 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:05:49 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f064c00a7e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:05:49 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v629: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Oct  8 06:05:50 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:05:50 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0620002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:05:50 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:05:50 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:05:50 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:05:50.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:05:50 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:05:50 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:05:50 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:05:50.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:05:50 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:05:50 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0610002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:05:51 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:05:51 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f061c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:05:51 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v630: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct  8 06:05:52 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:05:52 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f064c00a7e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:05:52 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:05:52 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:05:52 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:05:52.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:05:52 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:05:52 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:05:52 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:05:52.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:05:52 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:05:52 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0620002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:05:53 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:05:53 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0610002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:05:53 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:05:53 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v631: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 7.6 KiB/s rd, 0 B/s wr, 12 op/s
Oct  8 06:05:54 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:05:54 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f061c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:05:54 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:05:54 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 06:05:54 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:05:54.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 06:05:54 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:05:54 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:05:54 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:05:54.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:05:54 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:05:54 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f064c00a7e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:05:55 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:05:55] "GET /metrics HTTP/1.1" 200 48342 "" "Prometheus/2.51.0"
Oct  8 06:05:55 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:05:55] "GET /metrics HTTP/1.1" 200 48342 "" "Prometheus/2.51.0"
Oct  8 06:05:55 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:05:55 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0620003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:05:55 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v632: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 7.6 KiB/s rd, 0 B/s wr, 12 op/s
Oct  8 06:05:56 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:05:56 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0610003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:05:56 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:05:56 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:05:56 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:05:56.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:05:56 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:05:56 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:05:56 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:05:56.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:05:56 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:05:56 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f061c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:05:57 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:05:57.093Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct  8 06:05:57 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:05:57.093Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct  8 06:05:57 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:05:57.093Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct  8 06:05:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:05:57.404 163175 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  8 06:05:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:05:57.404 163175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  8 06:05:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:05:57.404 163175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  8 06:05:57 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:05:57 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f064c00a7e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:05:57 np0005475493 podman[263749]: 2025-10-08 10:05:57.925577639 +0000 UTC m=+0.078096947 container health_status 750e81e9592502f2fa35d047c8ae9bd75eb38ed415148db558454f5281397e7f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Oct  8 06:05:57 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v633: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 7.6 KiB/s rd, 0 B/s wr, 12 op/s
Oct  8 06:05:58 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:05:58 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0620003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:05:58 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:05:58 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:05:58 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:05:58.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:05:58 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:05:58 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:05:58 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:05:58.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:05:58 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:05:58 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0610003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:05:58 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:05:59 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:05:59 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f061c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:05:59 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v634: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 97 KiB/s rd, 0 B/s wr, 161 op/s
Oct  8 06:06:00 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:06:00 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f064c00a7e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:06:00 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:06:00 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:06:00 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:06:00.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:06:00 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:06:00 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:06:00 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:06:00.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:06:00 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:06:00 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0620003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:06:01 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.24538 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Oct  8 06:06:01 np0005475493 ceph-mgr[73869]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Oct  8 06:06:01 np0005475493 ceph-mgr[73869]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Oct  8 06:06:01 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.24544 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Oct  8 06:06:01 np0005475493 ceph-mgr[73869]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Oct  8 06:06:01 np0005475493 ceph-mgr[73869]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Oct  8 06:06:01 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.24538 -' entity='client.openstack' cmd=[{"prefix": "nfs cluster info", "cluster_id": "cephfs", "format": "json"}]: dispatch
Oct  8 06:06:01 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:06:01 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0610003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:06:01 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v635: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 97 KiB/s rd, 0 B/s wr, 161 op/s
Oct  8 06:06:02 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:06:02 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f061c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:06:02 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:06:02 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:06:02 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:06:02.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:06:02 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:06:02 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:06:02 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:06:02.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:06:02 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:06:02 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f064c00a7e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:06:02 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  8 06:06:02 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  8 06:06:03 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:06:03 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0620003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:06:03 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:06:03 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v636: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 97 KiB/s rd, 0 B/s wr, 161 op/s
Oct  8 06:06:04 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:06:04 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0610003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:06:04 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:06:04 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:06:04 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:06:04.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:06:04 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:06:04 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 06:06:04 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:06:04.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 06:06:04 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:06:04 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f061c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:06:04 np0005475493 podman[263783]: 2025-10-08 10:06:04.895965014 +0000 UTC m=+0.051077710 container health_status 96c17e36c3e57b043a764fc4d64e20610b8683e4881cc1857643eca7aeb37784 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2)
Oct  8 06:06:05 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:06:05] "GET /metrics HTTP/1.1" 200 48346 "" "Prometheus/2.51.0"
Oct  8 06:06:05 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:06:05] "GET /metrics HTTP/1.1" 200 48346 "" "Prometheus/2.51.0"
Oct  8 06:06:05 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:06:05 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f064c00a7e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:06:05 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v637: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 90 KiB/s rd, 0 B/s wr, 149 op/s
Oct  8 06:06:06 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:06:06 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0620003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:06:06 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:06:06 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:06:06 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:06:06.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:06:06 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:06:06 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:06:06 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:06:06.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:06:06 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:06:06 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0610003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:06:06 np0005475493 podman[263829]: 2025-10-08 10:06:06.888973083 +0000 UTC m=+0.051283606 container health_status 1ece418ccdf19a8019e8be8e665c938dc3c333817a211973536e2416f095c311 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd)
Oct  8 06:06:07 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:06:07.094Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct  8 06:06:07 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:06:07.095Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct  8 06:06:07 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:06:07 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f061c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:06:07 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v638: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 90 KiB/s rd, 0 B/s wr, 149 op/s
Oct  8 06:06:08 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:06:08 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f064c00a7e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:06:08 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:06:08 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:06:08 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:06:08.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:06:08 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:06:08 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:06:08 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:06:08.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:06:08 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:06:08 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0638000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:06:08 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:06:09 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:06:09 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0628002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:06:09 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v639: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 90 KiB/s rd, 0 B/s wr, 149 op/s
Oct  8 06:06:10 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:06:10 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f061c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:06:10 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:06:10 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:06:10 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:06:10.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:06:10 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:06:10 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:06:10 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:06:10.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:06:10 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:06:10 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f064c00a7e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:06:11 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:06:11 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f064c00a7e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:06:11 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v640: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct  8 06:06:12 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:06:12 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0628002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:06:12 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:06:12 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:06:12 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:06:12.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:06:12 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:06:12 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:06:12 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:06:12.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:06:12 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:06:12 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f061c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:06:13 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:06:13 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0638001c30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:06:13 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:06:13 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v641: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct  8 06:06:14 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:06:14 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f064c00a7e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:06:14 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:06:14 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:06:14 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:06:14.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:06:14 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:06:14 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:06:14 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:06:14.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:06:14 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:06:14 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0628002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:06:15 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:06:15] "GET /metrics HTTP/1.1" 200 48346 "" "Prometheus/2.51.0"
Oct  8 06:06:15 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:06:15] "GET /metrics HTTP/1.1" 200 48346 "" "Prometheus/2.51.0"
Oct  8 06:06:15 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:06:15 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f061c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:06:15 np0005475493 podman[263860]: 2025-10-08 10:06:15.893459253 +0000 UTC m=+0.058306581 container health_status 2ac58bf9d2c7421f24478941b0f23af12cc30298ac84467db16bf52ecb1157c3 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3)
Oct  8 06:06:15 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v642: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct  8 06:06:16 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:06:16 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0638001c30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:06:16 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:06:16 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:06:16 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:06:16.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:06:16 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:06:16 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:06:16 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:06:16.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:06:16 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:06:16 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f064c00a7e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:06:17 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:06:17.095Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct  8 06:06:17 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:06:17.096Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct  8 06:06:17 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  8 06:06:17 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  8 06:06:17 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:06:17 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0628002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:06:17 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 06:06:17 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 06:06:17 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v643: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct  8 06:06:18 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:06:18 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f061c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:06:18 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 06:06:18 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 06:06:18 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 06:06:18 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 06:06:18 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:06:18 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:06:18 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:06:18.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:06:18 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:06:18 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:06:18 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:06:18.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:06:18 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:06:18 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0638002940 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:06:18 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:06:19 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:06:19 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f064c00a7e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:06:19 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v644: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Oct  8 06:06:20 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:06:20 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0628002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:06:20 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:06:20 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:06:20 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:06:20.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:06:20 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:06:20 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:06:20 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:06:20.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:06:20 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:06:20 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f061c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:06:21 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:06:21 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f061c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:06:21 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v645: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct  8 06:06:22 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:06:22 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f064c00a7e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:06:22 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:06:22 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:06:22 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:06:22.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:06:22 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:06:22 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:06:22 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:06:22.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:06:22 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:06:22 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0628003820 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:06:22 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "version", "format": "json"} v 0)
Oct  8 06:06:22 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3992189617' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Oct  8 06:06:22 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "version", "format": "json"} v 0)
Oct  8 06:06:22 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/893782256' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Oct  8 06:06:23 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.15084 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Oct  8 06:06:23 np0005475493 ceph-mgr[73869]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Oct  8 06:06:23 np0005475493 ceph-mgr[73869]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Oct  8 06:06:23 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.15081 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Oct  8 06:06:23 np0005475493 ceph-mgr[73869]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Oct  8 06:06:23 np0005475493 ceph-mgr[73869]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Oct  8 06:06:23 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.15081 -' entity='client.openstack' cmd=[{"prefix": "nfs cluster info", "cluster_id": "cephfs", "format": "json"}]: dispatch
Oct  8 06:06:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:06:23 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0628003820 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:06:23 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:06:23 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v646: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct  8 06:06:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:06:24 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0638002940 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:06:24 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:06:24 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:06:24 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:06:24.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:06:24 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:06:24 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:06:24 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:06:24.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:06:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:06:24 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f064c00a7e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:06:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:06:25] "GET /metrics HTTP/1.1" 200 48340 "" "Prometheus/2.51.0"
Oct  8 06:06:25 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:06:25] "GET /metrics HTTP/1.1" 200 48340 "" "Prometheus/2.51.0"
Oct  8 06:06:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:06:25 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f061c003db0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:06:25 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v647: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct  8 06:06:26 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:06:26 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0628003820 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:06:26 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:06:26 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:06:26 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:06:26.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:06:26 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  8 06:06:26 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  8 06:06:26 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct  8 06:06:26 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  8 06:06:26 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct  8 06:06:26 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:06:26 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:06:26 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:06:26.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:06:26 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:06:26 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:06:26 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0638003650 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:06:26 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct  8 06:06:26 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:06:26 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct  8 06:06:26 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  8 06:06:26 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct  8 06:06:26 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  8 06:06:26 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  8 06:06:26 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  8 06:06:27 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:06:27.096Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:06:27 np0005475493 podman[264087]: 2025-10-08 10:06:27.275153982 +0000 UTC m=+0.040138149 container create 417aa23a52f0a0770553d03a17077772ce6462c4daaf872be5e4b7fa43aa03de (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_gagarin, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  8 06:06:27 np0005475493 systemd[1]: Started libpod-conmon-417aa23a52f0a0770553d03a17077772ce6462c4daaf872be5e4b7fa43aa03de.scope.
Oct  8 06:06:27 np0005475493 systemd[1]: Started libcrun container.
Oct  8 06:06:27 np0005475493 podman[264087]: 2025-10-08 10:06:27.350497399 +0000 UTC m=+0.115481596 container init 417aa23a52f0a0770553d03a17077772ce6462c4daaf872be5e4b7fa43aa03de (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_gagarin, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct  8 06:06:27 np0005475493 podman[264087]: 2025-10-08 10:06:27.258166067 +0000 UTC m=+0.023150254 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 06:06:27 np0005475493 podman[264087]: 2025-10-08 10:06:27.359300561 +0000 UTC m=+0.124284728 container start 417aa23a52f0a0770553d03a17077772ce6462c4daaf872be5e4b7fa43aa03de (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_gagarin, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  8 06:06:27 np0005475493 podman[264087]: 2025-10-08 10:06:27.362775823 +0000 UTC m=+0.127759990 container attach 417aa23a52f0a0770553d03a17077772ce6462c4daaf872be5e4b7fa43aa03de (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_gagarin, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct  8 06:06:27 np0005475493 epic_gagarin[264103]: 167 167
Oct  8 06:06:27 np0005475493 systemd[1]: libpod-417aa23a52f0a0770553d03a17077772ce6462c4daaf872be5e4b7fa43aa03de.scope: Deactivated successfully.
Oct  8 06:06:27 np0005475493 podman[264087]: 2025-10-08 10:06:27.366105729 +0000 UTC m=+0.131089916 container died 417aa23a52f0a0770553d03a17077772ce6462c4daaf872be5e4b7fa43aa03de (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_gagarin, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Oct  8 06:06:27 np0005475493 systemd[1]: var-lib-containers-storage-overlay-19336887525d6be1e4060a1397bae3f6408a2701bdba3a3798676f8767cb7f7d-merged.mount: Deactivated successfully.
Oct  8 06:06:27 np0005475493 podman[264087]: 2025-10-08 10:06:27.407943242 +0000 UTC m=+0.172927409 container remove 417aa23a52f0a0770553d03a17077772ce6462c4daaf872be5e4b7fa43aa03de (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_gagarin, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Oct  8 06:06:27 np0005475493 systemd[1]: libpod-conmon-417aa23a52f0a0770553d03a17077772ce6462c4daaf872be5e4b7fa43aa03de.scope: Deactivated successfully.
Oct  8 06:06:27 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  8 06:06:27 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:06:27 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:06:27 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  8 06:06:27 np0005475493 podman[264127]: 2025-10-08 10:06:27.5715201 +0000 UTC m=+0.047448444 container create 581267be4aa99fa175d2d69a0996b7688f07a22e7a627fb8735ba876e8767c62 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_einstein, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Oct  8 06:06:27 np0005475493 systemd[1]: Started libpod-conmon-581267be4aa99fa175d2d69a0996b7688f07a22e7a627fb8735ba876e8767c62.scope.
Oct  8 06:06:27 np0005475493 systemd[1]: Started libcrun container.
Oct  8 06:06:27 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18327363a634110020b784116267b183437d4a703e77063c3a60898aac668c7c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  8 06:06:27 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18327363a634110020b784116267b183437d4a703e77063c3a60898aac668c7c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 06:06:27 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18327363a634110020b784116267b183437d4a703e77063c3a60898aac668c7c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 06:06:27 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18327363a634110020b784116267b183437d4a703e77063c3a60898aac668c7c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  8 06:06:27 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18327363a634110020b784116267b183437d4a703e77063c3a60898aac668c7c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  8 06:06:27 np0005475493 podman[264127]: 2025-10-08 10:06:27.546872559 +0000 UTC m=+0.022800923 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 06:06:27 np0005475493 podman[264127]: 2025-10-08 10:06:27.683360088 +0000 UTC m=+0.159288432 container init 581267be4aa99fa175d2d69a0996b7688f07a22e7a627fb8735ba876e8767c62 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_einstein, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  8 06:06:27 np0005475493 podman[264127]: 2025-10-08 10:06:27.68996759 +0000 UTC m=+0.165895934 container start 581267be4aa99fa175d2d69a0996b7688f07a22e7a627fb8735ba876e8767c62 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_einstein, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default)
Oct  8 06:06:27 np0005475493 podman[264127]: 2025-10-08 10:06:27.695006941 +0000 UTC m=+0.170935335 container attach 581267be4aa99fa175d2d69a0996b7688f07a22e7a627fb8735ba876e8767c62 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_einstein, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Oct  8 06:06:27 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:06:27 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f064c00a7e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:06:27 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v648: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct  8 06:06:28 np0005475493 unruffled_einstein[264143]: --> passed data devices: 0 physical, 1 LVM
Oct  8 06:06:28 np0005475493 unruffled_einstein[264143]: --> All data devices are unavailable
Oct  8 06:06:28 np0005475493 systemd[1]: libpod-581267be4aa99fa175d2d69a0996b7688f07a22e7a627fb8735ba876e8767c62.scope: Deactivated successfully.
Oct  8 06:06:28 np0005475493 podman[264127]: 2025-10-08 10:06:28.039822734 +0000 UTC m=+0.515751078 container died 581267be4aa99fa175d2d69a0996b7688f07a22e7a627fb8735ba876e8767c62 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_einstein, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  8 06:06:28 np0005475493 systemd[1]: var-lib-containers-storage-overlay-18327363a634110020b784116267b183437d4a703e77063c3a60898aac668c7c-merged.mount: Deactivated successfully.
Oct  8 06:06:28 np0005475493 podman[264127]: 2025-10-08 10:06:28.127262329 +0000 UTC m=+0.603190673 container remove 581267be4aa99fa175d2d69a0996b7688f07a22e7a627fb8735ba876e8767c62 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_einstein, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2)
Oct  8 06:06:28 np0005475493 systemd[1]: libpod-conmon-581267be4aa99fa175d2d69a0996b7688f07a22e7a627fb8735ba876e8767c62.scope: Deactivated successfully.
Oct  8 06:06:28 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:06:28 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f061c003db0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:06:28 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:06:28 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:06:28 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:06:28.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:06:28 np0005475493 podman[264159]: 2025-10-08 10:06:28.201505761 +0000 UTC m=+0.134980452 container health_status 750e81e9592502f2fa35d047c8ae9bd75eb38ed415148db558454f5281397e7f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  8 06:06:28 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:06:28 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:06:28 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:06:28.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:06:28 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:06:28 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0628003820 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:06:28 np0005475493 podman[264287]: 2025-10-08 10:06:28.755316118 +0000 UTC m=+0.061661299 container create efe574fa69aceea763b4c01bcc33620c82828ebc9f3b423a35d369a46e207628 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_fermi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  8 06:06:28 np0005475493 systemd[1]: Started libpod-conmon-efe574fa69aceea763b4c01bcc33620c82828ebc9f3b423a35d369a46e207628.scope.
Oct  8 06:06:28 np0005475493 podman[264287]: 2025-10-08 10:06:28.720795191 +0000 UTC m=+0.027140382 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 06:06:28 np0005475493 systemd[1]: Started libcrun container.
Oct  8 06:06:28 np0005475493 podman[264287]: 2025-10-08 10:06:28.879557984 +0000 UTC m=+0.185903175 container init efe574fa69aceea763b4c01bcc33620c82828ebc9f3b423a35d369a46e207628 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_fermi, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  8 06:06:28 np0005475493 podman[264287]: 2025-10-08 10:06:28.887279032 +0000 UTC m=+0.193624193 container start efe574fa69aceea763b4c01bcc33620c82828ebc9f3b423a35d369a46e207628 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_fermi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0)
Oct  8 06:06:28 np0005475493 recursing_fermi[264302]: 167 167
Oct  8 06:06:28 np0005475493 systemd[1]: libpod-efe574fa69aceea763b4c01bcc33620c82828ebc9f3b423a35d369a46e207628.scope: Deactivated successfully.
Oct  8 06:06:28 np0005475493 podman[264287]: 2025-10-08 10:06:28.928499104 +0000 UTC m=+0.234844265 container attach efe574fa69aceea763b4c01bcc33620c82828ebc9f3b423a35d369a46e207628 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_fermi, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct  8 06:06:28 np0005475493 podman[264287]: 2025-10-08 10:06:28.92898506 +0000 UTC m=+0.235330221 container died efe574fa69aceea763b4c01bcc33620c82828ebc9f3b423a35d369a46e207628 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_fermi, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  8 06:06:28 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:06:29 np0005475493 systemd[1]: var-lib-containers-storage-overlay-c5cd42e7e32a274b4d17ef5e4d76f6f4c1808a20708453209bd88a1635254051-merged.mount: Deactivated successfully.
Oct  8 06:06:29 np0005475493 podman[264287]: 2025-10-08 10:06:29.082635859 +0000 UTC m=+0.388981020 container remove efe574fa69aceea763b4c01bcc33620c82828ebc9f3b423a35d369a46e207628 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_fermi, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Oct  8 06:06:29 np0005475493 systemd[1]: libpod-conmon-efe574fa69aceea763b4c01bcc33620c82828ebc9f3b423a35d369a46e207628.scope: Deactivated successfully.
Oct  8 06:06:29 np0005475493 podman[264328]: 2025-10-08 10:06:29.219382627 +0000 UTC m=+0.022717440 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 06:06:29 np0005475493 podman[264328]: 2025-10-08 10:06:29.374275005 +0000 UTC m=+0.177609768 container create bf92be93432c516bb3b6ba8ab1740c7307af394536aa060bbaf36d8140726315 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_kepler, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  8 06:06:29 np0005475493 systemd[1]: Started libpod-conmon-bf92be93432c516bb3b6ba8ab1740c7307af394536aa060bbaf36d8140726315.scope.
Oct  8 06:06:29 np0005475493 systemd[1]: Started libcrun container.
Oct  8 06:06:29 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0215fd2bda372d4972d92d0a42c3001195cb38f80a57bd3552e8b8eda85ffd6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  8 06:06:29 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0215fd2bda372d4972d92d0a42c3001195cb38f80a57bd3552e8b8eda85ffd6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 06:06:29 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0215fd2bda372d4972d92d0a42c3001195cb38f80a57bd3552e8b8eda85ffd6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 06:06:29 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0215fd2bda372d4972d92d0a42c3001195cb38f80a57bd3552e8b8eda85ffd6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  8 06:06:29 np0005475493 podman[264328]: 2025-10-08 10:06:29.497422746 +0000 UTC m=+0.300757539 container init bf92be93432c516bb3b6ba8ab1740c7307af394536aa060bbaf36d8140726315 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_kepler, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Oct  8 06:06:29 np0005475493 podman[264328]: 2025-10-08 10:06:29.505168985 +0000 UTC m=+0.308503748 container start bf92be93432c516bb3b6ba8ab1740c7307af394536aa060bbaf36d8140726315 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_kepler, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True)
Oct  8 06:06:29 np0005475493 podman[264328]: 2025-10-08 10:06:29.551364697 +0000 UTC m=+0.354699500 container attach bf92be93432c516bb3b6ba8ab1740c7307af394536aa060bbaf36d8140726315 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_kepler, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct  8 06:06:29 np0005475493 quizzical_kepler[264344]: {
Oct  8 06:06:29 np0005475493 quizzical_kepler[264344]:    "1": [
Oct  8 06:06:29 np0005475493 quizzical_kepler[264344]:        {
Oct  8 06:06:29 np0005475493 quizzical_kepler[264344]:            "devices": [
Oct  8 06:06:29 np0005475493 quizzical_kepler[264344]:                "/dev/loop3"
Oct  8 06:06:29 np0005475493 quizzical_kepler[264344]:            ],
Oct  8 06:06:29 np0005475493 quizzical_kepler[264344]:            "lv_name": "ceph_lv0",
Oct  8 06:06:29 np0005475493 quizzical_kepler[264344]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  8 06:06:29 np0005475493 quizzical_kepler[264344]:            "lv_size": "21470642176",
Oct  8 06:06:29 np0005475493 quizzical_kepler[264344]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=787292cc-8154-50c4-9e00-e9be3e817149,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=85fe3e7b-5e0f-4a19-934c-310215b2e933,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct  8 06:06:29 np0005475493 quizzical_kepler[264344]:            "lv_uuid": "znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj",
Oct  8 06:06:29 np0005475493 quizzical_kepler[264344]:            "name": "ceph_lv0",
Oct  8 06:06:29 np0005475493 quizzical_kepler[264344]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  8 06:06:29 np0005475493 quizzical_kepler[264344]:            "tags": {
Oct  8 06:06:29 np0005475493 quizzical_kepler[264344]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  8 06:06:29 np0005475493 quizzical_kepler[264344]:                "ceph.block_uuid": "znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj",
Oct  8 06:06:29 np0005475493 quizzical_kepler[264344]:                "ceph.cephx_lockbox_secret": "",
Oct  8 06:06:29 np0005475493 quizzical_kepler[264344]:                "ceph.cluster_fsid": "787292cc-8154-50c4-9e00-e9be3e817149",
Oct  8 06:06:29 np0005475493 quizzical_kepler[264344]:                "ceph.cluster_name": "ceph",
Oct  8 06:06:29 np0005475493 quizzical_kepler[264344]:                "ceph.crush_device_class": "",
Oct  8 06:06:29 np0005475493 quizzical_kepler[264344]:                "ceph.encrypted": "0",
Oct  8 06:06:29 np0005475493 quizzical_kepler[264344]:                "ceph.osd_fsid": "85fe3e7b-5e0f-4a19-934c-310215b2e933",
Oct  8 06:06:29 np0005475493 quizzical_kepler[264344]:                "ceph.osd_id": "1",
Oct  8 06:06:29 np0005475493 quizzical_kepler[264344]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  8 06:06:29 np0005475493 quizzical_kepler[264344]:                "ceph.type": "block",
Oct  8 06:06:29 np0005475493 quizzical_kepler[264344]:                "ceph.vdo": "0",
Oct  8 06:06:29 np0005475493 quizzical_kepler[264344]:                "ceph.with_tpm": "0"
Oct  8 06:06:29 np0005475493 quizzical_kepler[264344]:            },
Oct  8 06:06:29 np0005475493 quizzical_kepler[264344]:            "type": "block",
Oct  8 06:06:29 np0005475493 quizzical_kepler[264344]:            "vg_name": "ceph_vg0"
Oct  8 06:06:29 np0005475493 quizzical_kepler[264344]:        }
Oct  8 06:06:29 np0005475493 quizzical_kepler[264344]:    ]
Oct  8 06:06:29 np0005475493 quizzical_kepler[264344]: }
Oct  8 06:06:29 np0005475493 systemd[1]: libpod-bf92be93432c516bb3b6ba8ab1740c7307af394536aa060bbaf36d8140726315.scope: Deactivated successfully.
Oct  8 06:06:29 np0005475493 podman[264328]: 2025-10-08 10:06:29.788377471 +0000 UTC m=+0.591712234 container died bf92be93432c516bb3b6ba8ab1740c7307af394536aa060bbaf36d8140726315 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_kepler, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  8 06:06:29 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:06:29 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0638003650 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:06:29 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v649: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Oct  8 06:06:30 np0005475493 systemd[1]: var-lib-containers-storage-overlay-f0215fd2bda372d4972d92d0a42c3001195cb38f80a57bd3552e8b8eda85ffd6-merged.mount: Deactivated successfully.
Oct  8 06:06:30 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:06:30 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f064c00a7e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:06:30 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:06:30 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:06:30 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:06:30.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:06:30 np0005475493 podman[264328]: 2025-10-08 10:06:30.192537147 +0000 UTC m=+0.995871920 container remove bf92be93432c516bb3b6ba8ab1740c7307af394536aa060bbaf36d8140726315 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_kepler, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Oct  8 06:06:30 np0005475493 systemd[1]: libpod-conmon-bf92be93432c516bb3b6ba8ab1740c7307af394536aa060bbaf36d8140726315.scope: Deactivated successfully.
Oct  8 06:06:30 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:06:30 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:06:30 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:06:30.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:06:30 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:06:30 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f061c003db0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:06:30 np0005475493 podman[264456]: 2025-10-08 10:06:30.806289958 +0000 UTC m=+0.101974673 container create c00267f071e00aa655d53cb087e0a9cafe2d6786a647bd2ebc92e1e8ec8d54cb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_wu, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  8 06:06:30 np0005475493 podman[264456]: 2025-10-08 10:06:30.771174551 +0000 UTC m=+0.066859296 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 06:06:30 np0005475493 systemd[1]: Started libpod-conmon-c00267f071e00aa655d53cb087e0a9cafe2d6786a647bd2ebc92e1e8ec8d54cb.scope.
Oct  8 06:06:30 np0005475493 systemd[1]: Started libcrun container.
Oct  8 06:06:30 np0005475493 podman[264456]: 2025-10-08 10:06:30.967566941 +0000 UTC m=+0.263251696 container init c00267f071e00aa655d53cb087e0a9cafe2d6786a647bd2ebc92e1e8ec8d54cb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_wu, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  8 06:06:30 np0005475493 podman[264456]: 2025-10-08 10:06:30.976835909 +0000 UTC m=+0.272520634 container start c00267f071e00aa655d53cb087e0a9cafe2d6786a647bd2ebc92e1e8ec8d54cb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_wu, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  8 06:06:30 np0005475493 strange_wu[264472]: 167 167
Oct  8 06:06:30 np0005475493 systemd[1]: libpod-c00267f071e00aa655d53cb087e0a9cafe2d6786a647bd2ebc92e1e8ec8d54cb.scope: Deactivated successfully.
Oct  8 06:06:31 np0005475493 podman[264456]: 2025-10-08 10:06:31.081907099 +0000 UTC m=+0.377592004 container attach c00267f071e00aa655d53cb087e0a9cafe2d6786a647bd2ebc92e1e8ec8d54cb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_wu, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  8 06:06:31 np0005475493 podman[264456]: 2025-10-08 10:06:31.082523669 +0000 UTC m=+0.378208424 container died c00267f071e00aa655d53cb087e0a9cafe2d6786a647bd2ebc92e1e8ec8d54cb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_wu, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Oct  8 06:06:31 np0005475493 systemd[1]: var-lib-containers-storage-overlay-ba8cd11d1ae1d668a21246b39952e8380d16769782fcc3133dd6d150ba6afeeb-merged.mount: Deactivated successfully.
Oct  8 06:06:31 np0005475493 podman[264456]: 2025-10-08 10:06:31.344186835 +0000 UTC m=+0.639871560 container remove c00267f071e00aa655d53cb087e0a9cafe2d6786a647bd2ebc92e1e8ec8d54cb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_wu, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct  8 06:06:31 np0005475493 systemd[1]: libpod-conmon-c00267f071e00aa655d53cb087e0a9cafe2d6786a647bd2ebc92e1e8ec8d54cb.scope: Deactivated successfully.
Oct  8 06:06:31 np0005475493 podman[264497]: 2025-10-08 10:06:31.553647794 +0000 UTC m=+0.061207864 container create 7bcf36ec01925ca31a7669205114d5a332992df57acd69a6c2d3f608d5b94b57 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_williams, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct  8 06:06:31 np0005475493 podman[264497]: 2025-10-08 10:06:31.519142257 +0000 UTC m=+0.026702347 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 06:06:31 np0005475493 systemd[1]: Started libpod-conmon-7bcf36ec01925ca31a7669205114d5a332992df57acd69a6c2d3f608d5b94b57.scope.
Oct  8 06:06:31 np0005475493 systemd[1]: Started libcrun container.
Oct  8 06:06:31 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b791556e06a6048b8fa52894f58c4c642b05c140e7b370517321f25d71f85698/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  8 06:06:31 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b791556e06a6048b8fa52894f58c4c642b05c140e7b370517321f25d71f85698/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 06:06:31 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b791556e06a6048b8fa52894f58c4c642b05c140e7b370517321f25d71f85698/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 06:06:31 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b791556e06a6048b8fa52894f58c4c642b05c140e7b370517321f25d71f85698/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  8 06:06:31 np0005475493 podman[264497]: 2025-10-08 10:06:31.759771797 +0000 UTC m=+0.267331887 container init 7bcf36ec01925ca31a7669205114d5a332992df57acd69a6c2d3f608d5b94b57 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_williams, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True)
Oct  8 06:06:31 np0005475493 podman[264497]: 2025-10-08 10:06:31.767130932 +0000 UTC m=+0.274691002 container start 7bcf36ec01925ca31a7669205114d5a332992df57acd69a6c2d3f608d5b94b57 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_williams, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  8 06:06:31 np0005475493 podman[264497]: 2025-10-08 10:06:31.875522211 +0000 UTC m=+0.383082331 container attach 7bcf36ec01925ca31a7669205114d5a332992df57acd69a6c2d3f608d5b94b57 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_williams, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  8 06:06:31 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:06:31 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0628003820 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:06:31 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v650: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct  8 06:06:32 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:06:32 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0638003650 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:06:32 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:06:32 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:06:32 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:06:32.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:06:32 np0005475493 lvm[264589]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct  8 06:06:32 np0005475493 lvm[264589]: VG ceph_vg0 finished
Oct  8 06:06:32 np0005475493 nifty_williams[264514]: {}
Oct  8 06:06:32 np0005475493 systemd[1]: libpod-7bcf36ec01925ca31a7669205114d5a332992df57acd69a6c2d3f608d5b94b57.scope: Deactivated successfully.
Oct  8 06:06:32 np0005475493 systemd[1]: libpod-7bcf36ec01925ca31a7669205114d5a332992df57acd69a6c2d3f608d5b94b57.scope: Consumed 1.077s CPU time.
Oct  8 06:06:32 np0005475493 conmon[264514]: conmon 7bcf36ec01925ca31a76 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7bcf36ec01925ca31a7669205114d5a332992df57acd69a6c2d3f608d5b94b57.scope/container/memory.events
Oct  8 06:06:32 np0005475493 podman[264497]: 2025-10-08 10:06:32.47866254 +0000 UTC m=+0.986222610 container died 7bcf36ec01925ca31a7669205114d5a332992df57acd69a6c2d3f608d5b94b57 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_williams, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct  8 06:06:32 np0005475493 systemd[1]: var-lib-containers-storage-overlay-b791556e06a6048b8fa52894f58c4c642b05c140e7b370517321f25d71f85698-merged.mount: Deactivated successfully.
Oct  8 06:06:32 np0005475493 podman[264497]: 2025-10-08 10:06:32.530195604 +0000 UTC m=+1.037755684 container remove 7bcf36ec01925ca31a7669205114d5a332992df57acd69a6c2d3f608d5b94b57 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_williams, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  8 06:06:32 np0005475493 systemd[1]: libpod-conmon-7bcf36ec01925ca31a7669205114d5a332992df57acd69a6c2d3f608d5b94b57.scope: Deactivated successfully.
Oct  8 06:06:32 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  8 06:06:32 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:06:32 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  8 06:06:32 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:06:32 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:06:32 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:06:32 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:06:32.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:06:32 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:06:32 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f061c003db0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:06:32 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  8 06:06:32 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  8 06:06:33 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:06:33 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:06:33 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:06:33 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f064c00a7e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:06:33 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:06:33 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v651: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct  8 06:06:34 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:06:34 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0628003820 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:06:34 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:06:34 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:06:34 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:06:34.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:06:34 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:06:34 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:06:34 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:06:34.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:06:34 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:06:34 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0638003650 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:06:35 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:06:35] "GET /metrics HTTP/1.1" 200 48344 "" "Prometheus/2.51.0"
Oct  8 06:06:35 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:06:35] "GET /metrics HTTP/1.1" 200 48344 "" "Prometheus/2.51.0"
Oct  8 06:06:35 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:06:35 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f061c003db0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:06:35 np0005475493 podman[264634]: 2025-10-08 10:06:35.95414687 +0000 UTC m=+0.105335201 container health_status 96c17e36c3e57b043a764fc4d64e20610b8683e4881cc1857643eca7aeb37784 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_metadata_agent)
Oct  8 06:06:35 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v652: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct  8 06:06:36 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:06:36 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f064c00a7e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:06:36 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:06:36 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 06:06:36 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:06:36.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 06:06:36 np0005475493 nova_compute[262220]: 2025-10-08 10:06:36.274 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:06:36 np0005475493 nova_compute[262220]: 2025-10-08 10:06:36.275 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:06:36 np0005475493 nova_compute[262220]: 2025-10-08 10:06:36.294 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:06:36 np0005475493 nova_compute[262220]: 2025-10-08 10:06:36.295 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  8 06:06:36 np0005475493 nova_compute[262220]: 2025-10-08 10:06:36.295 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  8 06:06:36 np0005475493 nova_compute[262220]: 2025-10-08 10:06:36.308 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  8 06:06:36 np0005475493 nova_compute[262220]: 2025-10-08 10:06:36.308 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:06:36 np0005475493 nova_compute[262220]: 2025-10-08 10:06:36.309 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:06:36 np0005475493 nova_compute[262220]: 2025-10-08 10:06:36.309 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:06:36 np0005475493 nova_compute[262220]: 2025-10-08 10:06:36.309 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:06:36 np0005475493 nova_compute[262220]: 2025-10-08 10:06:36.309 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  8 06:06:36 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:06:36 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 06:06:36 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:06:36.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 06:06:36 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:06:36 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0628003820 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:06:36 np0005475493 nova_compute[262220]: 2025-10-08 10:06:36.886 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:06:36 np0005475493 nova_compute[262220]: 2025-10-08 10:06:36.886 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:06:36 np0005475493 nova_compute[262220]: 2025-10-08 10:06:36.886 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:06:36 np0005475493 nova_compute[262220]: 2025-10-08 10:06:36.913 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  8 06:06:36 np0005475493 nova_compute[262220]: 2025-10-08 10:06:36.913 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  8 06:06:36 np0005475493 nova_compute[262220]: 2025-10-08 10:06:36.914 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  8 06:06:36 np0005475493 nova_compute[262220]: 2025-10-08 10:06:36.914 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  8 06:06:36 np0005475493 nova_compute[262220]: 2025-10-08 10:06:36.914 2 DEBUG oslo_concurrency.processutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  8 06:06:37 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:06:37.097Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:06:37 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct  8 06:06:37 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4207116258' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  8 06:06:37 np0005475493 nova_compute[262220]: 2025-10-08 10:06:37.376 2 DEBUG oslo_concurrency.processutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  8 06:06:37 np0005475493 nova_compute[262220]: 2025-10-08 10:06:37.595 2 WARNING nova.virt.libvirt.driver [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  8 06:06:37 np0005475493 nova_compute[262220]: 2025-10-08 10:06:37.597 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4898MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  8 06:06:37 np0005475493 nova_compute[262220]: 2025-10-08 10:06:37.597 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  8 06:06:37 np0005475493 nova_compute[262220]: 2025-10-08 10:06:37.597 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  8 06:06:37 np0005475493 nova_compute[262220]: 2025-10-08 10:06:37.691 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  8 06:06:37 np0005475493 nova_compute[262220]: 2025-10-08 10:06:37.691 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  8 06:06:37 np0005475493 nova_compute[262220]: 2025-10-08 10:06:37.721 2 DEBUG oslo_concurrency.processutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  8 06:06:37 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:06:37 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0638003650 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:06:37 np0005475493 podman[264680]: 2025-10-08 10:06:37.903945663 +0000 UTC m=+0.065094749 container health_status 1ece418ccdf19a8019e8be8e665c938dc3c333817a211973536e2416f095c311 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  8 06:06:37 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v653: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct  8 06:06:38 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct  8 06:06:38 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1440575470' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  8 06:06:38 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:06:38 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f061c003db0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:06:38 np0005475493 nova_compute[262220]: 2025-10-08 10:06:38.173 2 DEBUG oslo_concurrency.processutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  8 06:06:38 np0005475493 nova_compute[262220]: 2025-10-08 10:06:38.180 2 DEBUG nova.compute.provider_tree [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Inventory has not changed in ProviderTree for provider: 62e4b021-d3ae-43f9-883d-805e2c7d21a2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  8 06:06:38 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:06:38 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:06:38 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:06:38.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:06:38 np0005475493 nova_compute[262220]: 2025-10-08 10:06:38.202 2 DEBUG nova.scheduler.client.report [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Inventory has not changed for provider 62e4b021-d3ae-43f9-883d-805e2c7d21a2 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  8 06:06:38 np0005475493 nova_compute[262220]: 2025-10-08 10:06:38.203 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  8 06:06:38 np0005475493 nova_compute[262220]: 2025-10-08 10:06:38.204 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.606s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  8 06:06:38 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:06:38 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 06:06:38 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:06:38.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 06:06:38 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:06:38 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0620002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:06:38 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:06:39 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:06:39 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0628003820 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:06:40 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v654: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Oct  8 06:06:40 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:06:40 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0638003650 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:06:40 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:06:40 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:06:40 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:06:40.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:06:40 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:06:40 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:06:40 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:06:40.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:06:40 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:06:40 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0610002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:06:41 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:06:41 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0620002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:06:42 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v655: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct  8 06:06:42 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:06:42 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0628003820 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:06:42 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:06:42 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:06:42 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:06:42.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:06:42 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:06:42 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:06:42 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:06:42.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:06:42 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:06:42 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0638003650 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:06:43 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-haproxy-nfs-cephfs-compute-0-cwhopp[96588]: [WARNING] 280/100643 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct  8 06:06:43 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:06:43 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0610002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:06:43 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:06:44 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v656: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct  8 06:06:44 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:06:44 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0620002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:06:44 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:06:44 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 06:06:44 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:06:44.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 06:06:44 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:06:44 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:06:44 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:06:44.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:06:44 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:06:44 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0628003820 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:06:45 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:06:45] "GET /metrics HTTP/1.1" 200 48344 "" "Prometheus/2.51.0"
Oct  8 06:06:45 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:06:45] "GET /metrics HTTP/1.1" 200 48344 "" "Prometheus/2.51.0"
Oct  8 06:06:45 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:06:45 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0638003650 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:06:46 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v657: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct  8 06:06:46 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:06:46 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0610002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:06:46 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:06:46 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 06:06:46 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:06:46.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 06:06:46 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:06:46 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 06:06:46 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:06:46 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0620002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:06:46 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:06:46.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 06:06:46 np0005475493 podman[264756]: 2025-10-08 10:06:46.810623176 +0000 UTC m=+0.042241966 container health_status 2ac58bf9d2c7421f24478941b0f23af12cc30298ac84467db16bf52ecb1157c3 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, tcib_managed=true, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct  8 06:06:47 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:06:47.098Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct  8 06:06:47 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:06:47.099Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct  8 06:06:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] Optimize plan auto_2025-10-08_10:06:47
Oct  8 06:06:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  8 06:06:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] do_upmap
Oct  8 06:06:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] pools ['volumes', 'vms', '.rgw.root', 'backups', 'default.rgw.log', 'cephfs.cephfs.meta', '.nfs', 'default.rgw.meta', 'cephfs.cephfs.data', 'images', 'default.rgw.control', '.mgr']
Oct  8 06:06:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] prepared 0/10 upmap changes
Oct  8 06:06:47 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  8 06:06:47 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  8 06:06:47 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:06:47 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0628003820 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:06:47 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 06:06:47 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 06:06:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] _maybe_adjust
Oct  8 06:06:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:06:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  8 06:06:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:06:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 06:06:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:06:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 06:06:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:06:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 06:06:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:06:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 06:06:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:06:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  8 06:06:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:06:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 06:06:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:06:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct  8 06:06:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:06:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  8 06:06:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:06:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 06:06:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:06:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  8 06:06:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:06:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  8 06:06:48 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v658: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct  8 06:06:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  8 06:06:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  8 06:06:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  8 06:06:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  8 06:06:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  8 06:06:48 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 06:06:48 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 06:06:48 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 06:06:48 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 06:06:48 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:06:48 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0638003650 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:06:48 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:06:48 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:06:48 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:06:48.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:06:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  8 06:06:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  8 06:06:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  8 06:06:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  8 06:06:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  8 06:06:48 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:06:48 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:06:48 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0610002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:06:48 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:06:48 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:06:48.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:06:48 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:06:49 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:06:49 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0620002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:06:50 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v659: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Oct  8 06:06:50 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:06:50 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0628003820 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:06:50 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:06:50 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:06:50 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:06:50.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:06:50 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:06:50 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0638003650 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:06:50 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:06:50 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:06:50 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:06:50.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:06:51 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:06:51 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0610002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:06:52 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v660: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Oct  8 06:06:52 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:06:52 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0620003630 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:06:52 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:06:52 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:06:52 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:06:52.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:06:52 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:06:52 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0628003820 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:06:52 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:06:52 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:06:52 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:06:52.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:06:53 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-haproxy-nfs-cephfs-compute-0-cwhopp[96588]: [WARNING] 280/100653 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct  8 06:06:53 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:06:53 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  8 06:06:53 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:06:53 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0638003650 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:06:53 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:06:54 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v661: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 852 B/s rd, 426 B/s wr, 1 op/s
Oct  8 06:06:54 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:06:54 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f06100036c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:06:54 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:06:54 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:06:54 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:06:54.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:06:54 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:06:54 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0620003630 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:06:54 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:06:54 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:06:54 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:06:54.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:06:55 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:06:55] "GET /metrics HTTP/1.1" 200 48343 "" "Prometheus/2.51.0"
Oct  8 06:06:55 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:06:55] "GET /metrics HTTP/1.1" 200 48343 "" "Prometheus/2.51.0"
Oct  8 06:06:55 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:06:55 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0628003820 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:06:56 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v662: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 852 B/s rd, 426 B/s wr, 1 op/s
Oct  8 06:06:56 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:06:56 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  8 06:06:56 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:06:56 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 06:06:56 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:06:56 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0638003650 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:06:56 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:06:56 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 06:06:56 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:06:56.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 06:06:56 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:06:56 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f06100036c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:06:56 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:06:56 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:06:56 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:06:56.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:06:57 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:06:57.099Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:06:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:06:57.405 163175 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  8 06:06:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:06:57.405 163175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  8 06:06:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:06:57.405 163175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  8 06:06:57 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:06:57 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0620003630 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:06:58 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v663: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 853 B/s rd, 426 B/s wr, 1 op/s
Oct  8 06:06:58 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:06:58 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0628003820 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:06:58 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:06:58 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 06:06:58 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:06:58.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 06:06:58 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:06:58 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0638003650 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:06:58 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:06:58 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:06:58 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:06:58.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:06:58 np0005475493 podman[264789]: 2025-10-08 10:06:58.938199751 +0000 UTC m=+0.098743419 container health_status 750e81e9592502f2fa35d047c8ae9bd75eb38ed415148db558454f5281397e7f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Oct  8 06:06:58 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:06:59 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:06:59 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 06:06:59 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:06:59 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 06:06:59 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:06:59 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f06100036c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:07:00 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v664: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Oct  8 06:07:00 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:07:00 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0620003630 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:07:00 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:07:00 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:07:00 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:07:00.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:07:00 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:07:00 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0628003820 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:07:00 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:07:00 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:07:00 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:07:00.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:07:01 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:07:01 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0638003650 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:07:02 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v665: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Oct  8 06:07:02 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:07:02 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct  8 06:07:02 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:07:02.119 163175 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=2, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '2a:d6:ca', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'b2:8b:1e:40:84:a3'}, ipsec=False) old=SB_Global(nb_cfg=1) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  8 06:07:02 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:07:02.120 163175 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  8 06:07:02 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:07:02.120 163175 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=26869918-b723-425c-a2e1-0d697f3d0fec, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '2'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  8 06:07:02 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:07:02 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f06100036c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:07:02 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:07:02 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:07:02 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:07:02.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:07:02 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:07:02 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0620003630 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:07:02 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:07:02 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:07:02 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:07:02.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:07:02 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  8 06:07:02 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  8 06:07:03 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:07:03 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0628003820 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:07:03 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:07:04 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v666: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 1023 B/s wr, 3 op/s
Oct  8 06:07:04 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:07:04 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0638003650 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:07:04 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:07:04 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:07:04 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:07:04.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:07:04 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:07:04 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f06100036c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:07:04 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:07:04 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:07:04 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:07:04.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:07:05 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:07:05 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  8 06:07:05 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:07:05] "GET /metrics HTTP/1.1" 200 48347 "" "Prometheus/2.51.0"
Oct  8 06:07:05 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:07:05] "GET /metrics HTTP/1.1" 200 48347 "" "Prometheus/2.51.0"
Oct  8 06:07:05 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:07:05 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0620003630 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:07:06 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v667: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 597 B/s wr, 2 op/s
Oct  8 06:07:06 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:07:06 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0628003820 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:07:06 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:07:06 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:07:06 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:07:06.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:07:06 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[262084]: 08/10/2025 10:07:06 : epoch 68e6372d : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0638003650 fd 39 proxy ignored for local
Oct  8 06:07:06 np0005475493 kernel: ganesha.nfsd[263851]: segfault at 50 ip 00007f06f568532e sp 00007f06a97f9210 error 4 in libntirpc.so.5.8[7f06f566a000+2c000] likely on CPU 1 (core 0, socket 1)
Oct  8 06:07:06 np0005475493 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Oct  8 06:07:06 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:07:06 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:07:06 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:07:06.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:07:06 np0005475493 systemd[1]: Started Process Core Dump (PID 264848/UID 0).
Oct  8 06:07:06 np0005475493 podman[264849]: 2025-10-08 10:07:06.852157286 +0000 UTC m=+0.051842294 container health_status 96c17e36c3e57b043a764fc4d64e20610b8683e4881cc1857643eca7aeb37784 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct  8 06:07:07 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:07:07.100Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:07:07 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-haproxy-nfs-cephfs-compute-0-cwhopp[96588]: [WARNING] 280/100707 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 1ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct  8 06:07:08 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v668: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 597 B/s wr, 2 op/s
Oct  8 06:07:08 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:07:08 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:07:08 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:07:08.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:07:08 np0005475493 systemd-coredump[264850]: Process 262090 (ganesha.nfsd) of user 0 dumped core.#012#012Stack trace of thread 61:#012#0  0x00007f06f568532e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)#012ELF object binary architecture: AMD x86-64
Oct  8 06:07:08 np0005475493 systemd[1]: systemd-coredump@9-264848-0.service: Deactivated successfully.
Oct  8 06:07:08 np0005475493 systemd[1]: systemd-coredump@9-264848-0.service: Consumed 1.632s CPU time.
Oct  8 06:07:08 np0005475493 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  8 06:07:08 np0005475493 podman[264878]: 2025-10-08 10:07:08.548073494 +0000 UTC m=+0.026449659 container died dcd28dc3b591a8ad1bbef3775b31bab43e62da06b22c6c50b9245ad61c1024bd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  8 06:07:08 np0005475493 systemd[1]: var-lib-containers-storage-overlay-0368887a430f991d02246d619e7304973cce2d2c741718f4bff3761663df78c0-merged.mount: Deactivated successfully.
Oct  8 06:07:08 np0005475493 podman[264878]: 2025-10-08 10:07:08.664857212 +0000 UTC m=+0.143233357 container remove dcd28dc3b591a8ad1bbef3775b31bab43e62da06b22c6c50b9245ad61c1024bd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Oct  8 06:07:08 np0005475493 systemd[1]: ceph-787292cc-8154-50c4-9e00-e9be3e817149@nfs.cephfs.2.0.compute-0.uynkmx.service: Main process exited, code=exited, status=139/n/a
Oct  8 06:07:08 np0005475493 podman[264876]: 2025-10-08 10:07:08.688945264 +0000 UTC m=+0.165652196 container health_status 1ece418ccdf19a8019e8be8e665c938dc3c333817a211973536e2416f095c311 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  8 06:07:08 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:07:08 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:07:08 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:07:08.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:07:08 np0005475493 systemd[1]: ceph-787292cc-8154-50c4-9e00-e9be3e817149@nfs.cephfs.2.0.compute-0.uynkmx.service: Failed with result 'exit-code'.
Oct  8 06:07:08 np0005475493 systemd[1]: ceph-787292cc-8154-50c4-9e00-e9be3e817149@nfs.cephfs.2.0.compute-0.uynkmx.service: Consumed 1.589s CPU time.
Oct  8 06:07:08 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:07:10 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v669: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.4 KiB/s rd, 1.2 KiB/s wr, 4 op/s
Oct  8 06:07:10 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:07:10 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 06:07:10 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:07:10.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 06:07:10 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:07:10 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:07:10 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:07:10.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:07:12 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v670: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.9 KiB/s rd, 1.2 KiB/s wr, 4 op/s
Oct  8 06:07:12 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:07:12 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:07:12 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:07:12.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:07:12 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:07:12 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:07:12 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:07:12.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:07:13 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-haproxy-nfs-cephfs-compute-0-cwhopp[96588]: [WARNING] 280/100713 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct  8 06:07:13 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:07:14 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v671: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.0 KiB/s rd, 1.2 KiB/s wr, 4 op/s
Oct  8 06:07:14 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:07:14 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.002000065s ======
Oct  8 06:07:14 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:07:14.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000065s
Oct  8 06:07:14 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:07:14 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:07:14 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:07:14.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:07:15 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-haproxy-nfs-cephfs-compute-0-cwhopp[96588]: [WARNING] 280/100715 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct  8 06:07:15 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:07:15] "GET /metrics HTTP/1.1" 200 48347 "" "Prometheus/2.51.0"
Oct  8 06:07:15 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:07:15] "GET /metrics HTTP/1.1" 200 48347 "" "Prometheus/2.51.0"
Oct  8 06:07:16 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v672: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 682 B/s wr, 2 op/s
Oct  8 06:07:16 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:07:16 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:07:16 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:07:16.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:07:16 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:07:16 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 06:07:16 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:07:16.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 06:07:17 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:07:17.100Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:07:17 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  8 06:07:17 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  8 06:07:17 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 06:07:17 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 06:07:17 np0005475493 podman[264949]: 2025-10-08 10:07:17.929206719 +0000 UTC m=+0.092700114 container health_status 2ac58bf9d2c7421f24478941b0f23af12cc30298ac84467db16bf52ecb1157c3 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=iscsid, container_name=iscsid)
Oct  8 06:07:18 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v673: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 682 B/s wr, 2 op/s
Oct  8 06:07:18 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 06:07:18 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 06:07:18 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 06:07:18 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 06:07:18 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:07:18 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:07:18 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:07:18.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:07:18 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:07:18 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 06:07:18 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:07:18.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 06:07:18 np0005475493 systemd[1]: ceph-787292cc-8154-50c4-9e00-e9be3e817149@nfs.cephfs.2.0.compute-0.uynkmx.service: Scheduled restart job, restart counter is at 10.
Oct  8 06:07:18 np0005475493 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.uynkmx for 787292cc-8154-50c4-9e00-e9be3e817149.
Oct  8 06:07:18 np0005475493 systemd[1]: ceph-787292cc-8154-50c4-9e00-e9be3e817149@nfs.cephfs.2.0.compute-0.uynkmx.service: Consumed 1.589s CPU time.
Oct  8 06:07:18 np0005475493 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.uynkmx for 787292cc-8154-50c4-9e00-e9be3e817149...
Oct  8 06:07:18 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:07:19 np0005475493 podman[265021]: 2025-10-08 10:07:19.110784725 +0000 UTC m=+0.102394665 container create ff96eb6387ce0570d856673e2f9d2de072dc140ca0ed9b744a95fbf6029b655c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Oct  8 06:07:19 np0005475493 podman[265021]: 2025-10-08 10:07:19.032839375 +0000 UTC m=+0.024449315 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 06:07:19 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7680890908c887a4af3f6279a54cd446656bf9035c0c45bf7374d576d707e16e/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Oct  8 06:07:19 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7680890908c887a4af3f6279a54cd446656bf9035c0c45bf7374d576d707e16e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 06:07:19 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7680890908c887a4af3f6279a54cd446656bf9035c0c45bf7374d576d707e16e/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Oct  8 06:07:19 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7680890908c887a4af3f6279a54cd446656bf9035c0c45bf7374d576d707e16e/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.uynkmx-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Oct  8 06:07:19 np0005475493 podman[265021]: 2025-10-08 10:07:19.236369395 +0000 UTC m=+0.227979415 container init ff96eb6387ce0570d856673e2f9d2de072dc140ca0ed9b744a95fbf6029b655c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Oct  8 06:07:19 np0005475493 podman[265021]: 2025-10-08 10:07:19.246382216 +0000 UTC m=+0.237992186 container start ff96eb6387ce0570d856673e2f9d2de072dc140ca0ed9b744a95fbf6029b655c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Oct  8 06:07:19 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:07:19 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Oct  8 06:07:19 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:07:19 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Oct  8 06:07:19 np0005475493 bash[265021]: ff96eb6387ce0570d856673e2f9d2de072dc140ca0ed9b744a95fbf6029b655c
Oct  8 06:07:19 np0005475493 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.uynkmx for 787292cc-8154-50c4-9e00-e9be3e817149.
Oct  8 06:07:19 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:07:19 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Oct  8 06:07:19 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:07:19 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Oct  8 06:07:19 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:07:19 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Oct  8 06:07:19 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:07:19 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Oct  8 06:07:19 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:07:19 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Oct  8 06:07:19 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:07:19 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  8 06:07:20 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v674: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 682 B/s wr, 2 op/s
Oct  8 06:07:20 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:07:20 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:07:20 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:07:20.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:07:20 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:07:20 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:07:20 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:07:20.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:07:22 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v675: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 B/s wr, 0 op/s
Oct  8 06:07:22 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:07:22 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:07:22 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:07:22.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:07:22 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:07:22 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:07:22 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:07:22.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:07:23 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:07:24 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v676: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Oct  8 06:07:24 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:07:24 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:07:24 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:07:24.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:07:24 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:07:24 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:07:24 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:07:24.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:07:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:07:25 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  8 06:07:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:07:25 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 06:07:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:07:25] "GET /metrics HTTP/1.1" 200 48348 "" "Prometheus/2.51.0"
Oct  8 06:07:25 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:07:25] "GET /metrics HTTP/1.1" 200 48348 "" "Prometheus/2.51.0"
Oct  8 06:07:26 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v677: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 597 B/s wr, 1 op/s
Oct  8 06:07:26 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:07:26 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:07:26 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:07:26.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:07:26 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:07:26 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 06:07:26 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:07:26.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 06:07:27 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:07:27.102Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct  8 06:07:27 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:07:27.103Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:07:28 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v678: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 597 B/s wr, 1 op/s
Oct  8 06:07:28 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:07:28 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:07:28 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:07:28.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:07:28 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:07:28 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:07:28 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:07:28.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:07:28 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:07:29 np0005475493 podman[265114]: 2025-10-08 10:07:29.909151196 +0000 UTC m=+0.075127038 container health_status 750e81e9592502f2fa35d047c8ae9bd75eb38ed415148db558454f5281397e7f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  8 06:07:30 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v679: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Oct  8 06:07:30 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:07:30 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:07:30 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:07:30.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:07:30 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:07:30 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:07:30 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:07:30.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:07:31 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:07:31 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct  8 06:07:31 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:07:31 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Oct  8 06:07:31 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:07:31 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Oct  8 06:07:31 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:07:31 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Oct  8 06:07:31 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:07:31 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Oct  8 06:07:31 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:07:31 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Oct  8 06:07:31 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:07:31 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Oct  8 06:07:31 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:07:31 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct  8 06:07:31 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:07:31 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct  8 06:07:31 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:07:31 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct  8 06:07:31 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:07:31 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Oct  8 06:07:31 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:07:31 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct  8 06:07:31 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:07:31 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Oct  8 06:07:31 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:07:31 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Oct  8 06:07:31 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:07:31 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Oct  8 06:07:31 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:07:31 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Oct  8 06:07:31 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:07:31 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Oct  8 06:07:31 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:07:31 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Oct  8 06:07:31 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:07:31 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Oct  8 06:07:31 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:07:31 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Oct  8 06:07:31 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:07:31 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Oct  8 06:07:31 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:07:31 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Oct  8 06:07:31 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:07:31 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Oct  8 06:07:31 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:07:31 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Oct  8 06:07:31 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:07:31 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Oct  8 06:07:31 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:07:31 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Oct  8 06:07:31 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:07:31 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Oct  8 06:07:31 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:07:31 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca0000df0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:07:32 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v680: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Oct  8 06:07:32 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:07:32 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac900016e0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:07:32 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:07:32 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:07:32 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:07:32.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:07:32 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:07:32 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c000b60 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:07:32 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:07:32 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 06:07:32 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:07:32.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 06:07:32 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  8 06:07:32 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  8 06:07:33 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  8 06:07:33 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  8 06:07:33 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct  8 06:07:33 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  8 06:07:33 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct  8 06:07:33 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:07:33 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct  8 06:07:33 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:07:33 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct  8 06:07:33 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  8 06:07:33 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct  8 06:07:33 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  8 06:07:33 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  8 06:07:33 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  8 06:07:33 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-haproxy-nfs-cephfs-compute-0-cwhopp[96588]: [WARNING] 280/100733 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct  8 06:07:33 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:07:33 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac94001c00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:07:34 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:07:34 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v681: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Oct  8 06:07:34 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:07:34 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca0001bd0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:07:34 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:07:34 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:07:34 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:07:34.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:07:34 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  8 06:07:34 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:07:34 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:07:34 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  8 06:07:34 np0005475493 podman[265334]: 2025-10-08 10:07:34.367241043 +0000 UTC m=+0.026277680 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 06:07:34 np0005475493 podman[265334]: 2025-10-08 10:07:34.479026395 +0000 UTC m=+0.138063022 container create c65e5a024aaa432837b521bea2563e37d77b303e67f7058c4d628a5b37556ee8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_joliot, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct  8 06:07:34 np0005475493 systemd[1]: Started libpod-conmon-c65e5a024aaa432837b521bea2563e37d77b303e67f7058c4d628a5b37556ee8.scope.
Oct  8 06:07:34 np0005475493 systemd[1]: Started libcrun container.
Oct  8 06:07:34 np0005475493 podman[265334]: 2025-10-08 10:07:34.614540124 +0000 UTC m=+0.273576761 container init c65e5a024aaa432837b521bea2563e37d77b303e67f7058c4d628a5b37556ee8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_joliot, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  8 06:07:34 np0005475493 podman[265334]: 2025-10-08 10:07:34.625979912 +0000 UTC m=+0.285016529 container start c65e5a024aaa432837b521bea2563e37d77b303e67f7058c4d628a5b37556ee8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_joliot, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct  8 06:07:34 np0005475493 friendly_joliot[265349]: 167 167
Oct  8 06:07:34 np0005475493 systemd[1]: libpod-c65e5a024aaa432837b521bea2563e37d77b303e67f7058c4d628a5b37556ee8.scope: Deactivated successfully.
Oct  8 06:07:34 np0005475493 conmon[265349]: conmon c65e5a024aaa432837b5 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c65e5a024aaa432837b521bea2563e37d77b303e67f7058c4d628a5b37556ee8.scope/container/memory.events
Oct  8 06:07:34 np0005475493 podman[265334]: 2025-10-08 10:07:34.654114401 +0000 UTC m=+0.313151018 container attach c65e5a024aaa432837b521bea2563e37d77b303e67f7058c4d628a5b37556ee8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_joliot, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct  8 06:07:34 np0005475493 podman[265334]: 2025-10-08 10:07:34.655645521 +0000 UTC m=+0.314682148 container died c65e5a024aaa432837b521bea2563e37d77b303e67f7058c4d628a5b37556ee8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_joliot, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  8 06:07:34 np0005475493 systemd[1]: var-lib-containers-storage-overlay-fd9399fce7d523dbeaeb4aefac73d5ea7434c88b68447df453b9ca977789a98a-merged.mount: Deactivated successfully.
Oct  8 06:07:34 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:07:34 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90002000 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:07:34 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:07:34 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:07:34 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:07:34.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:07:34 np0005475493 podman[265334]: 2025-10-08 10:07:34.867456204 +0000 UTC m=+0.526492831 container remove c65e5a024aaa432837b521bea2563e37d77b303e67f7058c4d628a5b37556ee8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_joliot, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True)
Oct  8 06:07:34 np0005475493 systemd[1]: libpod-conmon-c65e5a024aaa432837b521bea2563e37d77b303e67f7058c4d628a5b37556ee8.scope: Deactivated successfully.
Oct  8 06:07:35 np0005475493 podman[265376]: 2025-10-08 10:07:35.058371593 +0000 UTC m=+0.057679754 container create e4cbc5a780e69ce45ddce1e00f1ebfd62c1d7d00200fff535d04e61a0a957691 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_blackburn, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  8 06:07:35 np0005475493 systemd[1]: Started libpod-conmon-e4cbc5a780e69ce45ddce1e00f1ebfd62c1d7d00200fff535d04e61a0a957691.scope.
Oct  8 06:07:35 np0005475493 podman[265376]: 2025-10-08 10:07:35.029346635 +0000 UTC m=+0.028654816 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 06:07:35 np0005475493 systemd[1]: Started libcrun container.
Oct  8 06:07:35 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74a2bb37fdb3045c3371e409313df2bdae9c91a36f0023d630acf79240d06e2d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  8 06:07:35 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74a2bb37fdb3045c3371e409313df2bdae9c91a36f0023d630acf79240d06e2d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 06:07:35 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74a2bb37fdb3045c3371e409313df2bdae9c91a36f0023d630acf79240d06e2d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 06:07:35 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74a2bb37fdb3045c3371e409313df2bdae9c91a36f0023d630acf79240d06e2d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  8 06:07:35 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74a2bb37fdb3045c3371e409313df2bdae9c91a36f0023d630acf79240d06e2d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  8 06:07:35 np0005475493 podman[265376]: 2025-10-08 10:07:35.197812898 +0000 UTC m=+0.197121079 container init e4cbc5a780e69ce45ddce1e00f1ebfd62c1d7d00200fff535d04e61a0a957691 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_blackburn, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct  8 06:07:35 np0005475493 podman[265376]: 2025-10-08 10:07:35.205843628 +0000 UTC m=+0.205151789 container start e4cbc5a780e69ce45ddce1e00f1ebfd62c1d7d00200fff535d04e61a0a957691 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_blackburn, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct  8 06:07:35 np0005475493 podman[265376]: 2025-10-08 10:07:35.213372981 +0000 UTC m=+0.212681162 container attach e4cbc5a780e69ce45ddce1e00f1ebfd62c1d7d00200fff535d04e61a0a957691 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_blackburn, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  8 06:07:35 np0005475493 vibrant_blackburn[265394]: --> passed data devices: 0 physical, 1 LVM
Oct  8 06:07:35 np0005475493 vibrant_blackburn[265394]: --> All data devices are unavailable
Oct  8 06:07:35 np0005475493 systemd[1]: libpod-e4cbc5a780e69ce45ddce1e00f1ebfd62c1d7d00200fff535d04e61a0a957691.scope: Deactivated successfully.
Oct  8 06:07:35 np0005475493 podman[265376]: 2025-10-08 10:07:35.546621168 +0000 UTC m=+0.545929329 container died e4cbc5a780e69ce45ddce1e00f1ebfd62c1d7d00200fff535d04e61a0a957691 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_blackburn, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Oct  8 06:07:35 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:07:35] "GET /metrics HTTP/1.1" 200 48347 "" "Prometheus/2.51.0"
Oct  8 06:07:35 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:07:35] "GET /metrics HTTP/1.1" 200 48347 "" "Prometheus/2.51.0"
Oct  8 06:07:35 np0005475493 systemd[1]: var-lib-containers-storage-overlay-74a2bb37fdb3045c3371e409313df2bdae9c91a36f0023d630acf79240d06e2d-merged.mount: Deactivated successfully.
Oct  8 06:07:35 np0005475493 podman[265376]: 2025-10-08 10:07:35.888934128 +0000 UTC m=+0.888242279 container remove e4cbc5a780e69ce45ddce1e00f1ebfd62c1d7d00200fff535d04e61a0a957691 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_blackburn, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct  8 06:07:35 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:07:35 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c0016a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:07:35 np0005475493 systemd[1]: libpod-conmon-e4cbc5a780e69ce45ddce1e00f1ebfd62c1d7d00200fff535d04e61a0a957691.scope: Deactivated successfully.
Oct  8 06:07:36 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v682: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Oct  8 06:07:36 np0005475493 nova_compute[262220]: 2025-10-08 10:07:36.204 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:07:36 np0005475493 nova_compute[262220]: 2025-10-08 10:07:36.205 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:07:36 np0005475493 nova_compute[262220]: 2025-10-08 10:07:36.205 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:07:36 np0005475493 nova_compute[262220]: 2025-10-08 10:07:36.205 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  8 06:07:36 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:07:36 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac94001c00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:07:36 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:07:36 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:07:36 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:07:36.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:07:36 np0005475493 podman[265516]: 2025-10-08 10:07:36.526240319 +0000 UTC m=+0.072450812 container create 404943f1a74397422b25126dc66d2d558fd4f3d853ef3fa574bb2a7132dd0532 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_goldwasser, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  8 06:07:36 np0005475493 systemd[1]: Started libpod-conmon-404943f1a74397422b25126dc66d2d558fd4f3d853ef3fa574bb2a7132dd0532.scope.
Oct  8 06:07:36 np0005475493 podman[265516]: 2025-10-08 10:07:36.479000153 +0000 UTC m=+0.025210666 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 06:07:36 np0005475493 systemd[1]: Started libcrun container.
Oct  8 06:07:36 np0005475493 podman[265516]: 2025-10-08 10:07:36.636986997 +0000 UTC m=+0.183197520 container init 404943f1a74397422b25126dc66d2d558fd4f3d853ef3fa574bb2a7132dd0532 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_goldwasser, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True)
Oct  8 06:07:36 np0005475493 podman[265516]: 2025-10-08 10:07:36.64513371 +0000 UTC m=+0.191344203 container start 404943f1a74397422b25126dc66d2d558fd4f3d853ef3fa574bb2a7132dd0532 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_goldwasser, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1)
Oct  8 06:07:36 np0005475493 goofy_goldwasser[265532]: 167 167
Oct  8 06:07:36 np0005475493 systemd[1]: libpod-404943f1a74397422b25126dc66d2d558fd4f3d853ef3fa574bb2a7132dd0532.scope: Deactivated successfully.
Oct  8 06:07:36 np0005475493 conmon[265532]: conmon 404943f1a74397422b25 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-404943f1a74397422b25126dc66d2d558fd4f3d853ef3fa574bb2a7132dd0532.scope/container/memory.events
Oct  8 06:07:36 np0005475493 podman[265516]: 2025-10-08 10:07:36.68197841 +0000 UTC m=+0.228188903 container attach 404943f1a74397422b25126dc66d2d558fd4f3d853ef3fa574bb2a7132dd0532 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_goldwasser, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  8 06:07:36 np0005475493 podman[265516]: 2025-10-08 10:07:36.683292523 +0000 UTC m=+0.229503016 container died 404943f1a74397422b25126dc66d2d558fd4f3d853ef3fa574bb2a7132dd0532 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_goldwasser, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Oct  8 06:07:36 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:07:36 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca0001bd0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:07:36 np0005475493 systemd[1]: var-lib-containers-storage-overlay-fdcab4249034ca836bca7e06fedad241ca385d97e076e7de38e6dfeb1e4b2afe-merged.mount: Deactivated successfully.
Oct  8 06:07:36 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:07:36 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:07:36 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:07:36.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:07:36 np0005475493 podman[265516]: 2025-10-08 10:07:36.849354058 +0000 UTC m=+0.395564551 container remove 404943f1a74397422b25126dc66d2d558fd4f3d853ef3fa574bb2a7132dd0532 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_goldwasser, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct  8 06:07:36 np0005475493 systemd[1]: libpod-conmon-404943f1a74397422b25126dc66d2d558fd4f3d853ef3fa574bb2a7132dd0532.scope: Deactivated successfully.
Oct  8 06:07:36 np0005475493 nova_compute[262220]: 2025-10-08 10:07:36.886 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:07:36 np0005475493 nova_compute[262220]: 2025-10-08 10:07:36.886 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  8 06:07:36 np0005475493 nova_compute[262220]: 2025-10-08 10:07:36.886 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  8 06:07:36 np0005475493 nova_compute[262220]: 2025-10-08 10:07:36.902 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  8 06:07:36 np0005475493 nova_compute[262220]: 2025-10-08 10:07:36.902 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:07:36 np0005475493 nova_compute[262220]: 2025-10-08 10:07:36.902 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:07:37 np0005475493 podman[265558]: 2025-10-08 10:07:37.014952318 +0000 UTC m=+0.049022404 container create a8df51854524af450bb4829d3e46e302f86c4e7852dfa124348c2be573c265c8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_feynman, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct  8 06:07:37 np0005475493 systemd[1]: Started libpod-conmon-a8df51854524af450bb4829d3e46e302f86c4e7852dfa124348c2be573c265c8.scope.
Oct  8 06:07:37 np0005475493 podman[265558]: 2025-10-08 10:07:36.988811333 +0000 UTC m=+0.022881439 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 06:07:37 np0005475493 systemd[1]: Started libcrun container.
Oct  8 06:07:37 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df45afdd3038e5dc7ff1b803e53e79a4a98569a06fb6999202194521d791796b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  8 06:07:37 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df45afdd3038e5dc7ff1b803e53e79a4a98569a06fb6999202194521d791796b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 06:07:37 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df45afdd3038e5dc7ff1b803e53e79a4a98569a06fb6999202194521d791796b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 06:07:37 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df45afdd3038e5dc7ff1b803e53e79a4a98569a06fb6999202194521d791796b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  8 06:07:37 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:07:37.104Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:07:37 np0005475493 podman[265558]: 2025-10-08 10:07:37.191808063 +0000 UTC m=+0.225878179 container init a8df51854524af450bb4829d3e46e302f86c4e7852dfa124348c2be573c265c8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_feynman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  8 06:07:37 np0005475493 podman[265573]: 2025-10-08 10:07:37.199837642 +0000 UTC m=+0.149587684 container health_status 96c17e36c3e57b043a764fc4d64e20610b8683e4881cc1857643eca7aeb37784 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent)
Oct  8 06:07:37 np0005475493 podman[265558]: 2025-10-08 10:07:37.204623036 +0000 UTC m=+0.238693142 container start a8df51854524af450bb4829d3e46e302f86c4e7852dfa124348c2be573c265c8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_feynman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  8 06:07:37 np0005475493 podman[265558]: 2025-10-08 10:07:37.244868297 +0000 UTC m=+0.278938383 container attach a8df51854524af450bb4829d3e46e302f86c4e7852dfa124348c2be573c265c8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_feynman, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  8 06:07:37 np0005475493 silly_feynman[265581]: {
Oct  8 06:07:37 np0005475493 silly_feynman[265581]:    "1": [
Oct  8 06:07:37 np0005475493 silly_feynman[265581]:        {
Oct  8 06:07:37 np0005475493 silly_feynman[265581]:            "devices": [
Oct  8 06:07:37 np0005475493 silly_feynman[265581]:                "/dev/loop3"
Oct  8 06:07:37 np0005475493 silly_feynman[265581]:            ],
Oct  8 06:07:37 np0005475493 silly_feynman[265581]:            "lv_name": "ceph_lv0",
Oct  8 06:07:37 np0005475493 silly_feynman[265581]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  8 06:07:37 np0005475493 silly_feynman[265581]:            "lv_size": "21470642176",
Oct  8 06:07:37 np0005475493 silly_feynman[265581]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=787292cc-8154-50c4-9e00-e9be3e817149,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=85fe3e7b-5e0f-4a19-934c-310215b2e933,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct  8 06:07:37 np0005475493 silly_feynman[265581]:            "lv_uuid": "znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj",
Oct  8 06:07:37 np0005475493 silly_feynman[265581]:            "name": "ceph_lv0",
Oct  8 06:07:37 np0005475493 silly_feynman[265581]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  8 06:07:37 np0005475493 silly_feynman[265581]:            "tags": {
Oct  8 06:07:37 np0005475493 silly_feynman[265581]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  8 06:07:37 np0005475493 silly_feynman[265581]:                "ceph.block_uuid": "znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj",
Oct  8 06:07:37 np0005475493 silly_feynman[265581]:                "ceph.cephx_lockbox_secret": "",
Oct  8 06:07:37 np0005475493 silly_feynman[265581]:                "ceph.cluster_fsid": "787292cc-8154-50c4-9e00-e9be3e817149",
Oct  8 06:07:37 np0005475493 silly_feynman[265581]:                "ceph.cluster_name": "ceph",
Oct  8 06:07:37 np0005475493 silly_feynman[265581]:                "ceph.crush_device_class": "",
Oct  8 06:07:37 np0005475493 silly_feynman[265581]:                "ceph.encrypted": "0",
Oct  8 06:07:37 np0005475493 silly_feynman[265581]:                "ceph.osd_fsid": "85fe3e7b-5e0f-4a19-934c-310215b2e933",
Oct  8 06:07:37 np0005475493 silly_feynman[265581]:                "ceph.osd_id": "1",
Oct  8 06:07:37 np0005475493 silly_feynman[265581]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  8 06:07:37 np0005475493 silly_feynman[265581]:                "ceph.type": "block",
Oct  8 06:07:37 np0005475493 silly_feynman[265581]:                "ceph.vdo": "0",
Oct  8 06:07:37 np0005475493 silly_feynman[265581]:                "ceph.with_tpm": "0"
Oct  8 06:07:37 np0005475493 silly_feynman[265581]:            },
Oct  8 06:07:37 np0005475493 silly_feynman[265581]:            "type": "block",
Oct  8 06:07:37 np0005475493 silly_feynman[265581]:            "vg_name": "ceph_vg0"
Oct  8 06:07:37 np0005475493 silly_feynman[265581]:        }
Oct  8 06:07:37 np0005475493 silly_feynman[265581]:    ]
Oct  8 06:07:37 np0005475493 silly_feynman[265581]: }
Oct  8 06:07:37 np0005475493 systemd[1]: libpod-a8df51854524af450bb4829d3e46e302f86c4e7852dfa124348c2be573c265c8.scope: Deactivated successfully.
Oct  8 06:07:37 np0005475493 podman[265558]: 2025-10-08 10:07:37.544776647 +0000 UTC m=+0.578846733 container died a8df51854524af450bb4829d3e46e302f86c4e7852dfa124348c2be573c265c8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_feynman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Oct  8 06:07:37 np0005475493 systemd[1]: var-lib-containers-storage-overlay-df45afdd3038e5dc7ff1b803e53e79a4a98569a06fb6999202194521d791796b-merged.mount: Deactivated successfully.
Oct  8 06:07:37 np0005475493 podman[265558]: 2025-10-08 10:07:37.862781471 +0000 UTC m=+0.896851557 container remove a8df51854524af450bb4829d3e46e302f86c4e7852dfa124348c2be573c265c8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_feynman, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct  8 06:07:37 np0005475493 systemd[1]: libpod-conmon-a8df51854524af450bb4829d3e46e302f86c4e7852dfa124348c2be573c265c8.scope: Deactivated successfully.
Oct  8 06:07:37 np0005475493 nova_compute[262220]: 2025-10-08 10:07:37.886 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:07:37 np0005475493 nova_compute[262220]: 2025-10-08 10:07:37.887 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:07:37 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:07:37 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90002000 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:07:38 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v683: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Oct  8 06:07:38 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:07:38 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c0016a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:07:38 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:07:38 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:07:38 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:07:38.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:07:38 np0005475493 podman[265708]: 2025-10-08 10:07:38.422289068 +0000 UTC m=+0.039695453 container create a43fb55bdc712d90fa2e66277700345da36fd331eaa452a1a662fb43e32dc991 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_wescoff, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  8 06:07:38 np0005475493 systemd[1]: Started libpod-conmon-a43fb55bdc712d90fa2e66277700345da36fd331eaa452a1a662fb43e32dc991.scope.
Oct  8 06:07:38 np0005475493 systemd[1]: Started libcrun container.
Oct  8 06:07:38 np0005475493 podman[265708]: 2025-10-08 10:07:38.403361597 +0000 UTC m=+0.020768012 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 06:07:38 np0005475493 podman[265708]: 2025-10-08 10:07:38.520538203 +0000 UTC m=+0.137944688 container init a43fb55bdc712d90fa2e66277700345da36fd331eaa452a1a662fb43e32dc991 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_wescoff, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Oct  8 06:07:38 np0005475493 podman[265708]: 2025-10-08 10:07:38.527964053 +0000 UTC m=+0.145370438 container start a43fb55bdc712d90fa2e66277700345da36fd331eaa452a1a662fb43e32dc991 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_wescoff, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  8 06:07:38 np0005475493 crazy_wescoff[265724]: 167 167
Oct  8 06:07:38 np0005475493 systemd[1]: libpod-a43fb55bdc712d90fa2e66277700345da36fd331eaa452a1a662fb43e32dc991.scope: Deactivated successfully.
Oct  8 06:07:38 np0005475493 podman[265708]: 2025-10-08 10:07:38.544782987 +0000 UTC m=+0.162189372 container attach a43fb55bdc712d90fa2e66277700345da36fd331eaa452a1a662fb43e32dc991 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_wescoff, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True)
Oct  8 06:07:38 np0005475493 podman[265708]: 2025-10-08 10:07:38.545281792 +0000 UTC m=+0.162688177 container died a43fb55bdc712d90fa2e66277700345da36fd331eaa452a1a662fb43e32dc991 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_wescoff, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Oct  8 06:07:38 np0005475493 systemd[1]: var-lib-containers-storage-overlay-c612f9adb751a781b4e1ff705e4b0e9a971ea59168795f3113a6382b31008081-merged.mount: Deactivated successfully.
Oct  8 06:07:38 np0005475493 podman[265708]: 2025-10-08 10:07:38.619725818 +0000 UTC m=+0.237132203 container remove a43fb55bdc712d90fa2e66277700345da36fd331eaa452a1a662fb43e32dc991 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_wescoff, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  8 06:07:38 np0005475493 systemd[1]: libpod-conmon-a43fb55bdc712d90fa2e66277700345da36fd331eaa452a1a662fb43e32dc991.scope: Deactivated successfully.
Oct  8 06:07:38 np0005475493 podman[265750]: 2025-10-08 10:07:38.791108885 +0000 UTC m=+0.051992911 container create bec79ca95945ec6d3512c844218570be028d19e91fcaa17448de05ff4307489d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_carver, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  8 06:07:38 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:07:38 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac94001c00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:07:38 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:07:38 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:07:38 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:07:38.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:07:38 np0005475493 systemd[1]: Started libpod-conmon-bec79ca95945ec6d3512c844218570be028d19e91fcaa17448de05ff4307489d.scope.
Oct  8 06:07:38 np0005475493 systemd[1]: Started libcrun container.
Oct  8 06:07:38 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4297b4c95527b396a70fb14a991e0e46bbc5b8c622395c1a3308fc1558eb557d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  8 06:07:38 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4297b4c95527b396a70fb14a991e0e46bbc5b8c622395c1a3308fc1558eb557d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 06:07:38 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4297b4c95527b396a70fb14a991e0e46bbc5b8c622395c1a3308fc1558eb557d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 06:07:38 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4297b4c95527b396a70fb14a991e0e46bbc5b8c622395c1a3308fc1558eb557d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  8 06:07:38 np0005475493 podman[265750]: 2025-10-08 10:07:38.761513668 +0000 UTC m=+0.022397674 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 06:07:38 np0005475493 podman[265750]: 2025-10-08 10:07:38.86927808 +0000 UTC m=+0.130162086 container init bec79ca95945ec6d3512c844218570be028d19e91fcaa17448de05ff4307489d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_carver, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  8 06:07:38 np0005475493 podman[265750]: 2025-10-08 10:07:38.876317068 +0000 UTC m=+0.137201054 container start bec79ca95945ec6d3512c844218570be028d19e91fcaa17448de05ff4307489d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_carver, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Oct  8 06:07:38 np0005475493 nova_compute[262220]: 2025-10-08 10:07:38.886 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:07:38 np0005475493 podman[265750]: 2025-10-08 10:07:38.901387057 +0000 UTC m=+0.162271043 container attach bec79ca95945ec6d3512c844218570be028d19e91fcaa17448de05ff4307489d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_carver, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  8 06:07:38 np0005475493 nova_compute[262220]: 2025-10-08 10:07:38.917 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  8 06:07:38 np0005475493 nova_compute[262220]: 2025-10-08 10:07:38.917 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  8 06:07:38 np0005475493 nova_compute[262220]: 2025-10-08 10:07:38.918 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  8 06:07:38 np0005475493 nova_compute[262220]: 2025-10-08 10:07:38.918 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  8 06:07:38 np0005475493 nova_compute[262220]: 2025-10-08 10:07:38.918 2 DEBUG oslo_concurrency.processutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  8 06:07:38 np0005475493 podman[265764]: 2025-10-08 10:07:38.947724605 +0000 UTC m=+0.118847261 container health_status 1ece418ccdf19a8019e8be8e665c938dc3c333817a211973536e2416f095c311 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct  8 06:07:39 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:07:39 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct  8 06:07:39 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3616470647' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  8 06:07:39 np0005475493 nova_compute[262220]: 2025-10-08 10:07:39.397 2 DEBUG oslo_concurrency.processutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  8 06:07:39 np0005475493 lvm[265884]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct  8 06:07:39 np0005475493 lvm[265884]: VG ceph_vg0 finished
Oct  8 06:07:39 np0005475493 cool_carver[265767]: {}
Oct  8 06:07:39 np0005475493 systemd[1]: libpod-bec79ca95945ec6d3512c844218570be028d19e91fcaa17448de05ff4307489d.scope: Deactivated successfully.
Oct  8 06:07:39 np0005475493 systemd[1]: libpod-bec79ca95945ec6d3512c844218570be028d19e91fcaa17448de05ff4307489d.scope: Consumed 1.082s CPU time.
Oct  8 06:07:39 np0005475493 podman[265750]: 2025-10-08 10:07:39.561293629 +0000 UTC m=+0.822177615 container died bec79ca95945ec6d3512c844218570be028d19e91fcaa17448de05ff4307489d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_carver, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Oct  8 06:07:39 np0005475493 nova_compute[262220]: 2025-10-08 10:07:39.565 2 WARNING nova.virt.libvirt.driver [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  8 06:07:39 np0005475493 nova_compute[262220]: 2025-10-08 10:07:39.568 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4866MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  8 06:07:39 np0005475493 nova_compute[262220]: 2025-10-08 10:07:39.568 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  8 06:07:39 np0005475493 nova_compute[262220]: 2025-10-08 10:07:39.568 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  8 06:07:39 np0005475493 systemd[1]: var-lib-containers-storage-overlay-4297b4c95527b396a70fb14a991e0e46bbc5b8c622395c1a3308fc1558eb557d-merged.mount: Deactivated successfully.
Oct  8 06:07:39 np0005475493 podman[265750]: 2025-10-08 10:07:39.757698894 +0000 UTC m=+1.018582890 container remove bec79ca95945ec6d3512c844218570be028d19e91fcaa17448de05ff4307489d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_carver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True)
Oct  8 06:07:39 np0005475493 systemd[1]: libpod-conmon-bec79ca95945ec6d3512c844218570be028d19e91fcaa17448de05ff4307489d.scope: Deactivated successfully.
Oct  8 06:07:39 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  8 06:07:39 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:07:39 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  8 06:07:39 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:07:39 np0005475493 nova_compute[262220]: 2025-10-08 10:07:39.906 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  8 06:07:39 np0005475493 nova_compute[262220]: 2025-10-08 10:07:39.906 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  8 06:07:39 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:07:39 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca00089d0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:07:39 np0005475493 nova_compute[262220]: 2025-10-08 10:07:39.976 2 DEBUG oslo_concurrency.processutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  8 06:07:40 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v684: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Oct  8 06:07:40 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:07:40 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:07:40 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:07:40 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90002000 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:07:40 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:07:40 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:07:40 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:07:40.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:07:40 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct  8 06:07:40 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3850252230' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  8 06:07:40 np0005475493 nova_compute[262220]: 2025-10-08 10:07:40.427 2 DEBUG oslo_concurrency.processutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  8 06:07:40 np0005475493 nova_compute[262220]: 2025-10-08 10:07:40.433 2 DEBUG nova.compute.provider_tree [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Inventory has not changed in ProviderTree for provider: 62e4b021-d3ae-43f9-883d-805e2c7d21a2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  8 06:07:40 np0005475493 nova_compute[262220]: 2025-10-08 10:07:40.454 2 DEBUG nova.scheduler.client.report [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Inventory has not changed for provider 62e4b021-d3ae-43f9-883d-805e2c7d21a2 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  8 06:07:40 np0005475493 nova_compute[262220]: 2025-10-08 10:07:40.456 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  8 06:07:40 np0005475493 nova_compute[262220]: 2025-10-08 10:07:40.456 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.887s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  8 06:07:40 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:07:40 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c0016a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:07:40 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:07:40 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:07:40 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:07:40.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:07:40 np0005475493 ceph-mon[73572]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #45. Immutable memtables: 0.
Oct  8 06:07:40 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:07:40.923507) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  8 06:07:40 np0005475493 ceph-mon[73572]: rocksdb: [db/flush_job.cc:856] [default] [JOB 21] Flushing memtable with next log file: 45
Oct  8 06:07:40 np0005475493 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759918060923544, "job": 21, "event": "flush_started", "num_memtables": 1, "num_entries": 2124, "num_deletes": 251, "total_data_size": 4180276, "memory_usage": 4230352, "flush_reason": "Manual Compaction"}
Oct  8 06:07:40 np0005475493 ceph-mon[73572]: rocksdb: [db/flush_job.cc:885] [default] [JOB 21] Level-0 flush table #46: started
Oct  8 06:07:41 np0005475493 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759918061008260, "cf_name": "default", "job": 21, "event": "table_file_creation", "file_number": 46, "file_size": 4077896, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 20001, "largest_seqno": 22124, "table_properties": {"data_size": 4068457, "index_size": 5933, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2437, "raw_key_size": 19458, "raw_average_key_size": 20, "raw_value_size": 4049613, "raw_average_value_size": 4192, "num_data_blocks": 261, "num_entries": 966, "num_filter_entries": 966, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759917843, "oldest_key_time": 1759917843, "file_creation_time": 1759918060, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fe81d9b-468a-4413-adf1-4e4bd83383d4", "db_session_id": "KN4HYS7VUCE6V85JIQOU", "orig_file_number": 46, "seqno_to_time_mapping": "N/A"}}
Oct  8 06:07:41 np0005475493 ceph-mon[73572]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 21] Flush lasted 84806 microseconds, and 9017 cpu microseconds.
Oct  8 06:07:41 np0005475493 ceph-mon[73572]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  8 06:07:41 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:07:41.008307) [db/flush_job.cc:967] [default] [JOB 21] Level-0 flush table #46: 4077896 bytes OK
Oct  8 06:07:41 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:07:41.008330) [db/memtable_list.cc:519] [default] Level-0 commit table #46 started
Oct  8 06:07:41 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:07:41.012711) [db/memtable_list.cc:722] [default] Level-0 commit table #46: memtable #1 done
Oct  8 06:07:41 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:07:41.012739) EVENT_LOG_v1 {"time_micros": 1759918061012733, "job": 21, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  8 06:07:41 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:07:41.012758) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  8 06:07:41 np0005475493 ceph-mon[73572]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 21] Try to delete WAL files size 4171697, prev total WAL file size 4171697, number of live WAL files 2.
Oct  8 06:07:41 np0005475493 ceph-mon[73572]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000042.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  8 06:07:41 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:07:41.013763) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031353036' seq:72057594037927935, type:22 .. '7061786F730031373538' seq:0, type:0; will stop at (end)
Oct  8 06:07:41 np0005475493 ceph-mon[73572]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 22] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  8 06:07:41 np0005475493 ceph-mon[73572]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 21 Base level 0, inputs: [46(3982KB)], [44(12MB)]
Oct  8 06:07:41 np0005475493 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759918061013792, "job": 22, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [46], "files_L6": [44], "score": -1, "input_data_size": 16938325, "oldest_snapshot_seqno": -1}
Oct  8 06:07:41 np0005475493 ceph-mon[73572]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 22] Generated table #47: 5419 keys, 14760604 bytes, temperature: kUnknown
Oct  8 06:07:41 np0005475493 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759918061176700, "cf_name": "default", "job": 22, "event": "table_file_creation", "file_number": 47, "file_size": 14760604, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14722292, "index_size": 23674, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13573, "raw_key_size": 136667, "raw_average_key_size": 25, "raw_value_size": 14622126, "raw_average_value_size": 2698, "num_data_blocks": 976, "num_entries": 5419, "num_filter_entries": 5419, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759916581, "oldest_key_time": 0, "file_creation_time": 1759918061, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fe81d9b-468a-4413-adf1-4e4bd83383d4", "db_session_id": "KN4HYS7VUCE6V85JIQOU", "orig_file_number": 47, "seqno_to_time_mapping": "N/A"}}
Oct  8 06:07:41 np0005475493 ceph-mon[73572]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  8 06:07:41 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:07:41.177025) [db/compaction/compaction_job.cc:1663] [default] [JOB 22] Compacted 1@0 + 1@6 files to L6 => 14760604 bytes
Oct  8 06:07:41 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:07:41.203849) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 103.9 rd, 90.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.9, 12.3 +0.0 blob) out(14.1 +0.0 blob), read-write-amplify(7.8) write-amplify(3.6) OK, records in: 5937, records dropped: 518 output_compression: NoCompression
Oct  8 06:07:41 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:07:41.203891) EVENT_LOG_v1 {"time_micros": 1759918061203876, "job": 22, "event": "compaction_finished", "compaction_time_micros": 163095, "compaction_time_cpu_micros": 26132, "output_level": 6, "num_output_files": 1, "total_output_size": 14760604, "num_input_records": 5937, "num_output_records": 5419, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  8 06:07:41 np0005475493 ceph-mon[73572]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000046.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  8 06:07:41 np0005475493 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759918061205857, "job": 22, "event": "table_file_deletion", "file_number": 46}
Oct  8 06:07:41 np0005475493 ceph-mon[73572]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000044.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  8 06:07:41 np0005475493 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759918061208739, "job": 22, "event": "table_file_deletion", "file_number": 44}
Oct  8 06:07:41 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:07:41.013663) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  8 06:07:41 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:07:41.208865) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  8 06:07:41 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:07:41.208870) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  8 06:07:41 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:07:41.208872) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  8 06:07:41 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:07:41.208874) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  8 06:07:41 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:07:41.208875) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  8 06:07:41 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:07:41 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac94002ee0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:07:42 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v685: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Oct  8 06:07:42 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:07:42 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca00089d0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:07:42 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:07:42 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.002000065s ======
Oct  8 06:07:42 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:07:42.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000065s
Oct  8 06:07:42 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:07:42 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca00089d0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:07:42 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:07:42 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:07:42 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:07:42.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:07:43 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:07:43 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c002b10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:07:44 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:07:44 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v686: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Oct  8 06:07:44 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:07:44 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac94002ee0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:07:44 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:07:44 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:07:44 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:07:44.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:07:44 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:07:44 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca00089d0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:07:44 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:07:44 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:07:44 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:07:44.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:07:45 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:07:45] "GET /metrics HTTP/1.1" 200 48347 "" "Prometheus/2.51.0"
Oct  8 06:07:45 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:07:45] "GET /metrics HTTP/1.1" 200 48347 "" "Prometheus/2.51.0"
Oct  8 06:07:45 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:07:45 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca00089d0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:07:46 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v687: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct  8 06:07:46 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:07:46 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c002b10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:07:46 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:07:46 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:07:46 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:07:46.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:07:46 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:07:46 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c002b10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:07:46 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:07:46 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:07:46 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:07:46.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:07:47 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:07:47.105Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:07:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] Optimize plan auto_2025-10-08_10:07:47
Oct  8 06:07:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  8 06:07:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] do_upmap
Oct  8 06:07:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'default.rgw.control', 'backups', 'cephfs.cephfs.data', '.mgr', 'volumes', 'vms', 'default.rgw.log', 'images', '.rgw.root', '.nfs', 'default.rgw.meta']
Oct  8 06:07:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] prepared 0/10 upmap changes
Oct  8 06:07:47 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  8 06:07:47 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  8 06:07:47 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 06:07:47 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 06:07:47 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:07:47 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca00089d0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:07:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] _maybe_adjust
Oct  8 06:07:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:07:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  8 06:07:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:07:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 06:07:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:07:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 06:07:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:07:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 06:07:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:07:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 06:07:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:07:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  8 06:07:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:07:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 06:07:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:07:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct  8 06:07:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:07:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  8 06:07:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:07:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 06:07:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:07:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  8 06:07:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:07:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  8 06:07:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  8 06:07:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  8 06:07:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  8 06:07:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  8 06:07:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  8 06:07:48 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v688: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct  8 06:07:48 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 06:07:48 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 06:07:48 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 06:07:48 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 06:07:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  8 06:07:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  8 06:07:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  8 06:07:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  8 06:07:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  8 06:07:48 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:07:48 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90003100 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:07:48 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:07:48 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:07:48 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:07:48.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:07:48 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:07:48 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca00089d0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:07:48 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:07:48 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:07:48 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:07:48.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:07:48 np0005475493 podman[265984]: 2025-10-08 10:07:48.888268646 +0000 UTC m=+0.052372933 container health_status 2ac58bf9d2c7421f24478941b0f23af12cc30298ac84467db16bf52ecb1157c3 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid)
Oct  8 06:07:49 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:07:49 np0005475493 ceph-mgr[73869]: [devicehealth INFO root] Check health
Oct  8 06:07:49 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:07:49 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c002b10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:07:50 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v689: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Oct  8 06:07:50 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:07:50 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac94003800 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:07:50 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:07:50 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:07:50 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:07:50.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:07:50 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:07:50 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90003100 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:07:50 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:07:50 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:07:50 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:07:50.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:07:51 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:07:51 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca0009ec0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:07:52 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v690: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct  8 06:07:52 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:07:52 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c003c10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:07:52 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:07:52 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:07:52 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:07:52.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:07:52 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:07:52 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac94003800 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:07:52 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:07:52 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:07:52 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:07:52.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:07:53 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:07:53 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90003100 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:07:54 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:07:54 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v691: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Oct  8 06:07:54 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:07:54 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca0009ec0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:07:54 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:07:54 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:07:54 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:07:54.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:07:54 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:07:54 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c003c10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:07:54 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:07:54 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:07:54 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:07:54.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:07:55 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:07:55] "GET /metrics HTTP/1.1" 200 48345 "" "Prometheus/2.51.0"
Oct  8 06:07:55 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:07:55] "GET /metrics HTTP/1.1" 200 48345 "" "Prometheus/2.51.0"
Oct  8 06:07:55 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:07:55 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac94003800 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:07:56 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v692: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct  8 06:07:56 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:07:56 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90003100 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:07:56 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:07:56 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:07:56 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:07:56.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:07:56 np0005475493 systemd[1]: packagekit.service: Deactivated successfully.
Oct  8 06:07:56 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:07:56 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca0009ec0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:07:56 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:07:56 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:07:56 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:07:56.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:07:57 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:07:57.106Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct  8 06:07:57 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:07:57.106Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct  8 06:07:57 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:07:57.107Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:07:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:07:57.406 163175 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  8 06:07:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:07:57.406 163175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  8 06:07:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:07:57.406 163175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  8 06:07:57 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:07:57 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c003c10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:07:58 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v693: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct  8 06:07:58 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:07:58 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac94003800 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:07:58 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:07:58 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:07:58 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:07:58.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:07:58 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:07:58 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90003100 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:07:58 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:07:58 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 06:07:58 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:07:58.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 06:07:59 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:07:59 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:07:59 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca0009ec0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:08:00 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v694: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Oct  8 06:08:00 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:08:00 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c003c10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:08:00 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:08:00 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 06:08:00 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:08:00.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 06:08:00 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:08:00 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c003c10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:08:00 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:08:00 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:08:00 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:08:00.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:08:00 np0005475493 podman[266017]: 2025-10-08 10:08:00.920729315 +0000 UTC m=+0.081370320 container health_status 750e81e9592502f2fa35d047c8ae9bd75eb38ed415148db558454f5281397e7f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  8 06:08:01 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:08:01 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90003100 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:08:02 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v695: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct  8 06:08:02 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:08:02 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca0009ec0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:08:02 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:08:02 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 06:08:02 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:08:02.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 06:08:02 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:08:02 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca0009ec0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:08:02 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  8 06:08:02 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  8 06:08:02 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:08:02 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:08:02 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:08:02.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:08:03 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:08:03 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac94003800 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:08:04 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:08:04 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v696: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Oct  8 06:08:04 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:08:04 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90003100 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:08:04 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:08:04 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:08:04 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:08:04.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:08:04 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:08:04 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90003100 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:08:04 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:08:04 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:08:04 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:08:04.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:08:05 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:08:05] "GET /metrics HTTP/1.1" 200 48347 "" "Prometheus/2.51.0"
Oct  8 06:08:05 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:08:05] "GET /metrics HTTP/1.1" 200 48347 "" "Prometheus/2.51.0"
Oct  8 06:08:05 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:08:05 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca0009ec0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:08:06 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v697: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct  8 06:08:06 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:08:06 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac78000d90 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:08:06 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:08:06 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 06:08:06 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:08:06.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 06:08:06 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:08:06 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac70000d00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:08:06 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:08:06 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:08:06 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:08:06.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:08:07 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:08:07.107Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:08:07 np0005475493 podman[266078]: 2025-10-08 10:08:07.902286323 +0000 UTC m=+0.058776890 container health_status 96c17e36c3e57b043a764fc4d64e20610b8683e4881cc1857643eca7aeb37784 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2)
Oct  8 06:08:07 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:08:07 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90003100 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:08:08 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v698: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct  8 06:08:08 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:08:08 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca0009ec0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:08:08 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:08:08 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:08:08 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:08:08.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:08:08 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:08:08 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac780018b0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:08:08 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:08:08 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:08:08 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:08:08.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:08:09 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:08:09 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-haproxy-nfs-cephfs-compute-0-cwhopp[96588]: [WARNING] 280/100809 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct  8 06:08:09 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[103738]: logger=cleanup t=2025-10-08T10:08:09.522414799Z level=info msg="Completed cleanup jobs" duration=86.99335ms
Oct  8 06:08:09 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[103738]: logger=plugins.update.checker t=2025-10-08T10:08:09.573661374Z level=info msg="Update check succeeded" duration=58.483919ms
Oct  8 06:08:09 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[103738]: logger=grafana.update.checker t=2025-10-08T10:08:09.576582359Z level=info msg="Update check succeeded" duration=61.750856ms
Oct  8 06:08:09 np0005475493 podman[266099]: 2025-10-08 10:08:09.897135746 +0000 UTC m=+0.056861908 container health_status 1ece418ccdf19a8019e8be8e665c938dc3c333817a211973536e2416f095c311 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct  8 06:08:09 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:08:09 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac70001820 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:08:10 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v699: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Oct  8 06:08:10 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:08:10 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90003100 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:08:10 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:08:10 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 06:08:10 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:08:10.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 06:08:10 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:08:10 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90003100 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:08:10 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:08:10 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:08:10 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:08:10.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:08:11 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:08:11 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac780018b0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:08:12 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v700: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct  8 06:08:12 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:08:12 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac70001820 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:08:12 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:08:12 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:08:12 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:08:12.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:08:12 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:08:12 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca0009ec0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:08:12 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:08:12 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:08:12 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:08:12.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:08:13 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:08:13 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90003100 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:08:14 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:08:14 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v701: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct  8 06:08:14 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:08:14 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90003100 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:08:14 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:08:14 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:08:14 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:08:14.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:08:14 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:08:14 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac70001820 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:08:14 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:08:14 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:08:14 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:08:14.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:08:15 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:08:15] "GET /metrics HTTP/1.1" 200 48347 "" "Prometheus/2.51.0"
Oct  8 06:08:15 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:08:15] "GET /metrics HTTP/1.1" 200 48347 "" "Prometheus/2.51.0"
Oct  8 06:08:15 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:08:15 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca0009ec0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:08:16 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v702: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct  8 06:08:16 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:08:16 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac78002950 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:08:16 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:08:16 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:08:16 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:08:16.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:08:16 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:08:16 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90003100 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:08:16 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:08:16 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:08:16 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:08:16.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:08:17 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:08:17.108Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:08:17 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  8 06:08:17 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  8 06:08:17 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 06:08:17 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 06:08:17 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:08:17 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac70002cb0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:08:18 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v703: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct  8 06:08:18 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 06:08:18 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 06:08:18 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 06:08:18 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 06:08:18 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:08:18 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca0009ec0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:08:18 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:08:18 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:08:18 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:08:18.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:08:18 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:08:18 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  8 06:08:18 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:08:18 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac78002950 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:08:18 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:08:18 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:08:18 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:08:18.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:08:19 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:08:19 np0005475493 podman[266129]: 2025-10-08 10:08:19.893702188 +0000 UTC m=+0.054233894 container health_status 2ac58bf9d2c7421f24478941b0f23af12cc30298ac84467db16bf52ecb1157c3 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=iscsid, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  8 06:08:19 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:08:19 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90003100 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:08:20 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v704: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Oct  8 06:08:20 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:08:20 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac70002cb0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:08:20 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:08:20 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:08:20 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:08:20.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:08:20 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Oct  8 06:08:20 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2658350131' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  8 06:08:20 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Oct  8 06:08:20 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2658350131' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  8 06:08:20 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:08:20 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca0009ec0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:08:20 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:08:20 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 06:08:20 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:08:20.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 06:08:21 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:08:21 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  8 06:08:21 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:08:21 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 06:08:21 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:08:21 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac78002950 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:08:22 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v705: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Oct  8 06:08:22 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:08:22 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90003100 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:08:22 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:08:22 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:08:22 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:08:22.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:08:22 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:08:22 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac70002cb0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:08:22 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:08:22 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:08:22 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:08:22.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:08:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:08:23 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca0009ec0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:08:24 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:08:24 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v706: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 938 B/s wr, 3 op/s
Oct  8 06:08:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:08:24 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac78003a50 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:08:24 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:08:24 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:08:24 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:08:24.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:08:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:08:24 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct  8 06:08:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:08:24 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90003100 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:08:24 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:08:24 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:08:24 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:08:24.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:08:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:08:25] "GET /metrics HTTP/1.1" 200 48344 "" "Prometheus/2.51.0"
Oct  8 06:08:25 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:08:25] "GET /metrics HTTP/1.1" 200 48344 "" "Prometheus/2.51.0"
Oct  8 06:08:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:08:25 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac70003db0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:08:26 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v707: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 2 op/s
Oct  8 06:08:26 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:08:26 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca0009ec0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:08:26 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:08:26 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:08:26 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:08:26.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:08:26 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:08:26 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac78003a50 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:08:26 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:08:26 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:08:26 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:08:26.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:08:27 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:08:27.108Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:08:27 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:08:27 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90003100 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:08:28 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v708: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 2 op/s
Oct  8 06:08:28 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:08:28 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac70003db0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:08:28 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:08:28 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:08:28 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:08:28.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:08:28 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:08:28 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca0009ec0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:08:28 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:08:28 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:08:28 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:08:28.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:08:29 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:08:29 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:08:29 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac78003a50 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:08:30 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v709: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Oct  8 06:08:30 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:08:30 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90003100 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:08:30 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:08:30 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:08:30 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:08:30.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:08:30 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:08:30 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac70003db0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:08:30 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:08:30 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:08:30 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:08:30.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:08:31 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-haproxy-nfs-cephfs-compute-0-cwhopp[96588]: [WARNING] 280/100831 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 1ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct  8 06:08:31 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:08:31 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca0009ec0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:08:31 np0005475493 podman[266188]: 2025-10-08 10:08:31.991115646 +0000 UTC m=+0.157800070 container health_status 750e81e9592502f2fa35d047c8ae9bd75eb38ed415148db558454f5281397e7f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  8 06:08:32 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v710: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Oct  8 06:08:32 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:08:32 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac78003a50 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:08:32 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:08:32 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:08:32 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:08:32.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:08:32 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  8 06:08:32 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  8 06:08:32 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:08:32 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90003100 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:08:32 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:08:32 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:08:32 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:08:32.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:08:33 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:08:33 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac70003db0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:08:34 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:08:34 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v711: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Oct  8 06:08:34 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:08:34 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca0009ec0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:08:34 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:08:34 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:08:34 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:08:34.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:08:34 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:08:34 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c001090 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:08:34 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:08:34 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:08:34 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:08:34.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:08:35 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:08:35] "GET /metrics HTTP/1.1" 200 48345 "" "Prometheus/2.51.0"
Oct  8 06:08:35 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:08:35] "GET /metrics HTTP/1.1" 200 48345 "" "Prometheus/2.51.0"
Oct  8 06:08:35 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:08:35 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90003100 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:08:36 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v712: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Oct  8 06:08:36 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:08:36 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90003100 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:08:36 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:08:36 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:08:36 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:08:36.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:08:36 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:08:36 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac94001080 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:08:36 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:08:36 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:08:36 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:08:36.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:08:37 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:08:37.109Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct  8 06:08:37 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:08:37.109Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:08:37 np0005475493 nova_compute[262220]: 2025-10-08 10:08:37.452 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:08:37 np0005475493 nova_compute[262220]: 2025-10-08 10:08:37.452 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:08:37 np0005475493 nova_compute[262220]: 2025-10-08 10:08:37.452 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:08:37 np0005475493 nova_compute[262220]: 2025-10-08 10:08:37.453 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  8 06:08:37 np0005475493 nova_compute[262220]: 2025-10-08 10:08:37.882 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:08:37 np0005475493 nova_compute[262220]: 2025-10-08 10:08:37.899 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:08:37 np0005475493 nova_compute[262220]: 2025-10-08 10:08:37.899 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:08:37 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:08:37 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c001090 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:08:38 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v713: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Oct  8 06:08:38 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:08:38 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac70003db0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:08:38 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:08:38 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:08:38 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:08:38.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:08:38 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:08:38 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90003100 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:08:38 np0005475493 podman[266223]: 2025-10-08 10:08:38.881454707 +0000 UTC m=+0.047274899 container health_status 96c17e36c3e57b043a764fc4d64e20610b8683e4881cc1857643eca7aeb37784 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct  8 06:08:38 np0005475493 nova_compute[262220]: 2025-10-08 10:08:38.885 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:08:38 np0005475493 nova_compute[262220]: 2025-10-08 10:08:38.886 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  8 06:08:38 np0005475493 nova_compute[262220]: 2025-10-08 10:08:38.886 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  8 06:08:38 np0005475493 nova_compute[262220]: 2025-10-08 10:08:38.909 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  8 06:08:38 np0005475493 nova_compute[262220]: 2025-10-08 10:08:38.910 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:08:38 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:08:38 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:08:38 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:08:38.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:08:38 np0005475493 nova_compute[262220]: 2025-10-08 10:08:38.934 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  8 06:08:38 np0005475493 nova_compute[262220]: 2025-10-08 10:08:38.934 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  8 06:08:38 np0005475493 nova_compute[262220]: 2025-10-08 10:08:38.934 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  8 06:08:38 np0005475493 nova_compute[262220]: 2025-10-08 10:08:38.935 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  8 06:08:38 np0005475493 nova_compute[262220]: 2025-10-08 10:08:38.935 2 DEBUG oslo_concurrency.processutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  8 06:08:39 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:08:39 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct  8 06:08:39 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2761483830' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  8 06:08:39 np0005475493 nova_compute[262220]: 2025-10-08 10:08:39.398 2 DEBUG oslo_concurrency.processutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  8 06:08:39 np0005475493 nova_compute[262220]: 2025-10-08 10:08:39.539 2 WARNING nova.virt.libvirt.driver [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  8 06:08:39 np0005475493 nova_compute[262220]: 2025-10-08 10:08:39.540 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4885MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  8 06:08:39 np0005475493 nova_compute[262220]: 2025-10-08 10:08:39.540 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  8 06:08:39 np0005475493 nova_compute[262220]: 2025-10-08 10:08:39.540 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  8 06:08:39 np0005475493 nova_compute[262220]: 2025-10-08 10:08:39.598 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  8 06:08:39 np0005475493 nova_compute[262220]: 2025-10-08 10:08:39.599 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  8 06:08:39 np0005475493 nova_compute[262220]: 2025-10-08 10:08:39.621 2 DEBUG oslo_concurrency.processutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  8 06:08:39 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:08:39 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac94001080 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:08:40 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct  8 06:08:40 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/574481894' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  8 06:08:40 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v714: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Oct  8 06:08:40 np0005475493 nova_compute[262220]: 2025-10-08 10:08:40.065 2 DEBUG oslo_concurrency.processutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  8 06:08:40 np0005475493 nova_compute[262220]: 2025-10-08 10:08:40.071 2 DEBUG nova.compute.provider_tree [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Inventory has not changed in ProviderTree for provider: 62e4b021-d3ae-43f9-883d-805e2c7d21a2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  8 06:08:40 np0005475493 nova_compute[262220]: 2025-10-08 10:08:40.086 2 DEBUG nova.scheduler.client.report [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Inventory has not changed for provider 62e4b021-d3ae-43f9-883d-805e2c7d21a2 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  8 06:08:40 np0005475493 nova_compute[262220]: 2025-10-08 10:08:40.088 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  8 06:08:40 np0005475493 nova_compute[262220]: 2025-10-08 10:08:40.088 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.548s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  8 06:08:40 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:08:40 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c001090 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:08:40 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:08:40 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:08:40 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:08:40.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:08:40 np0005475493 podman[266313]: 2025-10-08 10:08:40.32449738 +0000 UTC m=+0.060739173 container health_status 1ece418ccdf19a8019e8be8e665c938dc3c333817a211973536e2416f095c311 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct  8 06:08:40 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:08:40 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac70003db0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:08:40 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Oct  8 06:08:40 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct  8 06:08:40 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:08:40 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:08:40 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:08:40.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:08:41 np0005475493 nova_compute[262220]: 2025-10-08 10:08:41.065 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:08:41 np0005475493 nova_compute[262220]: 2025-10-08 10:08:41.065 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:08:41 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct  8 06:08:41 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:08:41 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90003100 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:08:42 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v715: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct  8 06:08:42 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:08:42 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac94002480 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:08:42 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:08:42 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:08:42 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:08:42.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:08:42 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct  8 06:08:42 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:08:42 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct  8 06:08:42 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:08:42 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct  8 06:08:42 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:08:42 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct  8 06:08:42 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:08:42 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:08:42 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:08:42 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:08:42 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:08:42 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:08:42 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c001090 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:08:42 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:08:42 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:08:42 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:08:42.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:08:43 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0)
Oct  8 06:08:43 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct  8 06:08:43 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0)
Oct  8 06:08:43 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Oct  8 06:08:43 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  8 06:08:43 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  8 06:08:43 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct  8 06:08:43 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  8 06:08:43 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct  8 06:08:43 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:08:43 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct  8 06:08:43 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:08:43 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct  8 06:08:43 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  8 06:08:43 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct  8 06:08:43 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  8 06:08:43 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  8 06:08:43 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  8 06:08:43 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct  8 06:08:43 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Oct  8 06:08:43 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  8 06:08:43 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:08:43 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:08:43 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  8 06:08:43 np0005475493 podman[266484]: 2025-10-08 10:08:43.866942874 +0000 UTC m=+0.046887456 container create c33b8b83622effa95987ae698b65f0f5a4c697e29f58eb500b94bf0992fda9f4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_greider, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Oct  8 06:08:43 np0005475493 systemd[1]: Started libpod-conmon-c33b8b83622effa95987ae698b65f0f5a4c697e29f58eb500b94bf0992fda9f4.scope.
Oct  8 06:08:43 np0005475493 podman[266484]: 2025-10-08 10:08:43.843546928 +0000 UTC m=+0.023491530 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 06:08:43 np0005475493 systemd[1]: Started libcrun container.
Oct  8 06:08:43 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:08:43 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac70003db0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:08:43 np0005475493 podman[266484]: 2025-10-08 10:08:43.993429691 +0000 UTC m=+0.173374293 container init c33b8b83622effa95987ae698b65f0f5a4c697e29f58eb500b94bf0992fda9f4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_greider, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  8 06:08:44 np0005475493 podman[266484]: 2025-10-08 10:08:44.003289089 +0000 UTC m=+0.183233671 container start c33b8b83622effa95987ae698b65f0f5a4c697e29f58eb500b94bf0992fda9f4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_greider, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  8 06:08:44 np0005475493 podman[266484]: 2025-10-08 10:08:44.008354002 +0000 UTC m=+0.188298584 container attach c33b8b83622effa95987ae698b65f0f5a4c697e29f58eb500b94bf0992fda9f4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_greider, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct  8 06:08:44 np0005475493 romantic_greider[266501]: 167 167
Oct  8 06:08:44 np0005475493 systemd[1]: libpod-c33b8b83622effa95987ae698b65f0f5a4c697e29f58eb500b94bf0992fda9f4.scope: Deactivated successfully.
Oct  8 06:08:44 np0005475493 podman[266484]: 2025-10-08 10:08:44.013526139 +0000 UTC m=+0.193470721 container died c33b8b83622effa95987ae698b65f0f5a4c697e29f58eb500b94bf0992fda9f4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_greider, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  8 06:08:44 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:08:44 np0005475493 systemd[1]: var-lib-containers-storage-overlay-be3c5ee330fe83beed7cf990a70374fe136c8d58c66c2aedb1b834bef817019c-merged.mount: Deactivated successfully.
Oct  8 06:08:44 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v716: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct  8 06:08:44 np0005475493 podman[266484]: 2025-10-08 10:08:44.080453633 +0000 UTC m=+0.260398225 container remove c33b8b83622effa95987ae698b65f0f5a4c697e29f58eb500b94bf0992fda9f4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_greider, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct  8 06:08:44 np0005475493 systemd[1]: libpod-conmon-c33b8b83622effa95987ae698b65f0f5a4c697e29f58eb500b94bf0992fda9f4.scope: Deactivated successfully.
Oct  8 06:08:44 np0005475493 podman[266525]: 2025-10-08 10:08:44.259593491 +0000 UTC m=+0.043143136 container create cc9ccd93b8e1ebb06a70a10d6cfcb2936cf087db2a95588d07242895b5f995a4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_colden, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct  8 06:08:44 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:08:44 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90003100 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:08:44 np0005475493 systemd[1]: Started libpod-conmon-cc9ccd93b8e1ebb06a70a10d6cfcb2936cf087db2a95588d07242895b5f995a4.scope.
Oct  8 06:08:44 np0005475493 systemd[1]: Started libcrun container.
Oct  8 06:08:44 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88217652ede564aebecae3948d5e4d134eade25ded5549aa3febd96661e0bc8d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  8 06:08:44 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88217652ede564aebecae3948d5e4d134eade25ded5549aa3febd96661e0bc8d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 06:08:44 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88217652ede564aebecae3948d5e4d134eade25ded5549aa3febd96661e0bc8d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 06:08:44 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88217652ede564aebecae3948d5e4d134eade25ded5549aa3febd96661e0bc8d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  8 06:08:44 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88217652ede564aebecae3948d5e4d134eade25ded5549aa3febd96661e0bc8d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  8 06:08:44 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:08:44 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:08:44 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:08:44.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:08:44 np0005475493 podman[266525]: 2025-10-08 10:08:44.243300823 +0000 UTC m=+0.026850498 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 06:08:44 np0005475493 podman[266525]: 2025-10-08 10:08:44.360714337 +0000 UTC m=+0.144264002 container init cc9ccd93b8e1ebb06a70a10d6cfcb2936cf087db2a95588d07242895b5f995a4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_colden, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Oct  8 06:08:44 np0005475493 podman[266525]: 2025-10-08 10:08:44.369193381 +0000 UTC m=+0.152743026 container start cc9ccd93b8e1ebb06a70a10d6cfcb2936cf087db2a95588d07242895b5f995a4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_colden, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  8 06:08:44 np0005475493 podman[266525]: 2025-10-08 10:08:44.377414436 +0000 UTC m=+0.160964101 container attach cc9ccd93b8e1ebb06a70a10d6cfcb2936cf087db2a95588d07242895b5f995a4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_colden, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1)
Oct  8 06:08:44 np0005475493 dazzling_colden[266542]: --> passed data devices: 0 physical, 1 LVM
Oct  8 06:08:44 np0005475493 dazzling_colden[266542]: --> All data devices are unavailable
Oct  8 06:08:44 np0005475493 systemd[1]: libpod-cc9ccd93b8e1ebb06a70a10d6cfcb2936cf087db2a95588d07242895b5f995a4.scope: Deactivated successfully.
Oct  8 06:08:44 np0005475493 podman[266525]: 2025-10-08 10:08:44.73499591 +0000 UTC m=+0.518545585 container died cc9ccd93b8e1ebb06a70a10d6cfcb2936cf087db2a95588d07242895b5f995a4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_colden, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Oct  8 06:08:44 np0005475493 systemd[1]: var-lib-containers-storage-overlay-88217652ede564aebecae3948d5e4d134eade25ded5549aa3febd96661e0bc8d-merged.mount: Deactivated successfully.
Oct  8 06:08:44 np0005475493 podman[266525]: 2025-10-08 10:08:44.80339612 +0000 UTC m=+0.586945765 container remove cc9ccd93b8e1ebb06a70a10d6cfcb2936cf087db2a95588d07242895b5f995a4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_colden, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Oct  8 06:08:44 np0005475493 systemd[1]: libpod-conmon-cc9ccd93b8e1ebb06a70a10d6cfcb2936cf087db2a95588d07242895b5f995a4.scope: Deactivated successfully.
Oct  8 06:08:44 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:08:44 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac94002480 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:08:44 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:08:44 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:08:44 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:08:44.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:08:45 np0005475493 podman[266659]: 2025-10-08 10:08:45.428661952 +0000 UTC m=+0.039580300 container create ff81dbe37c26dcd359c4dfd9f68c95cb46a7740be5382983253d9f0aedd1f439 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_kowalevski, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  8 06:08:45 np0005475493 systemd[1]: Started libpod-conmon-ff81dbe37c26dcd359c4dfd9f68c95cb46a7740be5382983253d9f0aedd1f439.scope.
Oct  8 06:08:45 np0005475493 systemd[1]: Started libcrun container.
Oct  8 06:08:45 np0005475493 podman[266659]: 2025-10-08 10:08:45.412650845 +0000 UTC m=+0.023569223 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 06:08:45 np0005475493 podman[266659]: 2025-10-08 10:08:45.516893472 +0000 UTC m=+0.127811840 container init ff81dbe37c26dcd359c4dfd9f68c95cb46a7740be5382983253d9f0aedd1f439 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_kowalevski, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Oct  8 06:08:45 np0005475493 podman[266659]: 2025-10-08 10:08:45.52394392 +0000 UTC m=+0.134862268 container start ff81dbe37c26dcd359c4dfd9f68c95cb46a7740be5382983253d9f0aedd1f439 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_kowalevski, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  8 06:08:45 np0005475493 systemd[1]: libpod-ff81dbe37c26dcd359c4dfd9f68c95cb46a7740be5382983253d9f0aedd1f439.scope: Deactivated successfully.
Oct  8 06:08:45 np0005475493 podman[266659]: 2025-10-08 10:08:45.529958345 +0000 UTC m=+0.140876693 container attach ff81dbe37c26dcd359c4dfd9f68c95cb46a7740be5382983253d9f0aedd1f439 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_kowalevski, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid)
Oct  8 06:08:45 np0005475493 determined_kowalevski[266676]: 167 167
Oct  8 06:08:45 np0005475493 conmon[266676]: conmon ff81dbe37c26dcd359c4 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ff81dbe37c26dcd359c4dfd9f68c95cb46a7740be5382983253d9f0aedd1f439.scope/container/memory.events
Oct  8 06:08:45 np0005475493 podman[266659]: 2025-10-08 10:08:45.531830855 +0000 UTC m=+0.142749203 container died ff81dbe37c26dcd359c4dfd9f68c95cb46a7740be5382983253d9f0aedd1f439 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_kowalevski, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Oct  8 06:08:45 np0005475493 systemd[1]: var-lib-containers-storage-overlay-afe2181951b5189c6947a50381f8d73ab785587c791c6c934d1a6be3191d7b9c-merged.mount: Deactivated successfully.
Oct  8 06:08:45 np0005475493 podman[266659]: 2025-10-08 10:08:45.587336918 +0000 UTC m=+0.198255266 container remove ff81dbe37c26dcd359c4dfd9f68c95cb46a7740be5382983253d9f0aedd1f439 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_kowalevski, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct  8 06:08:45 np0005475493 systemd[1]: libpod-conmon-ff81dbe37c26dcd359c4dfd9f68c95cb46a7740be5382983253d9f0aedd1f439.scope: Deactivated successfully.
Oct  8 06:08:45 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:08:45] "GET /metrics HTTP/1.1" 200 48345 "" "Prometheus/2.51.0"
Oct  8 06:08:45 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:08:45] "GET /metrics HTTP/1.1" 200 48345 "" "Prometheus/2.51.0"
Oct  8 06:08:45 np0005475493 podman[266698]: 2025-10-08 10:08:45.793117907 +0000 UTC m=+0.060420363 container create 2130eb7f40969e3185df25d7e9c1746704e4361d1de4c87e52bfb836918f802e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_davinci, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct  8 06:08:45 np0005475493 systemd[1]: Started libpod-conmon-2130eb7f40969e3185df25d7e9c1746704e4361d1de4c87e52bfb836918f802e.scope.
Oct  8 06:08:45 np0005475493 podman[266698]: 2025-10-08 10:08:45.766614111 +0000 UTC m=+0.033916627 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 06:08:45 np0005475493 systemd[1]: Started libcrun container.
Oct  8 06:08:45 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8df95c32e4b5a50c21158f9051036e0110f0a9110131d879fc8a7d0af79beae3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  8 06:08:45 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8df95c32e4b5a50c21158f9051036e0110f0a9110131d879fc8a7d0af79beae3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 06:08:45 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8df95c32e4b5a50c21158f9051036e0110f0a9110131d879fc8a7d0af79beae3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 06:08:45 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8df95c32e4b5a50c21158f9051036e0110f0a9110131d879fc8a7d0af79beae3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  8 06:08:45 np0005475493 podman[266698]: 2025-10-08 10:08:45.88669407 +0000 UTC m=+0.153996576 container init 2130eb7f40969e3185df25d7e9c1746704e4361d1de4c87e52bfb836918f802e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_davinci, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  8 06:08:45 np0005475493 podman[266698]: 2025-10-08 10:08:45.893508671 +0000 UTC m=+0.160811137 container start 2130eb7f40969e3185df25d7e9c1746704e4361d1de4c87e52bfb836918f802e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_davinci, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Oct  8 06:08:45 np0005475493 podman[266698]: 2025-10-08 10:08:45.900223387 +0000 UTC m=+0.167525853 container attach 2130eb7f40969e3185df25d7e9c1746704e4361d1de4c87e52bfb836918f802e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_davinci, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct  8 06:08:45 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:08:45 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c0030a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:08:46 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v717: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct  8 06:08:46 np0005475493 unruffled_davinci[266715]: {
Oct  8 06:08:46 np0005475493 unruffled_davinci[266715]:    "1": [
Oct  8 06:08:46 np0005475493 unruffled_davinci[266715]:        {
Oct  8 06:08:46 np0005475493 unruffled_davinci[266715]:            "devices": [
Oct  8 06:08:46 np0005475493 unruffled_davinci[266715]:                "/dev/loop3"
Oct  8 06:08:46 np0005475493 unruffled_davinci[266715]:            ],
Oct  8 06:08:46 np0005475493 unruffled_davinci[266715]:            "lv_name": "ceph_lv0",
Oct  8 06:08:46 np0005475493 unruffled_davinci[266715]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  8 06:08:46 np0005475493 unruffled_davinci[266715]:            "lv_size": "21470642176",
Oct  8 06:08:46 np0005475493 unruffled_davinci[266715]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=787292cc-8154-50c4-9e00-e9be3e817149,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=85fe3e7b-5e0f-4a19-934c-310215b2e933,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct  8 06:08:46 np0005475493 unruffled_davinci[266715]:            "lv_uuid": "znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj",
Oct  8 06:08:46 np0005475493 unruffled_davinci[266715]:            "name": "ceph_lv0",
Oct  8 06:08:46 np0005475493 unruffled_davinci[266715]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  8 06:08:46 np0005475493 unruffled_davinci[266715]:            "tags": {
Oct  8 06:08:46 np0005475493 unruffled_davinci[266715]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  8 06:08:46 np0005475493 unruffled_davinci[266715]:                "ceph.block_uuid": "znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj",
Oct  8 06:08:46 np0005475493 unruffled_davinci[266715]:                "ceph.cephx_lockbox_secret": "",
Oct  8 06:08:46 np0005475493 unruffled_davinci[266715]:                "ceph.cluster_fsid": "787292cc-8154-50c4-9e00-e9be3e817149",
Oct  8 06:08:46 np0005475493 unruffled_davinci[266715]:                "ceph.cluster_name": "ceph",
Oct  8 06:08:46 np0005475493 unruffled_davinci[266715]:                "ceph.crush_device_class": "",
Oct  8 06:08:46 np0005475493 unruffled_davinci[266715]:                "ceph.encrypted": "0",
Oct  8 06:08:46 np0005475493 unruffled_davinci[266715]:                "ceph.osd_fsid": "85fe3e7b-5e0f-4a19-934c-310215b2e933",
Oct  8 06:08:46 np0005475493 unruffled_davinci[266715]:                "ceph.osd_id": "1",
Oct  8 06:08:46 np0005475493 unruffled_davinci[266715]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  8 06:08:46 np0005475493 unruffled_davinci[266715]:                "ceph.type": "block",
Oct  8 06:08:46 np0005475493 unruffled_davinci[266715]:                "ceph.vdo": "0",
Oct  8 06:08:46 np0005475493 unruffled_davinci[266715]:                "ceph.with_tpm": "0"
Oct  8 06:08:46 np0005475493 unruffled_davinci[266715]:            },
Oct  8 06:08:46 np0005475493 unruffled_davinci[266715]:            "type": "block",
Oct  8 06:08:46 np0005475493 unruffled_davinci[266715]:            "vg_name": "ceph_vg0"
Oct  8 06:08:46 np0005475493 unruffled_davinci[266715]:        }
Oct  8 06:08:46 np0005475493 unruffled_davinci[266715]:    ]
Oct  8 06:08:46 np0005475493 unruffled_davinci[266715]: }
Oct  8 06:08:46 np0005475493 systemd[1]: libpod-2130eb7f40969e3185df25d7e9c1746704e4361d1de4c87e52bfb836918f802e.scope: Deactivated successfully.
Oct  8 06:08:46 np0005475493 podman[266698]: 2025-10-08 10:08:46.196741138 +0000 UTC m=+0.464043604 container died 2130eb7f40969e3185df25d7e9c1746704e4361d1de4c87e52bfb836918f802e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_davinci, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Oct  8 06:08:46 np0005475493 systemd[1]: var-lib-containers-storage-overlay-8df95c32e4b5a50c21158f9051036e0110f0a9110131d879fc8a7d0af79beae3-merged.mount: Deactivated successfully.
Oct  8 06:08:46 np0005475493 podman[266698]: 2025-10-08 10:08:46.260290661 +0000 UTC m=+0.527593137 container remove 2130eb7f40969e3185df25d7e9c1746704e4361d1de4c87e52bfb836918f802e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_davinci, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct  8 06:08:46 np0005475493 systemd[1]: libpod-conmon-2130eb7f40969e3185df25d7e9c1746704e4361d1de4c87e52bfb836918f802e.scope: Deactivated successfully.
Oct  8 06:08:46 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:08:46 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c0030a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:08:46 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:08:46 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:08:46 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:08:46.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:08:46 np0005475493 podman[266855]: 2025-10-08 10:08:46.839332069 +0000 UTC m=+0.071553452 container create 56f2bcf0900a2d937a08557b26bf7811114708b0c28297bbef79203a54464b85 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_galois, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct  8 06:08:46 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:08:46 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac94002480 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:08:46 np0005475493 podman[266855]: 2025-10-08 10:08:46.790103249 +0000 UTC m=+0.022324662 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 06:08:46 np0005475493 systemd[1]: Started libpod-conmon-56f2bcf0900a2d937a08557b26bf7811114708b0c28297bbef79203a54464b85.scope.
Oct  8 06:08:46 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:08:46 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:08:46 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:08:46.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:08:46 np0005475493 systemd[1]: Started libcrun container.
Oct  8 06:08:46 np0005475493 podman[266855]: 2025-10-08 10:08:46.955522604 +0000 UTC m=+0.187744007 container init 56f2bcf0900a2d937a08557b26bf7811114708b0c28297bbef79203a54464b85 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_galois, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct  8 06:08:46 np0005475493 podman[266855]: 2025-10-08 10:08:46.961884368 +0000 UTC m=+0.194105772 container start 56f2bcf0900a2d937a08557b26bf7811114708b0c28297bbef79203a54464b85 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_galois, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1)
Oct  8 06:08:46 np0005475493 zen_galois[266871]: 167 167
Oct  8 06:08:46 np0005475493 systemd[1]: libpod-56f2bcf0900a2d937a08557b26bf7811114708b0c28297bbef79203a54464b85.scope: Deactivated successfully.
Oct  8 06:08:46 np0005475493 conmon[266871]: conmon 56f2bcf0900a2d937a08 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-56f2bcf0900a2d937a08557b26bf7811114708b0c28297bbef79203a54464b85.scope/container/memory.events
Oct  8 06:08:46 np0005475493 podman[266855]: 2025-10-08 10:08:46.969292268 +0000 UTC m=+0.201513671 container attach 56f2bcf0900a2d937a08557b26bf7811114708b0c28297bbef79203a54464b85 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_galois, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Oct  8 06:08:46 np0005475493 podman[266855]: 2025-10-08 10:08:46.969653779 +0000 UTC m=+0.201875192 container died 56f2bcf0900a2d937a08557b26bf7811114708b0c28297bbef79203a54464b85 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_galois, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  8 06:08:47 np0005475493 systemd[1]: var-lib-containers-storage-overlay-cff83e4e7b7111cd13993edb31bbbeb04e8ebbd8e70d314c0c74e2f0c2c55791-merged.mount: Deactivated successfully.
Oct  8 06:08:47 np0005475493 podman[266855]: 2025-10-08 10:08:47.022310511 +0000 UTC m=+0.254531884 container remove 56f2bcf0900a2d937a08557b26bf7811114708b0c28297bbef79203a54464b85 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_galois, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct  8 06:08:47 np0005475493 systemd[1]: libpod-conmon-56f2bcf0900a2d937a08557b26bf7811114708b0c28297bbef79203a54464b85.scope: Deactivated successfully.
Oct  8 06:08:47 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:08:47.110Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct  8 06:08:47 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:08:47.111Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct  8 06:08:47 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:08:47.111Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct  8 06:08:47 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:08:47 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=404 latency=0.003000098s ======
Oct  8 06:08:47 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:08:47.123 +0000] "GET /info HTTP/1.1" 404 152 - "python-urllib3/1.26.5" - latency=0.003000098s
Oct  8 06:08:47 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:08:47 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 06:08:47 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - - [08/Oct/2025:10:08:47.139 +0000] "GET /swift/healthcheck HTTP/1.1" 200 0 - "python-urllib3/1.26.5" - latency=0.001000032s
Oct  8 06:08:47 np0005475493 podman[266897]: 2025-10-08 10:08:47.193264575 +0000 UTC m=+0.044155778 container create 8e5277bb753ff060d9258a2c04084c96d0184ea183261716a66084656a1a72b5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_goodall, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  8 06:08:47 np0005475493 systemd[1]: Started libpod-conmon-8e5277bb753ff060d9258a2c04084c96d0184ea183261716a66084656a1a72b5.scope.
Oct  8 06:08:47 np0005475493 systemd[1]: Started libcrun container.
Oct  8 06:08:47 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c263d62481d85d3a63ac8098acd147d7c93f09e5ea5f47a88b98c104d393020f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  8 06:08:47 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c263d62481d85d3a63ac8098acd147d7c93f09e5ea5f47a88b98c104d393020f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 06:08:47 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c263d62481d85d3a63ac8098acd147d7c93f09e5ea5f47a88b98c104d393020f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 06:08:47 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c263d62481d85d3a63ac8098acd147d7c93f09e5ea5f47a88b98c104d393020f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  8 06:08:47 np0005475493 podman[266897]: 2025-10-08 10:08:47.170371285 +0000 UTC m=+0.021262498 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 06:08:47 np0005475493 podman[266897]: 2025-10-08 10:08:47.270434448 +0000 UTC m=+0.121325681 container init 8e5277bb753ff060d9258a2c04084c96d0184ea183261716a66084656a1a72b5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_goodall, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  8 06:08:47 np0005475493 podman[266897]: 2025-10-08 10:08:47.277491506 +0000 UTC m=+0.128382709 container start 8e5277bb753ff060d9258a2c04084c96d0184ea183261716a66084656a1a72b5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_goodall, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default)
Oct  8 06:08:47 np0005475493 podman[266897]: 2025-10-08 10:08:47.31383084 +0000 UTC m=+0.164722083 container attach 8e5277bb753ff060d9258a2c04084c96d0184ea183261716a66084656a1a72b5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_goodall, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  8 06:08:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] Optimize plan auto_2025-10-08_10:08:47
Oct  8 06:08:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  8 06:08:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] do_upmap
Oct  8 06:08:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] pools ['default.rgw.log', 'default.rgw.control', '.rgw.root', 'backups', 'vms', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', '.mgr', 'images', '.nfs', 'volumes', 'default.rgw.meta']
Oct  8 06:08:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] prepared 0/10 upmap changes
Oct  8 06:08:47 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  8 06:08:47 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  8 06:08:47 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 06:08:47 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 06:08:47 np0005475493 lvm[266988]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct  8 06:08:47 np0005475493 lvm[266988]: VG ceph_vg0 finished
Oct  8 06:08:47 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:08:47 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90004200 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:08:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] _maybe_adjust
Oct  8 06:08:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:08:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  8 06:08:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:08:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 06:08:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:08:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 06:08:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:08:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 06:08:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:08:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 06:08:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:08:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  8 06:08:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:08:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 06:08:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:08:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct  8 06:08:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:08:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  8 06:08:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:08:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 06:08:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:08:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  8 06:08:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:08:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  8 06:08:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  8 06:08:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  8 06:08:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  8 06:08:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  8 06:08:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  8 06:08:48 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v718: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct  8 06:08:48 np0005475493 hungry_goodall[266914]: {}
Oct  8 06:08:48 np0005475493 systemd[1]: libpod-8e5277bb753ff060d9258a2c04084c96d0184ea183261716a66084656a1a72b5.scope: Deactivated successfully.
Oct  8 06:08:48 np0005475493 systemd[1]: libpod-8e5277bb753ff060d9258a2c04084c96d0184ea183261716a66084656a1a72b5.scope: Consumed 1.152s CPU time.
Oct  8 06:08:48 np0005475493 podman[266897]: 2025-10-08 10:08:48.088654384 +0000 UTC m=+0.939545607 container died 8e5277bb753ff060d9258a2c04084c96d0184ea183261716a66084656a1a72b5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_goodall, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct  8 06:08:48 np0005475493 systemd[1]: var-lib-containers-storage-overlay-c263d62481d85d3a63ac8098acd147d7c93f09e5ea5f47a88b98c104d393020f-merged.mount: Deactivated successfully.
Oct  8 06:08:48 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 06:08:48 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 06:08:48 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 06:08:48 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 06:08:48 np0005475493 podman[266897]: 2025-10-08 10:08:48.164578837 +0000 UTC m=+1.015470040 container remove 8e5277bb753ff060d9258a2c04084c96d0184ea183261716a66084656a1a72b5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_goodall, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Oct  8 06:08:48 np0005475493 systemd[1]: libpod-conmon-8e5277bb753ff060d9258a2c04084c96d0184ea183261716a66084656a1a72b5.scope: Deactivated successfully.
Oct  8 06:08:48 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  8 06:08:48 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:08:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  8 06:08:48 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  8 06:08:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  8 06:08:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  8 06:08:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  8 06:08:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  8 06:08:48 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:08:48 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:08:48 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c0030a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:08:48 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:08:48 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:08:48 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:08:48.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:08:48 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:08:48 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac70003db0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:08:48 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:08:48 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:08:48 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:08:48.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:08:49 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:08:49 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:08:49 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:08:49 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:08:49 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac94002480 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:08:50 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v719: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Oct  8 06:08:50 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:08:50 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90004200 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:08:50 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:08:50 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:08:50 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:08:50.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:08:50 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e148 do_prune osdmap full prune enabled
Oct  8 06:08:50 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e149 e149: 3 total, 3 up, 3 in
Oct  8 06:08:50 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e149: 3 total, 3 up, 3 in
Oct  8 06:08:50 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:08:50 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c0030a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:08:50 np0005475493 podman[267031]: 2025-10-08 10:08:50.905869425 +0000 UTC m=+0.058905393 container health_status 2ac58bf9d2c7421f24478941b0f23af12cc30298ac84467db16bf52ecb1157c3 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=iscsid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3)
Oct  8 06:08:50 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:08:50 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:08:50 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:08:50.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:08:51 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e149 do_prune osdmap full prune enabled
Oct  8 06:08:51 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e150 e150: 3 total, 3 up, 3 in
Oct  8 06:08:51 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e150: 3 total, 3 up, 3 in
Oct  8 06:08:51 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:08:51 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c0030a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:08:52 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v722: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 383 B/s rd, 0 op/s
Oct  8 06:08:52 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:08:52 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac94003e90 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:08:52 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:08:52 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:08:52 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:08:52.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:08:52 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e150 do_prune osdmap full prune enabled
Oct  8 06:08:52 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e151 e151: 3 total, 3 up, 3 in
Oct  8 06:08:52 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e151: 3 total, 3 up, 3 in
Oct  8 06:08:52 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:08:52 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90004200 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:08:52 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:08:52 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:08:52 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:08:52.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:08:53 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:08:53 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c0030a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:08:54 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:08:54 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v724: 353 pgs: 353 active+clean; 21 MiB data, 174 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 3.4 MiB/s wr, 32 op/s
Oct  8 06:08:54 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:08:54 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c0030a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:08:54 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:08:54 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:08:54 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:08:54.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:08:54 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:08:54 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac94003e90 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:08:54 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:08:54 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:08:54 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:08:54.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:08:55 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:08:55] "GET /metrics HTTP/1.1" 200 48347 "" "Prometheus/2.51.0"
Oct  8 06:08:55 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:08:55] "GET /metrics HTTP/1.1" 200 48347 "" "Prometheus/2.51.0"
Oct  8 06:08:55 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e151 do_prune osdmap full prune enabled
Oct  8 06:08:55 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e152 e152: 3 total, 3 up, 3 in
Oct  8 06:08:55 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e152: 3 total, 3 up, 3 in
Oct  8 06:08:55 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:08:55 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90004200 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:08:56 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v726: 353 pgs: 353 active+clean; 21 MiB data, 174 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 3.7 MiB/s wr, 34 op/s
Oct  8 06:08:56 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:08:56 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c0030a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:08:56 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:08:56 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:08:56 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:08:56.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:08:56 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:08:56 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c0030a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:08:56 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:08:56 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:08:56 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:08:56.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:08:57 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:08:57.112Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:08:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:08:57.406 163175 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  8 06:08:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:08:57.407 163175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  8 06:08:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:08:57.407 163175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  8 06:08:57 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:08:57 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac94003e90 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:08:58 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v727: 353 pgs: 353 active+clean; 21 MiB data, 174 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 3.1 MiB/s wr, 28 op/s
Oct  8 06:08:58 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:08:58 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90004200 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:08:58 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:08:58 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:08:58 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:08:58.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:08:58 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:08:58 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c0030a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:08:58 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:08:58 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 06:08:58 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:08:58.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 06:08:59 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:08:59 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e152 do_prune osdmap full prune enabled
Oct  8 06:08:59 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e153 e153: 3 total, 3 up, 3 in
Oct  8 06:08:59 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e153: 3 total, 3 up, 3 in
Oct  8 06:08:59 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-haproxy-nfs-cephfs-compute-0-cwhopp[96588]: [WARNING] 280/100859 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct  8 06:08:59 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:08:59 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c0030a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:09:00 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v729: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 5.5 MiB/s wr, 51 op/s
Oct  8 06:09:00 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:09:00 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac94003e90 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:09:00 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:09:00 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:09:00 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:09:00.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:09:00 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:09:00 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90004200 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:09:00 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:09:00 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:09:00 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:09:00.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:09:01 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:09:01 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c0030a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:09:02 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v730: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 2.6 MiB/s wr, 24 op/s
Oct  8 06:09:02 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:09:02 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c0030a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:09:02 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:09:02 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:09:02 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:09:02.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:09:02 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  8 06:09:02 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  8 06:09:02 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:09:02 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac94003e90 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:09:02 np0005475493 podman[267065]: 2025-10-08 10:09:02.914843458 +0000 UTC m=+0.079572352 container health_status 750e81e9592502f2fa35d047c8ae9bd75eb38ed415148db558454f5281397e7f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct  8 06:09:02 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:09:02 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:09:02 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:09:02.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:09:03 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:09:03 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac94003e90 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:09:04 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:09:04 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v731: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 2.5 MiB/s wr, 23 op/s
Oct  8 06:09:04 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:09:04 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c0030a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:09:04 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:09:04 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:09:04 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:09:04.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:09:04 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:09:04 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c0030a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:09:04 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:09:04 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 06:09:04 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:09:04.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 06:09:05 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:09:05] "GET /metrics HTTP/1.1" 200 48402 "" "Prometheus/2.51.0"
Oct  8 06:09:05 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:09:05] "GET /metrics HTTP/1.1" 200 48402 "" "Prometheus/2.51.0"
Oct  8 06:09:05 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:09:05 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90004200 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:09:06 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v732: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 2.0 MiB/s wr, 19 op/s
Oct  8 06:09:06 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:09:06 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac94003e90 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:09:06 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:09:06 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:09:06 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:09:06.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:09:06 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:09:06 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac780012c0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:09:06 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:09:06 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:09:06 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:09:06.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:09:07 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:09:07.113Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:09:07 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:09:07 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c0030a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:09:08 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v733: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 2.0 MiB/s wr, 19 op/s
Oct  8 06:09:08 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:09:08 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  8 06:09:08 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:09:08 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90004200 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:09:08 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:09:08 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:09:08 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:09:08.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:09:08 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:09:08 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac78002180 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:09:08 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:09:08 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 06:09:08 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:09:08.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 06:09:09 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:09:09 np0005475493 podman[267127]: 2025-10-08 10:09:09.885888006 +0000 UTC m=+0.046734540 container health_status 96c17e36c3e57b043a764fc4d64e20610b8683e4881cc1857643eca7aeb37784 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent)
Oct  8 06:09:09 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:09:09 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca0001320 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:09:10 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v734: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 652 B/s wr, 2 op/s
Oct  8 06:09:10 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:09:10 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c0030a0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:09:10 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:09:10 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:09:10 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:09:10.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:09:10 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:09:10 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90004200 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:09:10 np0005475493 podman[267148]: 2025-10-08 10:09:10.8897359 +0000 UTC m=+0.056409504 container health_status 1ece418ccdf19a8019e8be8e665c938dc3c333817a211973536e2416f095c311 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=multipathd, container_name=multipathd)
Oct  8 06:09:10 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:09:10 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:09:10 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:09:10.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:09:11 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:09:11 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  8 06:09:11 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:09:11 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 06:09:11 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:09:11 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 06:09:11 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:09:11 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac78002180 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:09:12 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v735: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Oct  8 06:09:12 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:09:12 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90004200 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:09:12 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:09:12 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 06:09:12 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:09:12.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 06:09:12 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:09:12 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac78002180 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:09:12 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:09:12 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:09:12 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:09:12.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:09:13 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:09:13 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca0001320 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:09:14 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:09:14 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v736: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 938 B/s wr, 3 op/s
Oct  8 06:09:14 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:09:14 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct  8 06:09:14 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:09:14 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c0041a0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:09:14 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:09:14 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:09:14 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:09:14.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:09:14 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:09:14.447 163175 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=3, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '2a:d6:ca', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'b2:8b:1e:40:84:a3'}, ipsec=False) old=SB_Global(nb_cfg=2) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  8 06:09:14 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:09:14.448 163175 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  8 06:09:14 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:09:14 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90004200 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:09:14 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:09:14 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:09:14 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:09:14.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:09:15 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:09:15] "GET /metrics HTTP/1.1" 200 48402 "" "Prometheus/2.51.0"
Oct  8 06:09:15 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:09:15] "GET /metrics HTTP/1.1" 200 48402 "" "Prometheus/2.51.0"
Oct  8 06:09:15 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:09:15 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac78002180 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:09:16 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v737: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 938 B/s wr, 3 op/s
Oct  8 06:09:16 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:09:16 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca0001320 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:09:16 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:09:16 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:09:16 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:09:16.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:09:16 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:09:16 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c0041a0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:09:16 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:09:16 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:09:16 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:09:16.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:09:17 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:09:17.115Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:09:17 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  8 06:09:17 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  8 06:09:17 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 06:09:17 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 06:09:17 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:09:17 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90004200 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:09:18 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v738: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 938 B/s wr, 3 op/s
Oct  8 06:09:18 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 06:09:18 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 06:09:18 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 06:09:18 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 06:09:18 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:09:18 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac78003860 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:09:18 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:09:18 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:09:18 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:09:18.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:09:18 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:09:18 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca0001320 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:09:18 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:09:18 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:09:18 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:09:18.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:09:19 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:09:19 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:09:19 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c0041a0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:09:20 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v739: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 2.9 KiB/s rd, 1023 B/s wr, 4 op/s
Oct  8 06:09:20 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:09:20 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90004200 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:09:20 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:09:20 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:09:20 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:09:20.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:09:20 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:09:20.449 163175 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=26869918-b723-425c-a2e1-0d697f3d0fec, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '3'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  8 06:09:20 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:09:20 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac78003860 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:09:20 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:09:20 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:09:20 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:09:20.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:09:21 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-haproxy-nfs-cephfs-compute-0-cwhopp[96588]: [WARNING] 280/100921 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 1ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct  8 06:09:21 np0005475493 podman[267181]: 2025-10-08 10:09:21.913501619 +0000 UTC m=+0.061850119 container health_status 2ac58bf9d2c7421f24478941b0f23af12cc30298ac84467db16bf52ecb1157c3 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=iscsid, managed_by=edpm_ansible, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct  8 06:09:21 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:09:21 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca0001320 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:09:22 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v740: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 426 B/s wr, 2 op/s
Oct  8 06:09:22 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:09:22 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c0041a0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:09:22 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:09:22 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:09:22 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:09:22.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:09:22 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:09:22 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90004200 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:09:22 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:09:22 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:09:22 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:09:22.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:09:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:09:23 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac78003860 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:09:24 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:09:24 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v741: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 426 B/s wr, 2 op/s
Oct  8 06:09:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:09:24 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca0009ad0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:09:24 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:09:24 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 06:09:24 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:09:24.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 06:09:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:09:24 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c0041a0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:09:24 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:09:24 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:09:24 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:09:24.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:09:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:09:25] "GET /metrics HTTP/1.1" 200 48403 "" "Prometheus/2.51.0"
Oct  8 06:09:25 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:09:25] "GET /metrics HTTP/1.1" 200 48403 "" "Prometheus/2.51.0"
Oct  8 06:09:26 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:09:25 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90004200 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:09:26 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v742: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Oct  8 06:09:26 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:09:26 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac78003860 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:09:26 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:09:26 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:09:26 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:09:26.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:09:26 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:09:26 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca0009ad0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:09:26 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:09:26 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:09:26 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:09:26.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:09:27 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:09:27.116Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct  8 06:09:27 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:09:27.117Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct  8 06:09:28 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:09:28 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c0041a0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:09:28 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v743: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Oct  8 06:09:28 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:09:28 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90004200 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:09:28 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:09:28 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:09:28 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:09:28.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:09:28 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:09:28 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac78003860 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:09:28 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:09:28 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 06:09:28 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:09:28.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 06:09:29 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:09:29 np0005475493 ceph-mon[73572]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #48. Immutable memtables: 0.
Oct  8 06:09:29 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:09:29.062238) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  8 06:09:29 np0005475493 ceph-mon[73572]: rocksdb: [db/flush_job.cc:856] [default] [JOB 23] Flushing memtable with next log file: 48
Oct  8 06:09:29 np0005475493 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759918169062320, "job": 23, "event": "flush_started", "num_memtables": 1, "num_entries": 1237, "num_deletes": 252, "total_data_size": 2186030, "memory_usage": 2218920, "flush_reason": "Manual Compaction"}
Oct  8 06:09:29 np0005475493 ceph-mon[73572]: rocksdb: [db/flush_job.cc:885] [default] [JOB 23] Level-0 flush table #49: started
Oct  8 06:09:29 np0005475493 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759918169071741, "cf_name": "default", "job": 23, "event": "table_file_creation", "file_number": 49, "file_size": 1384368, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 22125, "largest_seqno": 23361, "table_properties": {"data_size": 1379522, "index_size": 2242, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1541, "raw_key_size": 12185, "raw_average_key_size": 20, "raw_value_size": 1369109, "raw_average_value_size": 2320, "num_data_blocks": 97, "num_entries": 590, "num_filter_entries": 590, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759918061, "oldest_key_time": 1759918061, "file_creation_time": 1759918169, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fe81d9b-468a-4413-adf1-4e4bd83383d4", "db_session_id": "KN4HYS7VUCE6V85JIQOU", "orig_file_number": 49, "seqno_to_time_mapping": "N/A"}}
Oct  8 06:09:29 np0005475493 ceph-mon[73572]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 23] Flush lasted 9525 microseconds, and 4327 cpu microseconds.
Oct  8 06:09:29 np0005475493 ceph-mon[73572]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  8 06:09:29 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:09:29.071799) [db/flush_job.cc:967] [default] [JOB 23] Level-0 flush table #49: 1384368 bytes OK
Oct  8 06:09:29 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:09:29.071820) [db/memtable_list.cc:519] [default] Level-0 commit table #49 started
Oct  8 06:09:29 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:09:29.073391) [db/memtable_list.cc:722] [default] Level-0 commit table #49: memtable #1 done
Oct  8 06:09:29 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:09:29.073407) EVENT_LOG_v1 {"time_micros": 1759918169073402, "job": 23, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  8 06:09:29 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:09:29.073426) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  8 06:09:29 np0005475493 ceph-mon[73572]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 23] Try to delete WAL files size 2180563, prev total WAL file size 2180563, number of live WAL files 2.
Oct  8 06:09:29 np0005475493 ceph-mon[73572]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000045.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  8 06:09:29 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:09:29.074185) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400353032' seq:72057594037927935, type:22 .. '6D67727374617400373535' seq:0, type:0; will stop at (end)
Oct  8 06:09:29 np0005475493 ceph-mon[73572]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 24] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  8 06:09:29 np0005475493 ceph-mon[73572]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 23 Base level 0, inputs: [49(1351KB)], [47(14MB)]
Oct  8 06:09:29 np0005475493 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759918169074255, "job": 24, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [49], "files_L6": [47], "score": -1, "input_data_size": 16144972, "oldest_snapshot_seqno": -1}
Oct  8 06:09:29 np0005475493 ceph-mon[73572]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 24] Generated table #50: 5531 keys, 12783059 bytes, temperature: kUnknown
Oct  8 06:09:29 np0005475493 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759918169140184, "cf_name": "default", "job": 24, "event": "table_file_creation", "file_number": 50, "file_size": 12783059, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12746983, "index_size": 21118, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13893, "raw_key_size": 139386, "raw_average_key_size": 25, "raw_value_size": 12647900, "raw_average_value_size": 2286, "num_data_blocks": 862, "num_entries": 5531, "num_filter_entries": 5531, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759916581, "oldest_key_time": 0, "file_creation_time": 1759918169, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fe81d9b-468a-4413-adf1-4e4bd83383d4", "db_session_id": "KN4HYS7VUCE6V85JIQOU", "orig_file_number": 50, "seqno_to_time_mapping": "N/A"}}
Oct  8 06:09:29 np0005475493 ceph-mon[73572]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  8 06:09:29 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:09:29.140463) [db/compaction/compaction_job.cc:1663] [default] [JOB 24] Compacted 1@0 + 1@6 files to L6 => 12783059 bytes
Oct  8 06:09:29 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:09:29.141527) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 244.5 rd, 193.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.3, 14.1 +0.0 blob) out(12.2 +0.0 blob), read-write-amplify(20.9) write-amplify(9.2) OK, records in: 6009, records dropped: 478 output_compression: NoCompression
Oct  8 06:09:29 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:09:29.141545) EVENT_LOG_v1 {"time_micros": 1759918169141537, "job": 24, "event": "compaction_finished", "compaction_time_micros": 66022, "compaction_time_cpu_micros": 29801, "output_level": 6, "num_output_files": 1, "total_output_size": 12783059, "num_input_records": 6009, "num_output_records": 5531, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  8 06:09:29 np0005475493 ceph-mon[73572]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000049.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  8 06:09:29 np0005475493 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759918169141950, "job": 24, "event": "table_file_deletion", "file_number": 49}
Oct  8 06:09:29 np0005475493 ceph-mon[73572]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000047.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  8 06:09:29 np0005475493 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759918169144480, "job": 24, "event": "table_file_deletion", "file_number": 47}
Oct  8 06:09:29 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:09:29.074108) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  8 06:09:29 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:09:29.144629) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  8 06:09:29 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:09:29.144636) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  8 06:09:29 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:09:29.144644) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  8 06:09:29 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:09:29.144646) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  8 06:09:29 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:09:29.144648) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  8 06:09:30 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:09:30 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca0009ad0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:09:30 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v744: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Oct  8 06:09:30 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:09:30 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c0041a0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:09:30 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:09:30 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:09:30 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:09:30.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:09:30 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:09:30 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90004200 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:09:30 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:09:30 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:09:30 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:09:30.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:09:32 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:09:32 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac78003860 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:09:32 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v745: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct  8 06:09:32 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:09:32 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca0009ad0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:09:32 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:09:32 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 06:09:32 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:09:32.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 06:09:32 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  8 06:09:32 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  8 06:09:32 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:09:32 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c0041a0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:09:33 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:09:33 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:09:33 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:09:32.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:09:33 np0005475493 nova_compute[262220]: 2025-10-08 10:09:33.138 2 DEBUG oslo_concurrency.lockutils [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Acquiring lock "f49b788e-70d1-4bc2-9f90-381017f2b232" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  8 06:09:33 np0005475493 nova_compute[262220]: 2025-10-08 10:09:33.138 2 DEBUG oslo_concurrency.lockutils [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Lock "f49b788e-70d1-4bc2-9f90-381017f2b232" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  8 06:09:33 np0005475493 nova_compute[262220]: 2025-10-08 10:09:33.161 2 DEBUG nova.compute.manager [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: f49b788e-70d1-4bc2-9f90-381017f2b232] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  8 06:09:33 np0005475493 nova_compute[262220]: 2025-10-08 10:09:33.255 2 DEBUG oslo_concurrency.lockutils [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  8 06:09:33 np0005475493 nova_compute[262220]: 2025-10-08 10:09:33.256 2 DEBUG oslo_concurrency.lockutils [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  8 06:09:33 np0005475493 nova_compute[262220]: 2025-10-08 10:09:33.264 2 DEBUG nova.virt.hardware [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  8 06:09:33 np0005475493 nova_compute[262220]: 2025-10-08 10:09:33.265 2 INFO nova.compute.claims [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: f49b788e-70d1-4bc2-9f90-381017f2b232] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  8 06:09:33 np0005475493 nova_compute[262220]: 2025-10-08 10:09:33.389 2 DEBUG oslo_concurrency.processutils [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  8 06:09:33 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct  8 06:09:33 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/750286088' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  8 06:09:33 np0005475493 nova_compute[262220]: 2025-10-08 10:09:33.817 2 DEBUG oslo_concurrency.processutils [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  8 06:09:33 np0005475493 nova_compute[262220]: 2025-10-08 10:09:33.822 2 DEBUG nova.compute.provider_tree [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Inventory has not changed in ProviderTree for provider: 62e4b021-d3ae-43f9-883d-805e2c7d21a2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  8 06:09:33 np0005475493 nova_compute[262220]: 2025-10-08 10:09:33.836 2 DEBUG nova.scheduler.client.report [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Inventory has not changed for provider 62e4b021-d3ae-43f9-883d-805e2c7d21a2 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  8 06:09:33 np0005475493 nova_compute[262220]: 2025-10-08 10:09:33.863 2 DEBUG oslo_concurrency.lockutils [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.607s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  8 06:09:33 np0005475493 nova_compute[262220]: 2025-10-08 10:09:33.864 2 DEBUG nova.compute.manager [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: f49b788e-70d1-4bc2-9f90-381017f2b232] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  8 06:09:33 np0005475493 nova_compute[262220]: 2025-10-08 10:09:33.910 2 DEBUG nova.compute.manager [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: f49b788e-70d1-4bc2-9f90-381017f2b232] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  8 06:09:33 np0005475493 nova_compute[262220]: 2025-10-08 10:09:33.910 2 DEBUG nova.network.neutron [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: f49b788e-70d1-4bc2-9f90-381017f2b232] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  8 06:09:33 np0005475493 podman[267262]: 2025-10-08 10:09:33.922638005 +0000 UTC m=+0.085491943 container health_status 750e81e9592502f2fa35d047c8ae9bd75eb38ed415148db558454f5281397e7f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  8 06:09:33 np0005475493 nova_compute[262220]: 2025-10-08 10:09:33.937 2 INFO nova.virt.libvirt.driver [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: f49b788e-70d1-4bc2-9f90-381017f2b232] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  8 06:09:34 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:09:34 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90004200 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:09:34 np0005475493 nova_compute[262220]: 2025-10-08 10:09:34.027 2 DEBUG nova.compute.manager [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: f49b788e-70d1-4bc2-9f90-381017f2b232] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  8 06:09:34 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:09:34 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v746: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct  8 06:09:34 np0005475493 nova_compute[262220]: 2025-10-08 10:09:34.115 2 DEBUG nova.compute.manager [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: f49b788e-70d1-4bc2-9f90-381017f2b232] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  8 06:09:34 np0005475493 nova_compute[262220]: 2025-10-08 10:09:34.118 2 DEBUG nova.virt.libvirt.driver [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: f49b788e-70d1-4bc2-9f90-381017f2b232] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  8 06:09:34 np0005475493 nova_compute[262220]: 2025-10-08 10:09:34.118 2 INFO nova.virt.libvirt.driver [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: f49b788e-70d1-4bc2-9f90-381017f2b232] Creating image(s)#033[00m
Oct  8 06:09:34 np0005475493 nova_compute[262220]: 2025-10-08 10:09:34.162 2 DEBUG nova.storage.rbd_utils [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] rbd image f49b788e-70d1-4bc2-9f90-381017f2b232_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  8 06:09:34 np0005475493 nova_compute[262220]: 2025-10-08 10:09:34.205 2 DEBUG nova.storage.rbd_utils [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] rbd image f49b788e-70d1-4bc2-9f90-381017f2b232_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  8 06:09:34 np0005475493 nova_compute[262220]: 2025-10-08 10:09:34.251 2 DEBUG nova.storage.rbd_utils [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] rbd image f49b788e-70d1-4bc2-9f90-381017f2b232_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  8 06:09:34 np0005475493 nova_compute[262220]: 2025-10-08 10:09:34.254 2 DEBUG oslo_concurrency.lockutils [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Acquiring lock "3cde70359534d4758cf71011630bd1fb14a90c92" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  8 06:09:34 np0005475493 nova_compute[262220]: 2025-10-08 10:09:34.255 2 DEBUG oslo_concurrency.lockutils [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Lock "3cde70359534d4758cf71011630bd1fb14a90c92" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  8 06:09:34 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:09:34 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac78003860 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:09:34 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:09:34 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:09:34 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:09:34.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:09:34 np0005475493 nova_compute[262220]: 2025-10-08 10:09:34.886 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:09:34 np0005475493 nova_compute[262220]: 2025-10-08 10:09:34.887 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Oct  8 06:09:34 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:09:34 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca0009ad0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:09:34 np0005475493 nova_compute[262220]: 2025-10-08 10:09:34.914 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Oct  8 06:09:34 np0005475493 nova_compute[262220]: 2025-10-08 10:09:34.915 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:09:34 np0005475493 nova_compute[262220]: 2025-10-08 10:09:34.915 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Oct  8 06:09:34 np0005475493 nova_compute[262220]: 2025-10-08 10:09:34.930 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:09:35 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:09:35 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:09:35 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:09:35.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:09:35 np0005475493 nova_compute[262220]: 2025-10-08 10:09:35.009 2 WARNING oslo_policy.policy [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.#033[00m
Oct  8 06:09:35 np0005475493 nova_compute[262220]: 2025-10-08 10:09:35.010 2 WARNING oslo_policy.policy [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.#033[00m
Oct  8 06:09:35 np0005475493 nova_compute[262220]: 2025-10-08 10:09:35.012 2 DEBUG nova.policy [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'd50b19166a7245e390a6e29682191263', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '0edb2c85b88b4b168a4b2e8c5ed4a05c', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  8 06:09:35 np0005475493 nova_compute[262220]: 2025-10-08 10:09:35.229 2 DEBUG nova.virt.libvirt.imagebackend [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Image locations are: [{'url': 'rbd://787292cc-8154-50c4-9e00-e9be3e817149/images/e5994bac-385d-4cfe-962e-386aa0559983/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://787292cc-8154-50c4-9e00-e9be3e817149/images/e5994bac-385d-4cfe-962e-386aa0559983/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085#033[00m
Oct  8 06:09:35 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:09:35] "GET /metrics HTTP/1.1" 200 48403 "" "Prometheus/2.51.0"
Oct  8 06:09:35 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:09:35] "GET /metrics HTTP/1.1" 200 48403 "" "Prometheus/2.51.0"
Oct  8 06:09:36 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:09:36 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c0041a0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:09:36 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v747: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct  8 06:09:36 np0005475493 nova_compute[262220]: 2025-10-08 10:09:36.165 2 DEBUG oslo_concurrency.processutils [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/3cde70359534d4758cf71011630bd1fb14a90c92.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  8 06:09:36 np0005475493 nova_compute[262220]: 2025-10-08 10:09:36.220 2 DEBUG oslo_concurrency.processutils [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/3cde70359534d4758cf71011630bd1fb14a90c92.part --force-share --output=json" returned: 0 in 0.054s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  8 06:09:36 np0005475493 nova_compute[262220]: 2025-10-08 10:09:36.221 2 DEBUG nova.virt.images [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] e5994bac-385d-4cfe-962e-386aa0559983 was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242#033[00m
Oct  8 06:09:36 np0005475493 nova_compute[262220]: 2025-10-08 10:09:36.222 2 DEBUG nova.privsep.utils [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63#033[00m
Oct  8 06:09:36 np0005475493 nova_compute[262220]: 2025-10-08 10:09:36.222 2 DEBUG oslo_concurrency.processutils [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/3cde70359534d4758cf71011630bd1fb14a90c92.part /var/lib/nova/instances/_base/3cde70359534d4758cf71011630bd1fb14a90c92.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  8 06:09:36 np0005475493 nova_compute[262220]: 2025-10-08 10:09:36.246 2 DEBUG nova.network.neutron [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: f49b788e-70d1-4bc2-9f90-381017f2b232] Successfully created port: d6bc221b-bf28-4c61-b116-cd61209c7f31 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  8 06:09:36 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:09:36 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90004200 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:09:36 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:09:36 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:09:36 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:09:36.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:09:36 np0005475493 nova_compute[262220]: 2025-10-08 10:09:36.402 2 DEBUG oslo_concurrency.processutils [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/3cde70359534d4758cf71011630bd1fb14a90c92.part /var/lib/nova/instances/_base/3cde70359534d4758cf71011630bd1fb14a90c92.converted" returned: 0 in 0.180s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  8 06:09:36 np0005475493 nova_compute[262220]: 2025-10-08 10:09:36.406 2 DEBUG oslo_concurrency.processutils [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/3cde70359534d4758cf71011630bd1fb14a90c92.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  8 06:09:36 np0005475493 nova_compute[262220]: 2025-10-08 10:09:36.457 2 DEBUG oslo_concurrency.processutils [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/3cde70359534d4758cf71011630bd1fb14a90c92.converted --force-share --output=json" returned: 0 in 0.050s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  8 06:09:36 np0005475493 nova_compute[262220]: 2025-10-08 10:09:36.458 2 DEBUG oslo_concurrency.lockutils [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Lock "3cde70359534d4758cf71011630bd1fb14a90c92" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 2.202s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  8 06:09:36 np0005475493 nova_compute[262220]: 2025-10-08 10:09:36.481 2 DEBUG nova.storage.rbd_utils [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] rbd image f49b788e-70d1-4bc2-9f90-381017f2b232_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  8 06:09:36 np0005475493 nova_compute[262220]: 2025-10-08 10:09:36.484 2 DEBUG oslo_concurrency.processutils [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/3cde70359534d4758cf71011630bd1fb14a90c92 f49b788e-70d1-4bc2-9f90-381017f2b232_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  8 06:09:36 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:09:36 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac78003860 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:09:37 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:09:37 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:09:37 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:09:37.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:09:37 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:09:37.118Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:09:37 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e153 do_prune osdmap full prune enabled
Oct  8 06:09:37 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e154 e154: 3 total, 3 up, 3 in
Oct  8 06:09:37 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e154: 3 total, 3 up, 3 in
Oct  8 06:09:37 np0005475493 nova_compute[262220]: 2025-10-08 10:09:37.941 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:09:37 np0005475493 nova_compute[262220]: 2025-10-08 10:09:37.941 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:09:38 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:09:38 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca0009ad0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:09:38 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v749: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 0 op/s
Oct  8 06:09:38 np0005475493 nova_compute[262220]: 2025-10-08 10:09:38.137 2 DEBUG nova.network.neutron [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: f49b788e-70d1-4bc2-9f90-381017f2b232] Successfully updated port: d6bc221b-bf28-4c61-b116-cd61209c7f31 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  8 06:09:38 np0005475493 nova_compute[262220]: 2025-10-08 10:09:38.154 2 DEBUG oslo_concurrency.lockutils [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Acquiring lock "refresh_cache-f49b788e-70d1-4bc2-9f90-381017f2b232" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  8 06:09:38 np0005475493 nova_compute[262220]: 2025-10-08 10:09:38.154 2 DEBUG oslo_concurrency.lockutils [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Acquired lock "refresh_cache-f49b788e-70d1-4bc2-9f90-381017f2b232" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  8 06:09:38 np0005475493 nova_compute[262220]: 2025-10-08 10:09:38.154 2 DEBUG nova.network.neutron [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: f49b788e-70d1-4bc2-9f90-381017f2b232] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  8 06:09:38 np0005475493 nova_compute[262220]: 2025-10-08 10:09:38.327 2 DEBUG nova.network.neutron [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: f49b788e-70d1-4bc2-9f90-381017f2b232] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  8 06:09:38 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:09:38 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c0041a0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:09:38 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:09:38 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:09:38 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:09:38.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:09:38 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e154 do_prune osdmap full prune enabled
Oct  8 06:09:38 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e155 e155: 3 total, 3 up, 3 in
Oct  8 06:09:38 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e155: 3 total, 3 up, 3 in
Oct  8 06:09:38 np0005475493 nova_compute[262220]: 2025-10-08 10:09:38.647 2 DEBUG nova.compute.manager [req-c45e44aa-067a-47f2-9451-acbb30ef3e47 req-0ba52fe7-fbc1-4e50-a828-f8f659a55463 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: f49b788e-70d1-4bc2-9f90-381017f2b232] Received event network-changed-d6bc221b-bf28-4c61-b116-cd61209c7f31 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  8 06:09:38 np0005475493 nova_compute[262220]: 2025-10-08 10:09:38.648 2 DEBUG nova.compute.manager [req-c45e44aa-067a-47f2-9451-acbb30ef3e47 req-0ba52fe7-fbc1-4e50-a828-f8f659a55463 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: f49b788e-70d1-4bc2-9f90-381017f2b232] Refreshing instance network info cache due to event network-changed-d6bc221b-bf28-4c61-b116-cd61209c7f31. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  8 06:09:38 np0005475493 nova_compute[262220]: 2025-10-08 10:09:38.648 2 DEBUG oslo_concurrency.lockutils [req-c45e44aa-067a-47f2-9451-acbb30ef3e47 req-0ba52fe7-fbc1-4e50-a828-f8f659a55463 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Acquiring lock "refresh_cache-f49b788e-70d1-4bc2-9f90-381017f2b232" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  8 06:09:38 np0005475493 nova_compute[262220]: 2025-10-08 10:09:38.764 2 DEBUG oslo_concurrency.processutils [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/3cde70359534d4758cf71011630bd1fb14a90c92 f49b788e-70d1-4bc2-9f90-381017f2b232_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.280s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  8 06:09:38 np0005475493 nova_compute[262220]: 2025-10-08 10:09:38.842 2 DEBUG nova.storage.rbd_utils [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] resizing rbd image f49b788e-70d1-4bc2-9f90-381017f2b232_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  8 06:09:38 np0005475493 nova_compute[262220]: 2025-10-08 10:09:38.886 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:09:38 np0005475493 nova_compute[262220]: 2025-10-08 10:09:38.888 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:09:38 np0005475493 nova_compute[262220]: 2025-10-08 10:09:38.888 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  8 06:09:38 np0005475493 nova_compute[262220]: 2025-10-08 10:09:38.888 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:09:38 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:09:38 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac70001230 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:09:38 np0005475493 nova_compute[262220]: 2025-10-08 10:09:38.911 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  8 06:09:38 np0005475493 nova_compute[262220]: 2025-10-08 10:09:38.912 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  8 06:09:38 np0005475493 nova_compute[262220]: 2025-10-08 10:09:38.912 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  8 06:09:38 np0005475493 nova_compute[262220]: 2025-10-08 10:09:38.913 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  8 06:09:38 np0005475493 nova_compute[262220]: 2025-10-08 10:09:38.913 2 DEBUG oslo_concurrency.processutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  8 06:09:38 np0005475493 nova_compute[262220]: 2025-10-08 10:09:38.973 2 DEBUG nova.objects.instance [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Lazy-loading 'migration_context' on Instance uuid f49b788e-70d1-4bc2-9f90-381017f2b232 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  8 06:09:38 np0005475493 nova_compute[262220]: 2025-10-08 10:09:38.988 2 DEBUG nova.virt.libvirt.driver [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: f49b788e-70d1-4bc2-9f90-381017f2b232] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  8 06:09:38 np0005475493 nova_compute[262220]: 2025-10-08 10:09:38.989 2 DEBUG nova.virt.libvirt.driver [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: f49b788e-70d1-4bc2-9f90-381017f2b232] Ensure instance console log exists: /var/lib/nova/instances/f49b788e-70d1-4bc2-9f90-381017f2b232/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  8 06:09:38 np0005475493 nova_compute[262220]: 2025-10-08 10:09:38.989 2 DEBUG oslo_concurrency.lockutils [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  8 06:09:38 np0005475493 nova_compute[262220]: 2025-10-08 10:09:38.990 2 DEBUG oslo_concurrency.lockutils [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  8 06:09:38 np0005475493 nova_compute[262220]: 2025-10-08 10:09:38.990 2 DEBUG oslo_concurrency.lockutils [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  8 06:09:39 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:09:39 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:09:39 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:09:39.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:09:39 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:09:39 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct  8 06:09:39 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4142038770' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  8 06:09:39 np0005475493 nova_compute[262220]: 2025-10-08 10:09:39.357 2 DEBUG nova.network.neutron [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: f49b788e-70d1-4bc2-9f90-381017f2b232] Updating instance_info_cache with network_info: [{"id": "d6bc221b-bf28-4c61-b116-cd61209c7f31", "address": "fa:16:3e:9d:d1:5c", "network": {"id": "f5c6f88b-41ed-45ea-b491-931be9a75138", "bridge": "br-int", "label": "tempest-network-smoke--1726135850", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0edb2c85b88b4b168a4b2e8c5ed4a05c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6bc221b-bf", "ovs_interfaceid": "d6bc221b-bf28-4c61-b116-cd61209c7f31", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  8 06:09:39 np0005475493 nova_compute[262220]: 2025-10-08 10:09:39.362 2 DEBUG oslo_concurrency.processutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  8 06:09:39 np0005475493 nova_compute[262220]: 2025-10-08 10:09:39.380 2 DEBUG oslo_concurrency.lockutils [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Releasing lock "refresh_cache-f49b788e-70d1-4bc2-9f90-381017f2b232" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  8 06:09:39 np0005475493 nova_compute[262220]: 2025-10-08 10:09:39.380 2 DEBUG nova.compute.manager [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: f49b788e-70d1-4bc2-9f90-381017f2b232] Instance network_info: |[{"id": "d6bc221b-bf28-4c61-b116-cd61209c7f31", "address": "fa:16:3e:9d:d1:5c", "network": {"id": "f5c6f88b-41ed-45ea-b491-931be9a75138", "bridge": "br-int", "label": "tempest-network-smoke--1726135850", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0edb2c85b88b4b168a4b2e8c5ed4a05c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6bc221b-bf", "ovs_interfaceid": "d6bc221b-bf28-4c61-b116-cd61209c7f31", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  8 06:09:39 np0005475493 nova_compute[262220]: 2025-10-08 10:09:39.381 2 DEBUG oslo_concurrency.lockutils [req-c45e44aa-067a-47f2-9451-acbb30ef3e47 req-0ba52fe7-fbc1-4e50-a828-f8f659a55463 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Acquired lock "refresh_cache-f49b788e-70d1-4bc2-9f90-381017f2b232" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  8 06:09:39 np0005475493 nova_compute[262220]: 2025-10-08 10:09:39.381 2 DEBUG nova.network.neutron [req-c45e44aa-067a-47f2-9451-acbb30ef3e47 req-0ba52fe7-fbc1-4e50-a828-f8f659a55463 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: f49b788e-70d1-4bc2-9f90-381017f2b232] Refreshing network info cache for port d6bc221b-bf28-4c61-b116-cd61209c7f31 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  8 06:09:39 np0005475493 nova_compute[262220]: 2025-10-08 10:09:39.383 2 DEBUG nova.virt.libvirt.driver [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: f49b788e-70d1-4bc2-9f90-381017f2b232] Start _get_guest_xml network_info=[{"id": "d6bc221b-bf28-4c61-b116-cd61209c7f31", "address": "fa:16:3e:9d:d1:5c", "network": {"id": "f5c6f88b-41ed-45ea-b491-931be9a75138", "bridge": "br-int", "label": "tempest-network-smoke--1726135850", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0edb2c85b88b4b168a4b2e8c5ed4a05c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6bc221b-bf", "ovs_interfaceid": "d6bc221b-bf28-4c61-b116-cd61209c7f31", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-08T10:08:49Z,direct_url=<?>,disk_format='qcow2',id=e5994bac-385d-4cfe-962e-386aa0559983,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='9bebada0871a4efa9df99c6beff34c13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-08T10:08:53Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'encryption_secret_uuid': None, 'encryption_format': None, 'device_name': '/dev/vda', 'disk_bus': 'virtio', 'boot_index': 0, 'encrypted': False, 'encryption_options': None, 'device_type': 'disk', 'size': 0, 'image_id': 'e5994bac-385d-4cfe-962e-386aa0559983'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  8 06:09:39 np0005475493 nova_compute[262220]: 2025-10-08 10:09:39.391 2 WARNING nova.virt.libvirt.driver [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  8 06:09:39 np0005475493 nova_compute[262220]: 2025-10-08 10:09:39.397 2 DEBUG nova.virt.libvirt.host [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  8 06:09:39 np0005475493 nova_compute[262220]: 2025-10-08 10:09:39.398 2 DEBUG nova.virt.libvirt.host [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  8 06:09:39 np0005475493 nova_compute[262220]: 2025-10-08 10:09:39.404 2 DEBUG nova.virt.libvirt.host [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  8 06:09:39 np0005475493 nova_compute[262220]: 2025-10-08 10:09:39.405 2 DEBUG nova.virt.libvirt.host [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  8 06:09:39 np0005475493 nova_compute[262220]: 2025-10-08 10:09:39.406 2 DEBUG nova.virt.libvirt.driver [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  8 06:09:39 np0005475493 nova_compute[262220]: 2025-10-08 10:09:39.406 2 DEBUG nova.virt.hardware [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-08T10:08:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='461f98d6-ae65-4f86-8ae2-cc3cfaea2a46',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-08T10:08:49Z,direct_url=<?>,disk_format='qcow2',id=e5994bac-385d-4cfe-962e-386aa0559983,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='9bebada0871a4efa9df99c6beff34c13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-08T10:08:53Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  8 06:09:39 np0005475493 nova_compute[262220]: 2025-10-08 10:09:39.407 2 DEBUG nova.virt.hardware [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  8 06:09:39 np0005475493 nova_compute[262220]: 2025-10-08 10:09:39.407 2 DEBUG nova.virt.hardware [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  8 06:09:39 np0005475493 nova_compute[262220]: 2025-10-08 10:09:39.407 2 DEBUG nova.virt.hardware [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  8 06:09:39 np0005475493 nova_compute[262220]: 2025-10-08 10:09:39.407 2 DEBUG nova.virt.hardware [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  8 06:09:39 np0005475493 nova_compute[262220]: 2025-10-08 10:09:39.408 2 DEBUG nova.virt.hardware [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  8 06:09:39 np0005475493 nova_compute[262220]: 2025-10-08 10:09:39.408 2 DEBUG nova.virt.hardware [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  8 06:09:39 np0005475493 nova_compute[262220]: 2025-10-08 10:09:39.408 2 DEBUG nova.virt.hardware [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  8 06:09:39 np0005475493 nova_compute[262220]: 2025-10-08 10:09:39.408 2 DEBUG nova.virt.hardware [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  8 06:09:39 np0005475493 nova_compute[262220]: 2025-10-08 10:09:39.408 2 DEBUG nova.virt.hardware [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  8 06:09:39 np0005475493 nova_compute[262220]: 2025-10-08 10:09:39.409 2 DEBUG nova.virt.hardware [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  8 06:09:39 np0005475493 nova_compute[262220]: 2025-10-08 10:09:39.412 2 DEBUG nova.privsep.utils [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63#033[00m
Oct  8 06:09:39 np0005475493 nova_compute[262220]: 2025-10-08 10:09:39.412 2 DEBUG oslo_concurrency.processutils [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  8 06:09:39 np0005475493 nova_compute[262220]: 2025-10-08 10:09:39.590 2 WARNING nova.virt.libvirt.driver [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  8 06:09:39 np0005475493 nova_compute[262220]: 2025-10-08 10:09:39.592 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4851MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  8 06:09:39 np0005475493 nova_compute[262220]: 2025-10-08 10:09:39.593 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  8 06:09:39 np0005475493 nova_compute[262220]: 2025-10-08 10:09:39.593 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  8 06:09:39 np0005475493 nova_compute[262220]: 2025-10-08 10:09:39.688 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Instance f49b788e-70d1-4bc2-9f90-381017f2b232 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  8 06:09:39 np0005475493 nova_compute[262220]: 2025-10-08 10:09:39.689 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  8 06:09:39 np0005475493 nova_compute[262220]: 2025-10-08 10:09:39.689 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  8 06:09:39 np0005475493 nova_compute[262220]: 2025-10-08 10:09:39.737 2 DEBUG nova.scheduler.client.report [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Refreshing inventories for resource provider 62e4b021-d3ae-43f9-883d-805e2c7d21a2 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Oct  8 06:09:39 np0005475493 nova_compute[262220]: 2025-10-08 10:09:39.823 2 DEBUG nova.scheduler.client.report [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Updating ProviderTree inventory for provider 62e4b021-d3ae-43f9-883d-805e2c7d21a2 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Oct  8 06:09:39 np0005475493 nova_compute[262220]: 2025-10-08 10:09:39.824 2 DEBUG nova.compute.provider_tree [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Updating inventory in ProviderTree for provider 62e4b021-d3ae-43f9-883d-805e2c7d21a2 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct  8 06:09:39 np0005475493 nova_compute[262220]: 2025-10-08 10:09:39.839 2 DEBUG nova.scheduler.client.report [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Refreshing aggregate associations for resource provider 62e4b021-d3ae-43f9-883d-805e2c7d21a2, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Oct  8 06:09:39 np0005475493 nova_compute[262220]: 2025-10-08 10:09:39.858 2 DEBUG nova.scheduler.client.report [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Refreshing trait associations for resource provider 62e4b021-d3ae-43f9-883d-805e2c7d21a2, traits: HW_CPU_X86_CLMUL,HW_CPU_X86_AVX,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_FMA3,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_FDC,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSE4A,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_ACCELERATORS,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_F16C,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SSE41,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_BMI2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_AMD_SVM,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_MMX,HW_CPU_X86_AESNI,COMPUTE_STORAGE_BUS_SATA,COMPUTE_RESCUE_BFV,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_AVX2,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_ABM,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_SHA,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_SSSE3,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SVM,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_BMI,HW_CPU_X86_SSE2 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Oct  8 06:09:39 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Oct  8 06:09:39 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2611300459' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  8 06:09:39 np0005475493 nova_compute[262220]: 2025-10-08 10:09:39.889 2 DEBUG oslo_concurrency.processutils [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  8 06:09:39 np0005475493 nova_compute[262220]: 2025-10-08 10:09:39.915 2 DEBUG nova.storage.rbd_utils [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] rbd image f49b788e-70d1-4bc2-9f90-381017f2b232_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  8 06:09:39 np0005475493 nova_compute[262220]: 2025-10-08 10:09:39.918 2 DEBUG oslo_concurrency.processutils [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  8 06:09:39 np0005475493 nova_compute[262220]: 2025-10-08 10:09:39.933 2 DEBUG oslo_concurrency.processutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  8 06:09:40 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:09:40 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac78003860 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:09:40 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v751: 353 pgs: 353 active+clean; 74 MiB data, 204 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 1.5 MiB/s wr, 40 op/s
Oct  8 06:09:40 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:09:40 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca0009ad0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:09:40 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Oct  8 06:09:40 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/771362139' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  8 06:09:40 np0005475493 nova_compute[262220]: 2025-10-08 10:09:40.368 2 DEBUG oslo_concurrency.processutils [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  8 06:09:40 np0005475493 nova_compute[262220]: 2025-10-08 10:09:40.370 2 DEBUG nova.virt.libvirt.vif [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-08T10:09:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1358472667',display_name='tempest-TestNetworkBasicOps-server-1358472667',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1358472667',id=1,image_ref='e5994bac-385d-4cfe-962e-386aa0559983',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGCqOiRkCvMZRP8fdEWleadJa9k0DhfKx++pZ4blF3y05LQ1KZbyE4MTPNAMp9BRrBdK92MH6DC+pII7aGjodGwK7AspsjQ0hDDswc17pIZ089tmxUxos+hWl7sAULow5Q==',key_name='tempest-TestNetworkBasicOps-1893605271',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0edb2c85b88b4b168a4b2e8c5ed4a05c',ramdisk_id='',reservation_id='r-50tfjz8b',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e5994bac-385d-4cfe-962e-386aa0559983',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-139500885',owner_user_name='tempest-TestNetworkBasicOps-139500885-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-08T10:09:34Z,user_data=None,user_id='d50b19166a7245e390a6e29682191263',uuid=f49b788e-70d1-4bc2-9f90-381017f2b232,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d6bc221b-bf28-4c61-b116-cd61209c7f31", "address": "fa:16:3e:9d:d1:5c", "network": {"id": "f5c6f88b-41ed-45ea-b491-931be9a75138", "bridge": "br-int", "label": "tempest-network-smoke--1726135850", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "0edb2c85b88b4b168a4b2e8c5ed4a05c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6bc221b-bf", "ovs_interfaceid": "d6bc221b-bf28-4c61-b116-cd61209c7f31", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  8 06:09:40 np0005475493 nova_compute[262220]: 2025-10-08 10:09:40.370 2 DEBUG nova.network.os_vif_util [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Converting VIF {"id": "d6bc221b-bf28-4c61-b116-cd61209c7f31", "address": "fa:16:3e:9d:d1:5c", "network": {"id": "f5c6f88b-41ed-45ea-b491-931be9a75138", "bridge": "br-int", "label": "tempest-network-smoke--1726135850", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0edb2c85b88b4b168a4b2e8c5ed4a05c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6bc221b-bf", "ovs_interfaceid": "d6bc221b-bf28-4c61-b116-cd61209c7f31", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  8 06:09:40 np0005475493 nova_compute[262220]: 2025-10-08 10:09:40.371 2 DEBUG nova.network.os_vif_util [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9d:d1:5c,bridge_name='br-int',has_traffic_filtering=True,id=d6bc221b-bf28-4c61-b116-cd61209c7f31,network=Network(f5c6f88b-41ed-45ea-b491-931be9a75138),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd6bc221b-bf') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  8 06:09:40 np0005475493 nova_compute[262220]: 2025-10-08 10:09:40.373 2 DEBUG nova.objects.instance [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Lazy-loading 'pci_devices' on Instance uuid f49b788e-70d1-4bc2-9f90-381017f2b232 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  8 06:09:40 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:09:40 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:09:40 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:09:40.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:09:40 np0005475493 nova_compute[262220]: 2025-10-08 10:09:40.387 2 DEBUG nova.virt.libvirt.driver [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: f49b788e-70d1-4bc2-9f90-381017f2b232] End _get_guest_xml xml=<domain type="kvm">
Oct  8 06:09:40 np0005475493 nova_compute[262220]:  <uuid>f49b788e-70d1-4bc2-9f90-381017f2b232</uuid>
Oct  8 06:09:40 np0005475493 nova_compute[262220]:  <name>instance-00000001</name>
Oct  8 06:09:40 np0005475493 nova_compute[262220]:  <memory>131072</memory>
Oct  8 06:09:40 np0005475493 nova_compute[262220]:  <vcpu>1</vcpu>
Oct  8 06:09:40 np0005475493 nova_compute[262220]:  <metadata>
Oct  8 06:09:40 np0005475493 nova_compute[262220]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  8 06:09:40 np0005475493 nova_compute[262220]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  8 06:09:40 np0005475493 nova_compute[262220]:      <nova:name>tempest-TestNetworkBasicOps-server-1358472667</nova:name>
Oct  8 06:09:40 np0005475493 nova_compute[262220]:      <nova:creationTime>2025-10-08 10:09:39</nova:creationTime>
Oct  8 06:09:40 np0005475493 nova_compute[262220]:      <nova:flavor name="m1.nano">
Oct  8 06:09:40 np0005475493 nova_compute[262220]:        <nova:memory>128</nova:memory>
Oct  8 06:09:40 np0005475493 nova_compute[262220]:        <nova:disk>1</nova:disk>
Oct  8 06:09:40 np0005475493 nova_compute[262220]:        <nova:swap>0</nova:swap>
Oct  8 06:09:40 np0005475493 nova_compute[262220]:        <nova:ephemeral>0</nova:ephemeral>
Oct  8 06:09:40 np0005475493 nova_compute[262220]:        <nova:vcpus>1</nova:vcpus>
Oct  8 06:09:40 np0005475493 nova_compute[262220]:      </nova:flavor>
Oct  8 06:09:40 np0005475493 nova_compute[262220]:      <nova:owner>
Oct  8 06:09:40 np0005475493 nova_compute[262220]:        <nova:user uuid="d50b19166a7245e390a6e29682191263">tempest-TestNetworkBasicOps-139500885-project-member</nova:user>
Oct  8 06:09:40 np0005475493 nova_compute[262220]:        <nova:project uuid="0edb2c85b88b4b168a4b2e8c5ed4a05c">tempest-TestNetworkBasicOps-139500885</nova:project>
Oct  8 06:09:40 np0005475493 nova_compute[262220]:      </nova:owner>
Oct  8 06:09:40 np0005475493 nova_compute[262220]:      <nova:root type="image" uuid="e5994bac-385d-4cfe-962e-386aa0559983"/>
Oct  8 06:09:40 np0005475493 nova_compute[262220]:      <nova:ports>
Oct  8 06:09:40 np0005475493 nova_compute[262220]:        <nova:port uuid="d6bc221b-bf28-4c61-b116-cd61209c7f31">
Oct  8 06:09:40 np0005475493 nova_compute[262220]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct  8 06:09:40 np0005475493 nova_compute[262220]:        </nova:port>
Oct  8 06:09:40 np0005475493 nova_compute[262220]:      </nova:ports>
Oct  8 06:09:40 np0005475493 nova_compute[262220]:    </nova:instance>
Oct  8 06:09:40 np0005475493 nova_compute[262220]:  </metadata>
Oct  8 06:09:40 np0005475493 nova_compute[262220]:  <sysinfo type="smbios">
Oct  8 06:09:40 np0005475493 nova_compute[262220]:    <system>
Oct  8 06:09:40 np0005475493 nova_compute[262220]:      <entry name="manufacturer">RDO</entry>
Oct  8 06:09:40 np0005475493 nova_compute[262220]:      <entry name="product">OpenStack Compute</entry>
Oct  8 06:09:40 np0005475493 nova_compute[262220]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  8 06:09:40 np0005475493 nova_compute[262220]:      <entry name="serial">f49b788e-70d1-4bc2-9f90-381017f2b232</entry>
Oct  8 06:09:40 np0005475493 nova_compute[262220]:      <entry name="uuid">f49b788e-70d1-4bc2-9f90-381017f2b232</entry>
Oct  8 06:09:40 np0005475493 nova_compute[262220]:      <entry name="family">Virtual Machine</entry>
Oct  8 06:09:40 np0005475493 nova_compute[262220]:    </system>
Oct  8 06:09:40 np0005475493 nova_compute[262220]:  </sysinfo>
Oct  8 06:09:40 np0005475493 nova_compute[262220]:  <os>
Oct  8 06:09:40 np0005475493 nova_compute[262220]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  8 06:09:40 np0005475493 nova_compute[262220]:    <boot dev="hd"/>
Oct  8 06:09:40 np0005475493 nova_compute[262220]:    <smbios mode="sysinfo"/>
Oct  8 06:09:40 np0005475493 nova_compute[262220]:  </os>
Oct  8 06:09:40 np0005475493 nova_compute[262220]:  <features>
Oct  8 06:09:40 np0005475493 nova_compute[262220]:    <acpi/>
Oct  8 06:09:40 np0005475493 nova_compute[262220]:    <apic/>
Oct  8 06:09:40 np0005475493 nova_compute[262220]:    <vmcoreinfo/>
Oct  8 06:09:40 np0005475493 nova_compute[262220]:  </features>
Oct  8 06:09:40 np0005475493 nova_compute[262220]:  <clock offset="utc">
Oct  8 06:09:40 np0005475493 nova_compute[262220]:    <timer name="pit" tickpolicy="delay"/>
Oct  8 06:09:40 np0005475493 nova_compute[262220]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  8 06:09:40 np0005475493 nova_compute[262220]:    <timer name="hpet" present="no"/>
Oct  8 06:09:40 np0005475493 nova_compute[262220]:  </clock>
Oct  8 06:09:40 np0005475493 nova_compute[262220]:  <cpu mode="host-model" match="exact">
Oct  8 06:09:40 np0005475493 nova_compute[262220]:    <topology sockets="1" cores="1" threads="1"/>
Oct  8 06:09:40 np0005475493 nova_compute[262220]:  </cpu>
Oct  8 06:09:40 np0005475493 nova_compute[262220]:  <devices>
Oct  8 06:09:40 np0005475493 nova_compute[262220]:    <disk type="network" device="disk">
Oct  8 06:09:40 np0005475493 nova_compute[262220]:      <driver type="raw" cache="none"/>
Oct  8 06:09:40 np0005475493 nova_compute[262220]:      <source protocol="rbd" name="vms/f49b788e-70d1-4bc2-9f90-381017f2b232_disk">
Oct  8 06:09:40 np0005475493 nova_compute[262220]:        <host name="192.168.122.100" port="6789"/>
Oct  8 06:09:40 np0005475493 nova_compute[262220]:        <host name="192.168.122.102" port="6789"/>
Oct  8 06:09:40 np0005475493 nova_compute[262220]:        <host name="192.168.122.101" port="6789"/>
Oct  8 06:09:40 np0005475493 nova_compute[262220]:      </source>
Oct  8 06:09:40 np0005475493 nova_compute[262220]:      <auth username="openstack">
Oct  8 06:09:40 np0005475493 nova_compute[262220]:        <secret type="ceph" uuid="787292cc-8154-50c4-9e00-e9be3e817149"/>
Oct  8 06:09:40 np0005475493 nova_compute[262220]:      </auth>
Oct  8 06:09:40 np0005475493 nova_compute[262220]:      <target dev="vda" bus="virtio"/>
Oct  8 06:09:40 np0005475493 nova_compute[262220]:    </disk>
Oct  8 06:09:40 np0005475493 nova_compute[262220]:    <disk type="network" device="cdrom">
Oct  8 06:09:40 np0005475493 nova_compute[262220]:      <driver type="raw" cache="none"/>
Oct  8 06:09:40 np0005475493 nova_compute[262220]:      <source protocol="rbd" name="vms/f49b788e-70d1-4bc2-9f90-381017f2b232_disk.config">
Oct  8 06:09:40 np0005475493 nova_compute[262220]:        <host name="192.168.122.100" port="6789"/>
Oct  8 06:09:40 np0005475493 nova_compute[262220]:        <host name="192.168.122.102" port="6789"/>
Oct  8 06:09:40 np0005475493 nova_compute[262220]:        <host name="192.168.122.101" port="6789"/>
Oct  8 06:09:40 np0005475493 nova_compute[262220]:      </source>
Oct  8 06:09:40 np0005475493 nova_compute[262220]:      <auth username="openstack">
Oct  8 06:09:40 np0005475493 nova_compute[262220]:        <secret type="ceph" uuid="787292cc-8154-50c4-9e00-e9be3e817149"/>
Oct  8 06:09:40 np0005475493 nova_compute[262220]:      </auth>
Oct  8 06:09:40 np0005475493 nova_compute[262220]:      <target dev="sda" bus="sata"/>
Oct  8 06:09:40 np0005475493 nova_compute[262220]:    </disk>
Oct  8 06:09:40 np0005475493 nova_compute[262220]:    <interface type="ethernet">
Oct  8 06:09:40 np0005475493 nova_compute[262220]:      <mac address="fa:16:3e:9d:d1:5c"/>
Oct  8 06:09:40 np0005475493 nova_compute[262220]:      <model type="virtio"/>
Oct  8 06:09:40 np0005475493 nova_compute[262220]:      <driver name="vhost" rx_queue_size="512"/>
Oct  8 06:09:40 np0005475493 nova_compute[262220]:      <mtu size="1442"/>
Oct  8 06:09:40 np0005475493 nova_compute[262220]:      <target dev="tapd6bc221b-bf"/>
Oct  8 06:09:40 np0005475493 nova_compute[262220]:    </interface>
Oct  8 06:09:40 np0005475493 nova_compute[262220]:    <serial type="pty">
Oct  8 06:09:40 np0005475493 nova_compute[262220]:      <log file="/var/lib/nova/instances/f49b788e-70d1-4bc2-9f90-381017f2b232/console.log" append="off"/>
Oct  8 06:09:40 np0005475493 nova_compute[262220]:    </serial>
Oct  8 06:09:40 np0005475493 nova_compute[262220]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  8 06:09:40 np0005475493 nova_compute[262220]:    <video>
Oct  8 06:09:40 np0005475493 nova_compute[262220]:      <model type="virtio"/>
Oct  8 06:09:40 np0005475493 nova_compute[262220]:    </video>
Oct  8 06:09:40 np0005475493 nova_compute[262220]:    <input type="tablet" bus="usb"/>
Oct  8 06:09:40 np0005475493 nova_compute[262220]:    <rng model="virtio">
Oct  8 06:09:40 np0005475493 nova_compute[262220]:      <backend model="random">/dev/urandom</backend>
Oct  8 06:09:40 np0005475493 nova_compute[262220]:    </rng>
Oct  8 06:09:40 np0005475493 nova_compute[262220]:    <controller type="pci" model="pcie-root"/>
Oct  8 06:09:40 np0005475493 nova_compute[262220]:    <controller type="pci" model="pcie-root-port"/>
Oct  8 06:09:40 np0005475493 nova_compute[262220]:    <controller type="pci" model="pcie-root-port"/>
Oct  8 06:09:40 np0005475493 nova_compute[262220]:    <controller type="pci" model="pcie-root-port"/>
Oct  8 06:09:40 np0005475493 nova_compute[262220]:    <controller type="pci" model="pcie-root-port"/>
Oct  8 06:09:40 np0005475493 nova_compute[262220]:    <controller type="pci" model="pcie-root-port"/>
Oct  8 06:09:40 np0005475493 nova_compute[262220]:    <controller type="pci" model="pcie-root-port"/>
Oct  8 06:09:40 np0005475493 nova_compute[262220]:    <controller type="pci" model="pcie-root-port"/>
Oct  8 06:09:40 np0005475493 nova_compute[262220]:    <controller type="pci" model="pcie-root-port"/>
Oct  8 06:09:40 np0005475493 nova_compute[262220]:    <controller type="pci" model="pcie-root-port"/>
Oct  8 06:09:40 np0005475493 nova_compute[262220]:    <controller type="pci" model="pcie-root-port"/>
Oct  8 06:09:40 np0005475493 nova_compute[262220]:    <controller type="pci" model="pcie-root-port"/>
Oct  8 06:09:40 np0005475493 nova_compute[262220]:    <controller type="pci" model="pcie-root-port"/>
Oct  8 06:09:40 np0005475493 nova_compute[262220]:    <controller type="pci" model="pcie-root-port"/>
Oct  8 06:09:40 np0005475493 nova_compute[262220]:    <controller type="pci" model="pcie-root-port"/>
Oct  8 06:09:40 np0005475493 nova_compute[262220]:    <controller type="pci" model="pcie-root-port"/>
Oct  8 06:09:40 np0005475493 nova_compute[262220]:    <controller type="pci" model="pcie-root-port"/>
Oct  8 06:09:40 np0005475493 nova_compute[262220]:    <controller type="pci" model="pcie-root-port"/>
Oct  8 06:09:40 np0005475493 nova_compute[262220]:    <controller type="pci" model="pcie-root-port"/>
Oct  8 06:09:40 np0005475493 nova_compute[262220]:    <controller type="pci" model="pcie-root-port"/>
Oct  8 06:09:40 np0005475493 nova_compute[262220]:    <controller type="pci" model="pcie-root-port"/>
Oct  8 06:09:40 np0005475493 nova_compute[262220]:    <controller type="pci" model="pcie-root-port"/>
Oct  8 06:09:40 np0005475493 nova_compute[262220]:    <controller type="pci" model="pcie-root-port"/>
Oct  8 06:09:40 np0005475493 nova_compute[262220]:    <controller type="pci" model="pcie-root-port"/>
Oct  8 06:09:40 np0005475493 nova_compute[262220]:    <controller type="pci" model="pcie-root-port"/>
Oct  8 06:09:40 np0005475493 nova_compute[262220]:    <controller type="usb" index="0"/>
Oct  8 06:09:40 np0005475493 nova_compute[262220]:    <memballoon model="virtio">
Oct  8 06:09:40 np0005475493 nova_compute[262220]:      <stats period="10"/>
Oct  8 06:09:40 np0005475493 nova_compute[262220]:    </memballoon>
Oct  8 06:09:40 np0005475493 nova_compute[262220]:  </devices>
Oct  8 06:09:40 np0005475493 nova_compute[262220]: </domain>
Oct  8 06:09:40 np0005475493 nova_compute[262220]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  8 06:09:40 np0005475493 nova_compute[262220]: 2025-10-08 10:09:40.388 2 DEBUG nova.compute.manager [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: f49b788e-70d1-4bc2-9f90-381017f2b232] Preparing to wait for external event network-vif-plugged-d6bc221b-bf28-4c61-b116-cd61209c7f31 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  8 06:09:40 np0005475493 nova_compute[262220]: 2025-10-08 10:09:40.389 2 DEBUG oslo_concurrency.lockutils [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Acquiring lock "f49b788e-70d1-4bc2-9f90-381017f2b232-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  8 06:09:40 np0005475493 nova_compute[262220]: 2025-10-08 10:09:40.389 2 DEBUG oslo_concurrency.lockutils [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Lock "f49b788e-70d1-4bc2-9f90-381017f2b232-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  8 06:09:40 np0005475493 nova_compute[262220]: 2025-10-08 10:09:40.389 2 DEBUG oslo_concurrency.lockutils [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Lock "f49b788e-70d1-4bc2-9f90-381017f2b232-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  8 06:09:40 np0005475493 nova_compute[262220]: 2025-10-08 10:09:40.390 2 DEBUG nova.virt.libvirt.vif [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-08T10:09:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1358472667',display_name='tempest-TestNetworkBasicOps-server-1358472667',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1358472667',id=1,image_ref='e5994bac-385d-4cfe-962e-386aa0559983',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGCqOiRkCvMZRP8fdEWleadJa9k0DhfKx++pZ4blF3y05LQ1KZbyE4MTPNAMp9BRrBdK92MH6DC+pII7aGjodGwK7AspsjQ0hDDswc17pIZ089tmxUxos+hWl7sAULow5Q==',key_name='tempest-TestNetworkBasicOps-1893605271',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0edb2c85b88b4b168a4b2e8c5ed4a05c',ramdisk_id='',reservation_id='r-50tfjz8b',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e5994bac-385d-4cfe-962e-386aa0559983',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-139500885',owner_user_name='tempest-TestNetworkBasicOps-139500885-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-08T10:09:34Z,user_data=None,user_id='d50b19166a7245e390a6e29682191263',uuid=f49b788e-70d1-4bc2-9f90-381017f2b232,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d6bc221b-bf28-4c61-b116-cd61209c7f31", "address": "fa:16:3e:9d:d1:5c", "network": {"id": "f5c6f88b-41ed-45ea-b491-931be9a75138", "bridge": "br-int", "label": "tempest-network-smoke--1726135850", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "0edb2c85b88b4b168a4b2e8c5ed4a05c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6bc221b-bf", "ovs_interfaceid": "d6bc221b-bf28-4c61-b116-cd61209c7f31", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  8 06:09:40 np0005475493 nova_compute[262220]: 2025-10-08 10:09:40.390 2 DEBUG nova.network.os_vif_util [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Converting VIF {"id": "d6bc221b-bf28-4c61-b116-cd61209c7f31", "address": "fa:16:3e:9d:d1:5c", "network": {"id": "f5c6f88b-41ed-45ea-b491-931be9a75138", "bridge": "br-int", "label": "tempest-network-smoke--1726135850", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0edb2c85b88b4b168a4b2e8c5ed4a05c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6bc221b-bf", "ovs_interfaceid": "d6bc221b-bf28-4c61-b116-cd61209c7f31", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  8 06:09:40 np0005475493 nova_compute[262220]: 2025-10-08 10:09:40.390 2 DEBUG nova.network.os_vif_util [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9d:d1:5c,bridge_name='br-int',has_traffic_filtering=True,id=d6bc221b-bf28-4c61-b116-cd61209c7f31,network=Network(f5c6f88b-41ed-45ea-b491-931be9a75138),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd6bc221b-bf') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  8 06:09:40 np0005475493 nova_compute[262220]: 2025-10-08 10:09:40.391 2 DEBUG os_vif [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9d:d1:5c,bridge_name='br-int',has_traffic_filtering=True,id=d6bc221b-bf28-4c61-b116-cd61209c7f31,network=Network(f5c6f88b-41ed-45ea-b491-931be9a75138),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd6bc221b-bf') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  8 06:09:40 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct  8 06:09:40 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2857050921' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  8 06:09:40 np0005475493 nova_compute[262220]: 2025-10-08 10:09:40.418 2 DEBUG oslo_concurrency.processutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  8 06:09:40 np0005475493 nova_compute[262220]: 2025-10-08 10:09:40.423 2 DEBUG nova.compute.provider_tree [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Updating inventory in ProviderTree for provider 62e4b021-d3ae-43f9-883d-805e2c7d21a2 with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct  8 06:09:40 np0005475493 nova_compute[262220]: 2025-10-08 10:09:40.459 2 DEBUG ovsdbapp.backend.ovs_idl [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Oct  8 06:09:40 np0005475493 nova_compute[262220]: 2025-10-08 10:09:40.459 2 DEBUG ovsdbapp.backend.ovs_idl [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Oct  8 06:09:40 np0005475493 nova_compute[262220]: 2025-10-08 10:09:40.459 2 DEBUG ovsdbapp.backend.ovs_idl [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Oct  8 06:09:40 np0005475493 nova_compute[262220]: 2025-10-08 10:09:40.460 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] tcp:127.0.0.1:6640: entering CONNECTING _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct  8 06:09:40 np0005475493 nova_compute[262220]: 2025-10-08 10:09:40.460 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [POLLOUT] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:09:40 np0005475493 nova_compute[262220]: 2025-10-08 10:09:40.461 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct  8 06:09:40 np0005475493 nova_compute[262220]: 2025-10-08 10:09:40.461 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:09:40 np0005475493 nova_compute[262220]: 2025-10-08 10:09:40.462 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:09:40 np0005475493 nova_compute[262220]: 2025-10-08 10:09:40.464 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:09:40 np0005475493 nova_compute[262220]: 2025-10-08 10:09:40.474 2 DEBUG nova.scheduler.client.report [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Updated inventory for provider 62e4b021-d3ae-43f9-883d-805e2c7d21a2 with generation 3 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957#033[00m
Oct  8 06:09:40 np0005475493 nova_compute[262220]: 2025-10-08 10:09:40.474 2 DEBUG nova.compute.provider_tree [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Updating resource provider 62e4b021-d3ae-43f9-883d-805e2c7d21a2 generation from 3 to 4 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164#033[00m
Oct  8 06:09:40 np0005475493 nova_compute[262220]: 2025-10-08 10:09:40.474 2 DEBUG nova.compute.provider_tree [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Updating inventory in ProviderTree for provider 62e4b021-d3ae-43f9-883d-805e2c7d21a2 with inventory: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct  8 06:09:40 np0005475493 nova_compute[262220]: 2025-10-08 10:09:40.478 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:09:40 np0005475493 nova_compute[262220]: 2025-10-08 10:09:40.478 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  8 06:09:40 np0005475493 nova_compute[262220]: 2025-10-08 10:09:40.479 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  8 06:09:40 np0005475493 nova_compute[262220]: 2025-10-08 10:09:40.480 2 INFO oslo.privsep.daemon [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'vif_plug_ovs.privsep.vif_plug', '--privsep_sock_path', '/tmp/tmprzdv9b44/privsep.sock']#033[00m
Oct  8 06:09:40 np0005475493 nova_compute[262220]: 2025-10-08 10:09:40.495 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  8 06:09:40 np0005475493 nova_compute[262220]: 2025-10-08 10:09:40.495 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.902s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  8 06:09:40 np0005475493 nova_compute[262220]: 2025-10-08 10:09:40.674 2 DEBUG nova.network.neutron [req-c45e44aa-067a-47f2-9451-acbb30ef3e47 req-0ba52fe7-fbc1-4e50-a828-f8f659a55463 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: f49b788e-70d1-4bc2-9f90-381017f2b232] Updated VIF entry in instance network info cache for port d6bc221b-bf28-4c61-b116-cd61209c7f31. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  8 06:09:40 np0005475493 nova_compute[262220]: 2025-10-08 10:09:40.675 2 DEBUG nova.network.neutron [req-c45e44aa-067a-47f2-9451-acbb30ef3e47 req-0ba52fe7-fbc1-4e50-a828-f8f659a55463 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: f49b788e-70d1-4bc2-9f90-381017f2b232] Updating instance_info_cache with network_info: [{"id": "d6bc221b-bf28-4c61-b116-cd61209c7f31", "address": "fa:16:3e:9d:d1:5c", "network": {"id": "f5c6f88b-41ed-45ea-b491-931be9a75138", "bridge": "br-int", "label": "tempest-network-smoke--1726135850", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0edb2c85b88b4b168a4b2e8c5ed4a05c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6bc221b-bf", "ovs_interfaceid": "d6bc221b-bf28-4c61-b116-cd61209c7f31", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  8 06:09:40 np0005475493 nova_compute[262220]: 2025-10-08 10:09:40.690 2 DEBUG oslo_concurrency.lockutils [req-c45e44aa-067a-47f2-9451-acbb30ef3e47 req-0ba52fe7-fbc1-4e50-a828-f8f659a55463 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Releasing lock "refresh_cache-f49b788e-70d1-4bc2-9f90-381017f2b232" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  8 06:09:40 np0005475493 podman[267584]: 2025-10-08 10:09:40.888703337 +0000 UTC m=+0.050065955 container health_status 96c17e36c3e57b043a764fc4d64e20610b8683e4881cc1857643eca7aeb37784 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct  8 06:09:40 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:09:40 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac70001230 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:09:41 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:09:41 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:09:41 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:09:41.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:09:41 np0005475493 nova_compute[262220]: 2025-10-08 10:09:41.165 2 INFO oslo.privsep.daemon [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Spawned new privsep daemon via rootwrap#033[00m
Oct  8 06:09:41 np0005475493 nova_compute[262220]: 2025-10-08 10:09:41.047 565 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Oct  8 06:09:41 np0005475493 nova_compute[262220]: 2025-10-08 10:09:41.051 565 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Oct  8 06:09:41 np0005475493 nova_compute[262220]: 2025-10-08 10:09:41.053 565 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_NET_ADMIN/CAP_DAC_OVERRIDE|CAP_NET_ADMIN/none#033[00m
Oct  8 06:09:41 np0005475493 nova_compute[262220]: 2025-10-08 10:09:41.053 565 INFO oslo.privsep.daemon [-] privsep daemon running as pid 565#033[00m
Oct  8 06:09:41 np0005475493 nova_compute[262220]: 2025-10-08 10:09:41.495 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:09:41 np0005475493 nova_compute[262220]: 2025-10-08 10:09:41.495 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  8 06:09:41 np0005475493 nova_compute[262220]: 2025-10-08 10:09:41.495 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  8 06:09:41 np0005475493 nova_compute[262220]: 2025-10-08 10:09:41.498 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:09:41 np0005475493 nova_compute[262220]: 2025-10-08 10:09:41.498 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd6bc221b-bf, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  8 06:09:41 np0005475493 nova_compute[262220]: 2025-10-08 10:09:41.498 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd6bc221b-bf, col_values=(('external_ids', {'iface-id': 'd6bc221b-bf28-4c61-b116-cd61209c7f31', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:9d:d1:5c', 'vm-uuid': 'f49b788e-70d1-4bc2-9f90-381017f2b232'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  8 06:09:41 np0005475493 nova_compute[262220]: 2025-10-08 10:09:41.534 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] [instance: f49b788e-70d1-4bc2-9f90-381017f2b232] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Oct  8 06:09:41 np0005475493 nova_compute[262220]: 2025-10-08 10:09:41.535 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  8 06:09:41 np0005475493 nova_compute[262220]: 2025-10-08 10:09:41.535 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:09:41 np0005475493 nova_compute[262220]: 2025-10-08 10:09:41.536 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:09:41 np0005475493 nova_compute[262220]: 2025-10-08 10:09:41.536 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:09:41 np0005475493 nova_compute[262220]: 2025-10-08 10:09:41.546 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:09:41 np0005475493 NetworkManager[44872]: <info>  [1759918181.5483] manager: (tapd6bc221b-bf): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/25)
Oct  8 06:09:41 np0005475493 nova_compute[262220]: 2025-10-08 10:09:41.549 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  8 06:09:41 np0005475493 nova_compute[262220]: 2025-10-08 10:09:41.553 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:09:41 np0005475493 nova_compute[262220]: 2025-10-08 10:09:41.555 2 INFO os_vif [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9d:d1:5c,bridge_name='br-int',has_traffic_filtering=True,id=d6bc221b-bf28-4c61-b116-cd61209c7f31,network=Network(f5c6f88b-41ed-45ea-b491-931be9a75138),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd6bc221b-bf')#033[00m
Oct  8 06:09:41 np0005475493 nova_compute[262220]: 2025-10-08 10:09:41.616 2 DEBUG nova.virt.libvirt.driver [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  8 06:09:41 np0005475493 nova_compute[262220]: 2025-10-08 10:09:41.617 2 DEBUG nova.virt.libvirt.driver [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  8 06:09:41 np0005475493 nova_compute[262220]: 2025-10-08 10:09:41.617 2 DEBUG nova.virt.libvirt.driver [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] No VIF found with MAC fa:16:3e:9d:d1:5c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  8 06:09:41 np0005475493 nova_compute[262220]: 2025-10-08 10:09:41.618 2 INFO nova.virt.libvirt.driver [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: f49b788e-70d1-4bc2-9f90-381017f2b232] Using config drive#033[00m
Oct  8 06:09:41 np0005475493 nova_compute[262220]: 2025-10-08 10:09:41.640 2 DEBUG nova.storage.rbd_utils [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] rbd image f49b788e-70d1-4bc2-9f90-381017f2b232_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  8 06:09:41 np0005475493 podman[267628]: 2025-10-08 10:09:41.91131932 +0000 UTC m=+0.073869297 container health_status 1ece418ccdf19a8019e8be8e665c938dc3c333817a211973536e2416f095c311 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3)
Oct  8 06:09:42 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:09:42 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac94002090 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:09:42 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v752: 353 pgs: 353 active+clean; 74 MiB data, 204 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 1.5 MiB/s wr, 40 op/s
Oct  8 06:09:42 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:09:42 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac78003860 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:09:42 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:09:42 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:09:42 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:09:42.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:09:42 np0005475493 nova_compute[262220]: 2025-10-08 10:09:42.402 2 INFO nova.virt.libvirt.driver [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: f49b788e-70d1-4bc2-9f90-381017f2b232] Creating config drive at /var/lib/nova/instances/f49b788e-70d1-4bc2-9f90-381017f2b232/disk.config#033[00m
Oct  8 06:09:42 np0005475493 nova_compute[262220]: 2025-10-08 10:09:42.407 2 DEBUG oslo_concurrency.processutils [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f49b788e-70d1-4bc2-9f90-381017f2b232/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6xrqpvmg execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  8 06:09:42 np0005475493 nova_compute[262220]: 2025-10-08 10:09:42.543 2 DEBUG oslo_concurrency.processutils [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f49b788e-70d1-4bc2-9f90-381017f2b232/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6xrqpvmg" returned: 0 in 0.135s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  8 06:09:42 np0005475493 nova_compute[262220]: 2025-10-08 10:09:42.573 2 DEBUG nova.storage.rbd_utils [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] rbd image f49b788e-70d1-4bc2-9f90-381017f2b232_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  8 06:09:42 np0005475493 nova_compute[262220]: 2025-10-08 10:09:42.576 2 DEBUG oslo_concurrency.processutils [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f49b788e-70d1-4bc2-9f90-381017f2b232/disk.config f49b788e-70d1-4bc2-9f90-381017f2b232_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  8 06:09:42 np0005475493 nova_compute[262220]: 2025-10-08 10:09:42.721 2 DEBUG oslo_concurrency.processutils [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f49b788e-70d1-4bc2-9f90-381017f2b232/disk.config f49b788e-70d1-4bc2-9f90-381017f2b232_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.145s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  8 06:09:42 np0005475493 nova_compute[262220]: 2025-10-08 10:09:42.722 2 INFO nova.virt.libvirt.driver [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: f49b788e-70d1-4bc2-9f90-381017f2b232] Deleting local config drive /var/lib/nova/instances/f49b788e-70d1-4bc2-9f90-381017f2b232/disk.config because it was imported into RBD.#033[00m
Oct  8 06:09:42 np0005475493 systemd[1]: Starting libvirt secret daemon...
Oct  8 06:09:42 np0005475493 nova_compute[262220]: 2025-10-08 10:09:42.743 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:09:42 np0005475493 systemd[1]: Started libvirt secret daemon.
Oct  8 06:09:42 np0005475493 kernel: tun: Universal TUN/TAP device driver, 1.6
Oct  8 06:09:42 np0005475493 NetworkManager[44872]: <info>  [1759918182.8152] manager: (tapd6bc221b-bf): new Tun device (/org/freedesktop/NetworkManager/Devices/26)
Oct  8 06:09:42 np0005475493 kernel: tapd6bc221b-bf: entered promiscuous mode
Oct  8 06:09:42 np0005475493 ovn_controller[153187]: 2025-10-08T10:09:42Z|00027|binding|INFO|Claiming lport d6bc221b-bf28-4c61-b116-cd61209c7f31 for this chassis.
Oct  8 06:09:42 np0005475493 ovn_controller[153187]: 2025-10-08T10:09:42Z|00028|binding|INFO|d6bc221b-bf28-4c61-b116-cd61209c7f31: Claiming fa:16:3e:9d:d1:5c 10.100.0.6
Oct  8 06:09:42 np0005475493 nova_compute[262220]: 2025-10-08 10:09:42.818 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:09:42 np0005475493 nova_compute[262220]: 2025-10-08 10:09:42.822 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:09:42 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:09:42.836 163175 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9d:d1:5c 10.100.0.6'], port_security=['fa:16:3e:9d:d1:5c 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'f49b788e-70d1-4bc2-9f90-381017f2b232', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f5c6f88b-41ed-45ea-b491-931be9a75138', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0edb2c85b88b4b168a4b2e8c5ed4a05c', 'neutron:revision_number': '2', 'neutron:security_group_ids': '1b714465-ebb6-4c8b-ab03-a9d6fbedd458', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e6475b99-4f25-4ccc-88e7-4eafaf6f3891, chassis=[<ovs.db.idl.Row object at 0x7f191f105a60>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f191f105a60>], logical_port=d6bc221b-bf28-4c61-b116-cd61209c7f31) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  8 06:09:42 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:09:42.837 163175 INFO neutron.agent.ovn.metadata.agent [-] Port d6bc221b-bf28-4c61-b116-cd61209c7f31 in datapath f5c6f88b-41ed-45ea-b491-931be9a75138 bound to our chassis#033[00m
Oct  8 06:09:42 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:09:42.838 163175 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f5c6f88b-41ed-45ea-b491-931be9a75138#033[00m
Oct  8 06:09:42 np0005475493 systemd-udevd[267720]: Network interface NamePolicy= disabled on kernel command line.
Oct  8 06:09:42 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:09:42.840 163175 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.default', '--privsep_sock_path', '/tmp/tmphu37rar1/privsep.sock']#033[00m
Oct  8 06:09:42 np0005475493 NetworkManager[44872]: <info>  [1759918182.8581] device (tapd6bc221b-bf): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  8 06:09:42 np0005475493 NetworkManager[44872]: <info>  [1759918182.8590] device (tapd6bc221b-bf): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  8 06:09:42 np0005475493 systemd-machined[216030]: New machine qemu-1-instance-00000001.
Oct  8 06:09:42 np0005475493 systemd[1]: Started Virtual Machine qemu-1-instance-00000001.
Oct  8 06:09:42 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:09:42 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca0009ad0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:09:42 np0005475493 nova_compute[262220]: 2025-10-08 10:09:42.916 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:09:42 np0005475493 ovn_controller[153187]: 2025-10-08T10:09:42Z|00029|binding|INFO|Setting lport d6bc221b-bf28-4c61-b116-cd61209c7f31 ovn-installed in OVS
Oct  8 06:09:42 np0005475493 ovn_controller[153187]: 2025-10-08T10:09:42Z|00030|binding|INFO|Setting lport d6bc221b-bf28-4c61-b116-cd61209c7f31 up in Southbound
Oct  8 06:09:42 np0005475493 nova_compute[262220]: 2025-10-08 10:09:42.925 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:09:43 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:09:43 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:09:43 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:09:43.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:09:43 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:09:43.562 163175 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap#033[00m
Oct  8 06:09:43 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:09:43.562 163175 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmphu37rar1/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362#033[00m
Oct  8 06:09:43 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:09:43.421 267781 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Oct  8 06:09:43 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:09:43.426 267781 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Oct  8 06:09:43 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:09:43.428 267781 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/none#033[00m
Oct  8 06:09:43 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:09:43.429 267781 INFO oslo.privsep.daemon [-] privsep daemon running as pid 267781#033[00m
Oct  8 06:09:43 np0005475493 nova_compute[262220]: 2025-10-08 10:09:43.565 2 DEBUG nova.compute.manager [req-5b21bf1d-8b6a-411c-af27-a52abffd24eb req-30ed8aad-f45f-455b-a46f-c163b35ed074 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: f49b788e-70d1-4bc2-9f90-381017f2b232] Received event network-vif-plugged-d6bc221b-bf28-4c61-b116-cd61209c7f31 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  8 06:09:43 np0005475493 nova_compute[262220]: 2025-10-08 10:09:43.565 2 DEBUG oslo_concurrency.lockutils [req-5b21bf1d-8b6a-411c-af27-a52abffd24eb req-30ed8aad-f45f-455b-a46f-c163b35ed074 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Acquiring lock "f49b788e-70d1-4bc2-9f90-381017f2b232-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  8 06:09:43 np0005475493 nova_compute[262220]: 2025-10-08 10:09:43.565 2 DEBUG oslo_concurrency.lockutils [req-5b21bf1d-8b6a-411c-af27-a52abffd24eb req-30ed8aad-f45f-455b-a46f-c163b35ed074 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Lock "f49b788e-70d1-4bc2-9f90-381017f2b232-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  8 06:09:43 np0005475493 nova_compute[262220]: 2025-10-08 10:09:43.565 2 DEBUG oslo_concurrency.lockutils [req-5b21bf1d-8b6a-411c-af27-a52abffd24eb req-30ed8aad-f45f-455b-a46f-c163b35ed074 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Lock "f49b788e-70d1-4bc2-9f90-381017f2b232-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  8 06:09:43 np0005475493 nova_compute[262220]: 2025-10-08 10:09:43.566 2 DEBUG nova.compute.manager [req-5b21bf1d-8b6a-411c-af27-a52abffd24eb req-30ed8aad-f45f-455b-a46f-c163b35ed074 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: f49b788e-70d1-4bc2-9f90-381017f2b232] Processing event network-vif-plugged-d6bc221b-bf28-4c61-b116-cd61209c7f31 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  8 06:09:43 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:09:43.566 267781 DEBUG oslo.privsep.daemon [-] privsep: reply[27e84b03-ab95-46e7-94e6-cdde1d3fdc38]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  8 06:09:43 np0005475493 nova_compute[262220]: 2025-10-08 10:09:43.821 2 DEBUG nova.virt.driver [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] Emitting event <LifecycleEvent: 1759918183.8208282, f49b788e-70d1-4bc2-9f90-381017f2b232 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  8 06:09:43 np0005475493 nova_compute[262220]: 2025-10-08 10:09:43.822 2 INFO nova.compute.manager [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] [instance: f49b788e-70d1-4bc2-9f90-381017f2b232] VM Started (Lifecycle Event)#033[00m
Oct  8 06:09:43 np0005475493 nova_compute[262220]: 2025-10-08 10:09:43.826 2 DEBUG nova.compute.manager [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: f49b788e-70d1-4bc2-9f90-381017f2b232] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  8 06:09:43 np0005475493 nova_compute[262220]: 2025-10-08 10:09:43.839 2 DEBUG nova.virt.libvirt.driver [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: f49b788e-70d1-4bc2-9f90-381017f2b232] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  8 06:09:43 np0005475493 nova_compute[262220]: 2025-10-08 10:09:43.842 2 INFO nova.virt.libvirt.driver [-] [instance: f49b788e-70d1-4bc2-9f90-381017f2b232] Instance spawned successfully.#033[00m
Oct  8 06:09:43 np0005475493 nova_compute[262220]: 2025-10-08 10:09:43.842 2 DEBUG nova.virt.libvirt.driver [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: f49b788e-70d1-4bc2-9f90-381017f2b232] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  8 06:09:43 np0005475493 nova_compute[262220]: 2025-10-08 10:09:43.865 2 DEBUG nova.compute.manager [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] [instance: f49b788e-70d1-4bc2-9f90-381017f2b232] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  8 06:09:43 np0005475493 nova_compute[262220]: 2025-10-08 10:09:43.872 2 DEBUG nova.virt.libvirt.driver [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: f49b788e-70d1-4bc2-9f90-381017f2b232] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  8 06:09:43 np0005475493 nova_compute[262220]: 2025-10-08 10:09:43.872 2 DEBUG nova.virt.libvirt.driver [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: f49b788e-70d1-4bc2-9f90-381017f2b232] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  8 06:09:43 np0005475493 nova_compute[262220]: 2025-10-08 10:09:43.873 2 DEBUG nova.virt.libvirt.driver [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: f49b788e-70d1-4bc2-9f90-381017f2b232] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  8 06:09:43 np0005475493 nova_compute[262220]: 2025-10-08 10:09:43.873 2 DEBUG nova.virt.libvirt.driver [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: f49b788e-70d1-4bc2-9f90-381017f2b232] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  8 06:09:43 np0005475493 nova_compute[262220]: 2025-10-08 10:09:43.874 2 DEBUG nova.virt.libvirt.driver [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: f49b788e-70d1-4bc2-9f90-381017f2b232] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  8 06:09:43 np0005475493 nova_compute[262220]: 2025-10-08 10:09:43.874 2 DEBUG nova.virt.libvirt.driver [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: f49b788e-70d1-4bc2-9f90-381017f2b232] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  8 06:09:43 np0005475493 nova_compute[262220]: 2025-10-08 10:09:43.877 2 DEBUG nova.compute.manager [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] [instance: f49b788e-70d1-4bc2-9f90-381017f2b232] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  8 06:09:43 np0005475493 nova_compute[262220]: 2025-10-08 10:09:43.905 2 INFO nova.compute.manager [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] [instance: f49b788e-70d1-4bc2-9f90-381017f2b232] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  8 06:09:43 np0005475493 nova_compute[262220]: 2025-10-08 10:09:43.906 2 DEBUG nova.virt.driver [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] Emitting event <LifecycleEvent: 1759918183.8219543, f49b788e-70d1-4bc2-9f90-381017f2b232 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  8 06:09:43 np0005475493 nova_compute[262220]: 2025-10-08 10:09:43.906 2 INFO nova.compute.manager [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] [instance: f49b788e-70d1-4bc2-9f90-381017f2b232] VM Paused (Lifecycle Event)#033[00m
Oct  8 06:09:43 np0005475493 nova_compute[262220]: 2025-10-08 10:09:43.933 2 DEBUG nova.compute.manager [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] [instance: f49b788e-70d1-4bc2-9f90-381017f2b232] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  8 06:09:43 np0005475493 nova_compute[262220]: 2025-10-08 10:09:43.936 2 DEBUG nova.virt.driver [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] Emitting event <LifecycleEvent: 1759918183.8379052, f49b788e-70d1-4bc2-9f90-381017f2b232 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  8 06:09:43 np0005475493 nova_compute[262220]: 2025-10-08 10:09:43.937 2 INFO nova.compute.manager [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] [instance: f49b788e-70d1-4bc2-9f90-381017f2b232] VM Resumed (Lifecycle Event)#033[00m
Oct  8 06:09:43 np0005475493 nova_compute[262220]: 2025-10-08 10:09:43.942 2 INFO nova.compute.manager [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: f49b788e-70d1-4bc2-9f90-381017f2b232] Took 9.83 seconds to spawn the instance on the hypervisor.#033[00m
Oct  8 06:09:43 np0005475493 nova_compute[262220]: 2025-10-08 10:09:43.943 2 DEBUG nova.compute.manager [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: f49b788e-70d1-4bc2-9f90-381017f2b232] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  8 06:09:43 np0005475493 nova_compute[262220]: 2025-10-08 10:09:43.955 2 DEBUG nova.compute.manager [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] [instance: f49b788e-70d1-4bc2-9f90-381017f2b232] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  8 06:09:43 np0005475493 nova_compute[262220]: 2025-10-08 10:09:43.960 2 DEBUG nova.compute.manager [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] [instance: f49b788e-70d1-4bc2-9f90-381017f2b232] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  8 06:09:44 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:09:44 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac70001230 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:09:44 np0005475493 nova_compute[262220]: 2025-10-08 10:09:44.024 2 INFO nova.compute.manager [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] [instance: f49b788e-70d1-4bc2-9f90-381017f2b232] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  8 06:09:44 np0005475493 nova_compute[262220]: 2025-10-08 10:09:44.044 2 INFO nova.compute.manager [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: f49b788e-70d1-4bc2-9f90-381017f2b232] Took 10.82 seconds to build instance.#033[00m
Oct  8 06:09:44 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:09:44 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e155 do_prune osdmap full prune enabled
Oct  8 06:09:44 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 e156: 3 total, 3 up, 3 in
Oct  8 06:09:44 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v754: 353 pgs: 353 active+clean; 88 MiB data, 216 MiB used, 60 GiB / 60 GiB avail; 3.2 MiB/s rd, 3.3 MiB/s wr, 80 op/s
Oct  8 06:09:44 np0005475493 ceph-mon[73572]: log_channel(cluster) log [DBG] : osdmap e156: 3 total, 3 up, 3 in
Oct  8 06:09:44 np0005475493 nova_compute[262220]: 2025-10-08 10:09:44.113 2 DEBUG oslo_concurrency.lockutils [None req-00af0c48-a270-40be-9625-b5515367ace7 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Lock "f49b788e-70d1-4bc2-9f90-381017f2b232" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.975s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  8 06:09:44 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:09:44.263 267781 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  8 06:09:44 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:09:44.263 267781 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  8 06:09:44 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:09:44.263 267781 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  8 06:09:44 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:09:44 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac70001230 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:09:44 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:09:44 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:09:44 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:09:44.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:09:44 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:09:44 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac78003860 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:09:44 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:09:44.993 267781 DEBUG oslo.privsep.daemon [-] privsep: reply[55e29e44-43e9-4307-9609-e1b444c9bdc9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  8 06:09:44 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:09:44.995 163175 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf5c6f88b-41 in ovnmeta-f5c6f88b-41ed-45ea-b491-931be9a75138 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  8 06:09:44 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:09:44.997 267781 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf5c6f88b-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  8 06:09:44 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:09:44.997 267781 DEBUG oslo.privsep.daemon [-] privsep: reply[7702c2da-1e52-48ee-a58c-d1e43bd2f872]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  8 06:09:45 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:09:45.000 267781 DEBUG oslo.privsep.daemon [-] privsep: reply[23cc25fd-696f-4649-b088-63f0487f02c0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  8 06:09:45 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:09:45 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000034s ======
Oct  8 06:09:45 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:09:45.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct  8 06:09:45 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:09:45.022 163290 DEBUG oslo.privsep.daemon [-] privsep: reply[8cf58830-d389-4c57-b280-b1e8f94041d5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  8 06:09:45 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:09:45.051 267781 DEBUG oslo.privsep.daemon [-] privsep: reply[2165f542-8262-4264-a665-3410dc043bca]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  8 06:09:45 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:09:45.053 163175 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.link_cmd', '--privsep_sock_path', '/tmp/tmpwyl_6z3j/privsep.sock']#033[00m
Oct  8 06:09:45 np0005475493 nova_compute[262220]: 2025-10-08 10:09:45.655 2 DEBUG nova.compute.manager [req-5070819c-8f79-4f4c-b487-e46479b66067 req-0ded576d-7c5e-42ea-9d33-4de13ac216ac 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: f49b788e-70d1-4bc2-9f90-381017f2b232] Received event network-vif-plugged-d6bc221b-bf28-4c61-b116-cd61209c7f31 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  8 06:09:45 np0005475493 nova_compute[262220]: 2025-10-08 10:09:45.655 2 DEBUG oslo_concurrency.lockutils [req-5070819c-8f79-4f4c-b487-e46479b66067 req-0ded576d-7c5e-42ea-9d33-4de13ac216ac 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Acquiring lock "f49b788e-70d1-4bc2-9f90-381017f2b232-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  8 06:09:45 np0005475493 nova_compute[262220]: 2025-10-08 10:09:45.655 2 DEBUG oslo_concurrency.lockutils [req-5070819c-8f79-4f4c-b487-e46479b66067 req-0ded576d-7c5e-42ea-9d33-4de13ac216ac 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Lock "f49b788e-70d1-4bc2-9f90-381017f2b232-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  8 06:09:45 np0005475493 nova_compute[262220]: 2025-10-08 10:09:45.656 2 DEBUG oslo_concurrency.lockutils [req-5070819c-8f79-4f4c-b487-e46479b66067 req-0ded576d-7c5e-42ea-9d33-4de13ac216ac 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Lock "f49b788e-70d1-4bc2-9f90-381017f2b232-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  8 06:09:45 np0005475493 nova_compute[262220]: 2025-10-08 10:09:45.656 2 DEBUG nova.compute.manager [req-5070819c-8f79-4f4c-b487-e46479b66067 req-0ded576d-7c5e-42ea-9d33-4de13ac216ac 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: f49b788e-70d1-4bc2-9f90-381017f2b232] No waiting events found dispatching network-vif-plugged-d6bc221b-bf28-4c61-b116-cd61209c7f31 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  8 06:09:45 np0005475493 nova_compute[262220]: 2025-10-08 10:09:45.656 2 WARNING nova.compute.manager [req-5070819c-8f79-4f4c-b487-e46479b66067 req-0ded576d-7c5e-42ea-9d33-4de13ac216ac 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: f49b788e-70d1-4bc2-9f90-381017f2b232] Received unexpected event network-vif-plugged-d6bc221b-bf28-4c61-b116-cd61209c7f31 for instance with vm_state active and task_state None.#033[00m
Oct  8 06:09:45 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:09:45.718 163175 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap#033[00m
Oct  8 06:09:45 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:09:45.719 163175 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpwyl_6z3j/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362#033[00m
Oct  8 06:09:45 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:09:45.578 267799 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Oct  8 06:09:45 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:09:45.582 267799 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Oct  8 06:09:45 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:09:45.584 267799 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_NET_ADMIN|CAP_SYS_ADMIN/none#033[00m
Oct  8 06:09:45 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:09:45.585 267799 INFO oslo.privsep.daemon [-] privsep daemon running as pid 267799#033[00m
Oct  8 06:09:45 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:09:45.721 267799 DEBUG oslo.privsep.daemon [-] privsep: reply[3ce0c1be-5fd7-4410-b3c5-8242269f54f0]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  8 06:09:45 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:09:45] "GET /metrics HTTP/1.1" 200 48403 "" "Prometheus/2.51.0"
Oct  8 06:09:45 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:09:45] "GET /metrics HTTP/1.1" 200 48403 "" "Prometheus/2.51.0"
Oct  8 06:09:46 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:09:46 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca0009ad0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:09:46 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v755: 353 pgs: 353 active+clean; 88 MiB data, 216 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 2.7 MiB/s wr, 65 op/s
Oct  8 06:09:46 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:09:46.231 267799 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  8 06:09:46 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:09:46.231 267799 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  8 06:09:46 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:09:46.231 267799 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  8 06:09:46 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:09:46 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac70001230 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:09:46 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:09:46 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:09:46 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:09:46.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:09:46 np0005475493 nova_compute[262220]: 2025-10-08 10:09:46.547 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:09:46 np0005475493 NetworkManager[44872]: <info>  [1759918186.5625] manager: (patch-br-int-to-provnet-5eda3e1f-dd4f-4c7d-b5fb-d8c0d9996d6d): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/27)
Oct  8 06:09:46 np0005475493 NetworkManager[44872]: <info>  [1759918186.5630] device (patch-br-int-to-provnet-5eda3e1f-dd4f-4c7d-b5fb-d8c0d9996d6d)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  8 06:09:46 np0005475493 NetworkManager[44872]: <info>  [1759918186.5637] manager: (patch-provnet-5eda3e1f-dd4f-4c7d-b5fb-d8c0d9996d6d-to-br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/28)
Oct  8 06:09:46 np0005475493 NetworkManager[44872]: <info>  [1759918186.5639] device (patch-provnet-5eda3e1f-dd4f-4c7d-b5fb-d8c0d9996d6d-to-br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  8 06:09:46 np0005475493 nova_compute[262220]: 2025-10-08 10:09:46.563 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:09:46 np0005475493 NetworkManager[44872]: <info>  [1759918186.5645] manager: (patch-provnet-5eda3e1f-dd4f-4c7d-b5fb-d8c0d9996d6d-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/29)
Oct  8 06:09:46 np0005475493 NetworkManager[44872]: <info>  [1759918186.5649] manager: (patch-br-int-to-provnet-5eda3e1f-dd4f-4c7d-b5fb-d8c0d9996d6d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/30)
Oct  8 06:09:46 np0005475493 NetworkManager[44872]: <info>  [1759918186.5652] device (patch-br-int-to-provnet-5eda3e1f-dd4f-4c7d-b5fb-d8c0d9996d6d)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Oct  8 06:09:46 np0005475493 NetworkManager[44872]: <info>  [1759918186.5654] device (patch-provnet-5eda3e1f-dd4f-4c7d-b5fb-d8c0d9996d6d-to-br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Oct  8 06:09:46 np0005475493 nova_compute[262220]: 2025-10-08 10:09:46.595 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:09:46 np0005475493 nova_compute[262220]: 2025-10-08 10:09:46.603 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:09:46 np0005475493 nova_compute[262220]: 2025-10-08 10:09:46.805 2 DEBUG nova.compute.manager [req-5dd18c9b-6d2c-4533-8cf0-3812628e4abe req-fc7c94ee-8b91-44f5-879e-e6492d5dd02d 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: f49b788e-70d1-4bc2-9f90-381017f2b232] Received event network-changed-d6bc221b-bf28-4c61-b116-cd61209c7f31 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  8 06:09:46 np0005475493 nova_compute[262220]: 2025-10-08 10:09:46.805 2 DEBUG nova.compute.manager [req-5dd18c9b-6d2c-4533-8cf0-3812628e4abe req-fc7c94ee-8b91-44f5-879e-e6492d5dd02d 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: f49b788e-70d1-4bc2-9f90-381017f2b232] Refreshing instance network info cache due to event network-changed-d6bc221b-bf28-4c61-b116-cd61209c7f31. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  8 06:09:46 np0005475493 nova_compute[262220]: 2025-10-08 10:09:46.806 2 DEBUG oslo_concurrency.lockutils [req-5dd18c9b-6d2c-4533-8cf0-3812628e4abe req-fc7c94ee-8b91-44f5-879e-e6492d5dd02d 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Acquiring lock "refresh_cache-f49b788e-70d1-4bc2-9f90-381017f2b232" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  8 06:09:46 np0005475493 nova_compute[262220]: 2025-10-08 10:09:46.806 2 DEBUG oslo_concurrency.lockutils [req-5dd18c9b-6d2c-4533-8cf0-3812628e4abe req-fc7c94ee-8b91-44f5-879e-e6492d5dd02d 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Acquired lock "refresh_cache-f49b788e-70d1-4bc2-9f90-381017f2b232" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  8 06:09:46 np0005475493 nova_compute[262220]: 2025-10-08 10:09:46.806 2 DEBUG nova.network.neutron [req-5dd18c9b-6d2c-4533-8cf0-3812628e4abe req-fc7c94ee-8b91-44f5-879e-e6492d5dd02d 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: f49b788e-70d1-4bc2-9f90-381017f2b232] Refreshing network info cache for port d6bc221b-bf28-4c61-b116-cd61209c7f31 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  8 06:09:46 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:09:46.851 267799 DEBUG oslo.privsep.daemon [-] privsep: reply[ac37b49d-2dc3-4f27-94df-f62f0b8236ec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  8 06:09:46 np0005475493 NetworkManager[44872]: <info>  [1759918186.8627] manager: (tapf5c6f88b-40): new Veth device (/org/freedesktop/NetworkManager/Devices/31)
Oct  8 06:09:46 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:09:46.863 267781 DEBUG oslo.privsep.daemon [-] privsep: reply[d7e438d7-548e-4aa1-985e-0bb5c3813b3f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  8 06:09:46 np0005475493 systemd-udevd[267836]: Network interface NamePolicy= disabled on kernel command line.
Oct  8 06:09:46 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:09:46.891 267799 DEBUG oslo.privsep.daemon [-] privsep: reply[8459f282-79bb-476e-b7fe-62df0069282b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  8 06:09:46 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:09:46.894 267799 DEBUG oslo.privsep.daemon [-] privsep: reply[6da6cfdf-2a04-47d0-b940-ac0d339c73ea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  8 06:09:46 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:09:46 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac70001230 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:09:46 np0005475493 NetworkManager[44872]: <info>  [1759918186.9187] device (tapf5c6f88b-40): carrier: link connected
Oct  8 06:09:46 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:09:46.922 267799 DEBUG oslo.privsep.daemon [-] privsep: reply[392ae52a-886a-416e-9bda-1c40a67cdc66]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  8 06:09:46 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:09:46.940 267781 DEBUG oslo.privsep.daemon [-] privsep: reply[17135495-2ad5-49d7-a551-546314e2dbaf]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf5c6f88b-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4e:9c:fc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 15], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 414555, 'reachable_time': 36725, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 267855, 'error': None, 'target': 'ovnmeta-f5c6f88b-41ed-45ea-b491-931be9a75138', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  8 06:09:46 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:09:46.956 267781 DEBUG oslo.privsep.daemon [-] privsep: reply[8e6bf5c0-5fb8-4689-827c-cb50075cd81b]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe4e:9cfc'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 414555, 'tstamp': 414555}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 267856, 'error': None, 'target': 'ovnmeta-f5c6f88b-41ed-45ea-b491-931be9a75138', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  8 06:09:46 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:09:46.971 267781 DEBUG oslo.privsep.daemon [-] privsep: reply[c17937ff-cbbd-44bb-ba01-d271f9567e2a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf5c6f88b-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4e:9c:fc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 15], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 414555, 'reachable_time': 36725, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 267857, 'error': None, 'target': 'ovnmeta-f5c6f88b-41ed-45ea-b491-931be9a75138', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  8 06:09:46 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:09:46.998 267781 DEBUG oslo.privsep.daemon [-] privsep: reply[e9e3de11-12a2-424f-a8f4-a20e1112c990]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  8 06:09:47 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:09:47 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:09:47 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:09:47.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:09:47 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:09:47.050 267781 DEBUG oslo.privsep.daemon [-] privsep: reply[0ee9428c-41f7-4552-90db-eb4f703afadf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  8 06:09:47 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:09:47.052 163175 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf5c6f88b-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  8 06:09:47 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:09:47.052 163175 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  8 06:09:47 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:09:47.053 163175 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf5c6f88b-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  8 06:09:47 np0005475493 kernel: tapf5c6f88b-40: entered promiscuous mode
Oct  8 06:09:47 np0005475493 NetworkManager[44872]: <info>  [1759918187.0554] manager: (tapf5c6f88b-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/32)
Oct  8 06:09:47 np0005475493 nova_compute[262220]: 2025-10-08 10:09:47.054 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:09:47 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:09:47.059 163175 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf5c6f88b-40, col_values=(('external_ids', {'iface-id': '950da3ad-35fb-4b98-a8cb-0ee192607b20'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  8 06:09:47 np0005475493 ovn_controller[153187]: 2025-10-08T10:09:47Z|00031|binding|INFO|Releasing lport 950da3ad-35fb-4b98-a8cb-0ee192607b20 from this chassis (sb_readonly=0)
Oct  8 06:09:47 np0005475493 nova_compute[262220]: 2025-10-08 10:09:47.060 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:09:47 np0005475493 nova_compute[262220]: 2025-10-08 10:09:47.073 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:09:47 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:09:47.074 163175 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f5c6f88b-41ed-45ea-b491-931be9a75138.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f5c6f88b-41ed-45ea-b491-931be9a75138.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  8 06:09:47 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:09:47.075 267781 DEBUG oslo.privsep.daemon [-] privsep: reply[fc79314d-7fd6-4149-9681-fabdd4d1f994]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  8 06:09:47 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:09:47.076 163175 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  8 06:09:47 np0005475493 ovn_metadata_agent[163169]: global
Oct  8 06:09:47 np0005475493 ovn_metadata_agent[163169]:    log         /dev/log local0 debug
Oct  8 06:09:47 np0005475493 ovn_metadata_agent[163169]:    log-tag     haproxy-metadata-proxy-f5c6f88b-41ed-45ea-b491-931be9a75138
Oct  8 06:09:47 np0005475493 ovn_metadata_agent[163169]:    user        root
Oct  8 06:09:47 np0005475493 ovn_metadata_agent[163169]:    group       root
Oct  8 06:09:47 np0005475493 ovn_metadata_agent[163169]:    maxconn     1024
Oct  8 06:09:47 np0005475493 ovn_metadata_agent[163169]:    pidfile     /var/lib/neutron/external/pids/f5c6f88b-41ed-45ea-b491-931be9a75138.pid.haproxy
Oct  8 06:09:47 np0005475493 ovn_metadata_agent[163169]:    daemon
Oct  8 06:09:47 np0005475493 ovn_metadata_agent[163169]: 
Oct  8 06:09:47 np0005475493 ovn_metadata_agent[163169]: defaults
Oct  8 06:09:47 np0005475493 ovn_metadata_agent[163169]:    log global
Oct  8 06:09:47 np0005475493 ovn_metadata_agent[163169]:    mode http
Oct  8 06:09:47 np0005475493 ovn_metadata_agent[163169]:    option httplog
Oct  8 06:09:47 np0005475493 ovn_metadata_agent[163169]:    option dontlognull
Oct  8 06:09:47 np0005475493 ovn_metadata_agent[163169]:    option http-server-close
Oct  8 06:09:47 np0005475493 ovn_metadata_agent[163169]:    option forwardfor
Oct  8 06:09:47 np0005475493 ovn_metadata_agent[163169]:    retries                 3
Oct  8 06:09:47 np0005475493 ovn_metadata_agent[163169]:    timeout http-request    30s
Oct  8 06:09:47 np0005475493 ovn_metadata_agent[163169]:    timeout connect         30s
Oct  8 06:09:47 np0005475493 ovn_metadata_agent[163169]:    timeout client          32s
Oct  8 06:09:47 np0005475493 ovn_metadata_agent[163169]:    timeout server          32s
Oct  8 06:09:47 np0005475493 ovn_metadata_agent[163169]:    timeout http-keep-alive 30s
Oct  8 06:09:47 np0005475493 ovn_metadata_agent[163169]: 
Oct  8 06:09:47 np0005475493 ovn_metadata_agent[163169]: 
Oct  8 06:09:47 np0005475493 ovn_metadata_agent[163169]: listen listener
Oct  8 06:09:47 np0005475493 ovn_metadata_agent[163169]:    bind 169.254.169.254:80
Oct  8 06:09:47 np0005475493 ovn_metadata_agent[163169]:    server metadata /var/lib/neutron/metadata_proxy
Oct  8 06:09:47 np0005475493 ovn_metadata_agent[163169]:    http-request add-header X-OVN-Network-ID f5c6f88b-41ed-45ea-b491-931be9a75138
Oct  8 06:09:47 np0005475493 ovn_metadata_agent[163169]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  8 06:09:47 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:09:47.078 163175 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f5c6f88b-41ed-45ea-b491-931be9a75138', 'env', 'PROCESS_TAG=haproxy-f5c6f88b-41ed-45ea-b491-931be9a75138', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f5c6f88b-41ed-45ea-b491-931be9a75138.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  8 06:09:47 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:09:47.119Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct  8 06:09:47 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:09:47.119Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct  8 06:09:47 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:09:47.120Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct  8 06:09:47 np0005475493 podman[267890]: 2025-10-08 10:09:47.446704938 +0000 UTC m=+0.052552377 container create 0efd5af81104fe944b7194eb793292499da3df8ecfcc2e35b2f5a1c79a3b1927 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-f5c6f88b-41ed-45ea-b491-931be9a75138, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct  8 06:09:47 np0005475493 systemd[1]: Started libpod-conmon-0efd5af81104fe944b7194eb793292499da3df8ecfcc2e35b2f5a1c79a3b1927.scope.
Oct  8 06:09:47 np0005475493 podman[267890]: 2025-10-08 10:09:47.420678764 +0000 UTC m=+0.026526233 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct  8 06:09:47 np0005475493 systemd[1]: Started libcrun container.
Oct  8 06:09:47 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78e3d1aa5320eb20e28cf9285cbf8434fde889ae25e1684b2e2a512764f7589a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  8 06:09:47 np0005475493 podman[267890]: 2025-10-08 10:09:47.55147187 +0000 UTC m=+0.157319339 container init 0efd5af81104fe944b7194eb793292499da3df8ecfcc2e35b2f5a1c79a3b1927 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-f5c6f88b-41ed-45ea-b491-931be9a75138, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.license=GPLv2)
Oct  8 06:09:47 np0005475493 podman[267890]: 2025-10-08 10:09:47.557134066 +0000 UTC m=+0.162981505 container start 0efd5af81104fe944b7194eb793292499da3df8ecfcc2e35b2f5a1c79a3b1927 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-f5c6f88b-41ed-45ea-b491-931be9a75138, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct  8 06:09:47 np0005475493 neutron-haproxy-ovnmeta-f5c6f88b-41ed-45ea-b491-931be9a75138[267905]: [NOTICE]   (267909) : New worker (267911) forked
Oct  8 06:09:47 np0005475493 neutron-haproxy-ovnmeta-f5c6f88b-41ed-45ea-b491-931be9a75138[267905]: [NOTICE]   (267909) : Loading success.
Oct  8 06:09:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] Optimize plan auto_2025-10-08_10:09:47
Oct  8 06:09:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  8 06:09:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] do_upmap
Oct  8 06:09:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] pools ['.nfs', '.mgr', 'volumes', 'vms', 'cephfs.cephfs.data', 'images', 'default.rgw.control', 'default.rgw.log', 'cephfs.cephfs.meta', 'default.rgw.meta', '.rgw.root', 'backups']
Oct  8 06:09:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] prepared 0/10 upmap changes
Oct  8 06:09:47 np0005475493 nova_compute[262220]: 2025-10-08 10:09:47.745 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:09:47 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  8 06:09:47 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  8 06:09:47 np0005475493 nova_compute[262220]: 2025-10-08 10:09:47.897 2 DEBUG nova.network.neutron [req-5dd18c9b-6d2c-4533-8cf0-3812628e4abe req-fc7c94ee-8b91-44f5-879e-e6492d5dd02d 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: f49b788e-70d1-4bc2-9f90-381017f2b232] Updated VIF entry in instance network info cache for port d6bc221b-bf28-4c61-b116-cd61209c7f31. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  8 06:09:47 np0005475493 nova_compute[262220]: 2025-10-08 10:09:47.897 2 DEBUG nova.network.neutron [req-5dd18c9b-6d2c-4533-8cf0-3812628e4abe req-fc7c94ee-8b91-44f5-879e-e6492d5dd02d 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: f49b788e-70d1-4bc2-9f90-381017f2b232] Updating instance_info_cache with network_info: [{"id": "d6bc221b-bf28-4c61-b116-cd61209c7f31", "address": "fa:16:3e:9d:d1:5c", "network": {"id": "f5c6f88b-41ed-45ea-b491-931be9a75138", "bridge": "br-int", "label": "tempest-network-smoke--1726135850", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.179", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0edb2c85b88b4b168a4b2e8c5ed4a05c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6bc221b-bf", "ovs_interfaceid": "d6bc221b-bf28-4c61-b116-cd61209c7f31", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  8 06:09:47 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 06:09:47 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 06:09:47 np0005475493 nova_compute[262220]: 2025-10-08 10:09:47.921 2 DEBUG oslo_concurrency.lockutils [req-5dd18c9b-6d2c-4533-8cf0-3812628e4abe req-fc7c94ee-8b91-44f5-879e-e6492d5dd02d 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Releasing lock "refresh_cache-f49b788e-70d1-4bc2-9f90-381017f2b232" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  8 06:09:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] _maybe_adjust
Oct  8 06:09:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:09:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  8 06:09:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:09:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00034841348814872695 of space, bias 1.0, pg target 0.10452404644461809 quantized to 32 (current 32)
Oct  8 06:09:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:09:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 06:09:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:09:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 06:09:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:09:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Oct  8 06:09:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:09:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  8 06:09:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:09:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 06:09:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:09:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct  8 06:09:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:09:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  8 06:09:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:09:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 06:09:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:09:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  8 06:09:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:09:47 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  8 06:09:48 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:09:48 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac78003860 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:09:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  8 06:09:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  8 06:09:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  8 06:09:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  8 06:09:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  8 06:09:48 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v756: 353 pgs: 353 active+clean; 88 MiB data, 216 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 55 op/s
Oct  8 06:09:48 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 06:09:48 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 06:09:48 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 06:09:48 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 06:09:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  8 06:09:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  8 06:09:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  8 06:09:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  8 06:09:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  8 06:09:48 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:09:48 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca0009ad0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:09:48 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:09:48 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:09:48 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:09:48.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:09:48 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:09:48 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac70001230 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:09:49 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:09:49 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:09:49 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:09:49.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:09:49 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:09:49 np0005475493 ceph-mon[73572]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #51. Immutable memtables: 0.
Oct  8 06:09:49 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:09:49.081565) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  8 06:09:49 np0005475493 ceph-mon[73572]: rocksdb: [db/flush_job.cc:856] [default] [JOB 25] Flushing memtable with next log file: 51
Oct  8 06:09:49 np0005475493 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759918189081617, "job": 25, "event": "flush_started", "num_memtables": 1, "num_entries": 473, "num_deletes": 258, "total_data_size": 422347, "memory_usage": 431576, "flush_reason": "Manual Compaction"}
Oct  8 06:09:49 np0005475493 ceph-mon[73572]: rocksdb: [db/flush_job.cc:885] [default] [JOB 25] Level-0 flush table #52: started
Oct  8 06:09:49 np0005475493 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759918189105997, "cf_name": "default", "job": 25, "event": "table_file_creation", "file_number": 52, "file_size": 418228, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 23362, "largest_seqno": 23834, "table_properties": {"data_size": 415596, "index_size": 668, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 901, "raw_key_size": 6267, "raw_average_key_size": 17, "raw_value_size": 410117, "raw_average_value_size": 1161, "num_data_blocks": 30, "num_entries": 353, "num_filter_entries": 353, "num_deletions": 258, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759918169, "oldest_key_time": 1759918169, "file_creation_time": 1759918189, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fe81d9b-468a-4413-adf1-4e4bd83383d4", "db_session_id": "KN4HYS7VUCE6V85JIQOU", "orig_file_number": 52, "seqno_to_time_mapping": "N/A"}}
Oct  8 06:09:49 np0005475493 ceph-mon[73572]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 25] Flush lasted 25019 microseconds, and 3482 cpu microseconds.
Oct  8 06:09:49 np0005475493 ceph-mon[73572]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  8 06:09:49 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:09:49.106586) [db/flush_job.cc:967] [default] [JOB 25] Level-0 flush table #52: 418228 bytes OK
Oct  8 06:09:49 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:09:49.106604) [db/memtable_list.cc:519] [default] Level-0 commit table #52 started
Oct  8 06:09:49 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:09:49.113082) [db/memtable_list.cc:722] [default] Level-0 commit table #52: memtable #1 done
Oct  8 06:09:49 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:09:49.113100) EVENT_LOG_v1 {"time_micros": 1759918189113095, "job": 25, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  8 06:09:49 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:09:49.113117) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  8 06:09:49 np0005475493 ceph-mon[73572]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 25] Try to delete WAL files size 419510, prev total WAL file size 419510, number of live WAL files 2.
Oct  8 06:09:49 np0005475493 ceph-mon[73572]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000048.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  8 06:09:49 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:09:49.113585) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00323532' seq:72057594037927935, type:22 .. '6C6F676D00353036' seq:0, type:0; will stop at (end)
Oct  8 06:09:49 np0005475493 ceph-mon[73572]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 26] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  8 06:09:49 np0005475493 ceph-mon[73572]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 25 Base level 0, inputs: [52(408KB)], [50(12MB)]
Oct  8 06:09:49 np0005475493 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759918189113621, "job": 26, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [52], "files_L6": [50], "score": -1, "input_data_size": 13201287, "oldest_snapshot_seqno": -1}
Oct  8 06:09:49 np0005475493 ceph-mon[73572]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 26] Generated table #53: 5356 keys, 13082972 bytes, temperature: kUnknown
Oct  8 06:09:49 np0005475493 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759918189280650, "cf_name": "default", "job": 26, "event": "table_file_creation", "file_number": 53, "file_size": 13082972, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13047165, "index_size": 21297, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13445, "raw_key_size": 137022, "raw_average_key_size": 25, "raw_value_size": 12950191, "raw_average_value_size": 2417, "num_data_blocks": 866, "num_entries": 5356, "num_filter_entries": 5356, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759916581, "oldest_key_time": 0, "file_creation_time": 1759918189, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fe81d9b-468a-4413-adf1-4e4bd83383d4", "db_session_id": "KN4HYS7VUCE6V85JIQOU", "orig_file_number": 53, "seqno_to_time_mapping": "N/A"}}
Oct  8 06:09:49 np0005475493 ceph-mon[73572]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  8 06:09:49 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:09:49.280886) [db/compaction/compaction_job.cc:1663] [default] [JOB 26] Compacted 1@0 + 1@6 files to L6 => 13082972 bytes
Oct  8 06:09:49 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:09:49.299203) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 79.0 rd, 78.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.4, 12.2 +0.0 blob) out(12.5 +0.0 blob), read-write-amplify(62.8) write-amplify(31.3) OK, records in: 5884, records dropped: 528 output_compression: NoCompression
Oct  8 06:09:49 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:09:49.299238) EVENT_LOG_v1 {"time_micros": 1759918189299225, "job": 26, "event": "compaction_finished", "compaction_time_micros": 167094, "compaction_time_cpu_micros": 25841, "output_level": 6, "num_output_files": 1, "total_output_size": 13082972, "num_input_records": 5884, "num_output_records": 5356, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  8 06:09:49 np0005475493 ceph-mon[73572]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000052.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  8 06:09:49 np0005475493 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759918189299469, "job": 26, "event": "table_file_deletion", "file_number": 52}
Oct  8 06:09:49 np0005475493 ceph-mon[73572]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000050.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  8 06:09:49 np0005475493 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759918189301647, "job": 26, "event": "table_file_deletion", "file_number": 50}
Oct  8 06:09:49 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:09:49.113495) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  8 06:09:49 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:09:49.301674) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  8 06:09:49 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:09:49.301690) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  8 06:09:49 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:09:49.301692) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  8 06:09:49 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:09:49.301694) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  8 06:09:49 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:09:49.301695) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  8 06:09:49 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  8 06:09:49 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  8 06:09:49 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct  8 06:09:49 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  8 06:09:49 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct  8 06:09:49 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:09:49 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct  8 06:09:49 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:09:49 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct  8 06:09:49 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  8 06:09:49 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct  8 06:09:49 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  8 06:09:49 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  8 06:09:49 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  8 06:09:49 np0005475493 podman[268096]: 2025-10-08 10:09:49.929312022 +0000 UTC m=+0.104923987 container create c1efa525557cd209242c0a911ee54c8fb03bea77cd1621bbcecc7521e8c1fe30 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_curran, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct  8 06:09:49 np0005475493 podman[268096]: 2025-10-08 10:09:49.846807712 +0000 UTC m=+0.022419697 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 06:09:49 np0005475493 systemd[1]: Started libpod-conmon-c1efa525557cd209242c0a911ee54c8fb03bea77cd1621bbcecc7521e8c1fe30.scope.
Oct  8 06:09:50 np0005475493 systemd[1]: Started libcrun container.
Oct  8 06:09:50 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:09:50 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac70001230 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:09:50 np0005475493 podman[268096]: 2025-10-08 10:09:50.066801749 +0000 UTC m=+0.242413734 container init c1efa525557cd209242c0a911ee54c8fb03bea77cd1621bbcecc7521e8c1fe30 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_curran, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  8 06:09:50 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v757: 353 pgs: 353 active+clean; 88 MiB data, 216 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 941 KiB/s wr, 97 op/s
Oct  8 06:09:50 np0005475493 podman[268096]: 2025-10-08 10:09:50.074815753 +0000 UTC m=+0.250427718 container start c1efa525557cd209242c0a911ee54c8fb03bea77cd1621bbcecc7521e8c1fe30 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_curran, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1)
Oct  8 06:09:50 np0005475493 gifted_curran[268113]: 167 167
Oct  8 06:09:50 np0005475493 systemd[1]: libpod-c1efa525557cd209242c0a911ee54c8fb03bea77cd1621bbcecc7521e8c1fe30.scope: Deactivated successfully.
Oct  8 06:09:50 np0005475493 podman[268096]: 2025-10-08 10:09:50.09970399 +0000 UTC m=+0.275315975 container attach c1efa525557cd209242c0a911ee54c8fb03bea77cd1621bbcecc7521e8c1fe30 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_curran, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid)
Oct  8 06:09:50 np0005475493 podman[268096]: 2025-10-08 10:09:50.100778285 +0000 UTC m=+0.276390250 container died c1efa525557cd209242c0a911ee54c8fb03bea77cd1621bbcecc7521e8c1fe30 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_curran, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325)
Oct  8 06:09:50 np0005475493 systemd[1]: var-lib-containers-storage-overlay-4bff23ae85cadcea67b9a8fd01ae06d0e9cd13e2ed00f635413abbf389a1a1b2-merged.mount: Deactivated successfully.
Oct  8 06:09:50 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  8 06:09:50 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:09:50 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:09:50 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  8 06:09:50 np0005475493 podman[268096]: 2025-10-08 10:09:50.330318126 +0000 UTC m=+0.505930091 container remove c1efa525557cd209242c0a911ee54c8fb03bea77cd1621bbcecc7521e8c1fe30 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_curran, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct  8 06:09:50 np0005475493 systemd[1]: libpod-conmon-c1efa525557cd209242c0a911ee54c8fb03bea77cd1621bbcecc7521e8c1fe30.scope: Deactivated successfully.
Oct  8 06:09:50 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:09:50 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac78003860 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:09:50 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:09:50 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:09:50 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:09:50.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:09:50 np0005475493 podman[268138]: 2025-10-08 10:09:50.5136892 +0000 UTC m=+0.039284792 container create a43ae329c53cb7057cd460b51d7b809a0b2b094e572e24c182a6ee5adb0f6cdb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_moser, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct  8 06:09:50 np0005475493 systemd[1]: Started libpod-conmon-a43ae329c53cb7057cd460b51d7b809a0b2b094e572e24c182a6ee5adb0f6cdb.scope.
Oct  8 06:09:50 np0005475493 systemd[1]: Started libcrun container.
Oct  8 06:09:50 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/133968619cb2e81dfd4dc97c343e54606a44d5bc2293af7e90238296ba216b9a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  8 06:09:50 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/133968619cb2e81dfd4dc97c343e54606a44d5bc2293af7e90238296ba216b9a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 06:09:50 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/133968619cb2e81dfd4dc97c343e54606a44d5bc2293af7e90238296ba216b9a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 06:09:50 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/133968619cb2e81dfd4dc97c343e54606a44d5bc2293af7e90238296ba216b9a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  8 06:09:50 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/133968619cb2e81dfd4dc97c343e54606a44d5bc2293af7e90238296ba216b9a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  8 06:09:50 np0005475493 podman[268138]: 2025-10-08 10:09:50.589755439 +0000 UTC m=+0.115351051 container init a43ae329c53cb7057cd460b51d7b809a0b2b094e572e24c182a6ee5adb0f6cdb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_moser, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct  8 06:09:50 np0005475493 podman[268138]: 2025-10-08 10:09:50.496776434 +0000 UTC m=+0.022372046 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 06:09:50 np0005475493 podman[268138]: 2025-10-08 10:09:50.602104404 +0000 UTC m=+0.127699996 container start a43ae329c53cb7057cd460b51d7b809a0b2b094e572e24c182a6ee5adb0f6cdb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_moser, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct  8 06:09:50 np0005475493 podman[268138]: 2025-10-08 10:09:50.60502258 +0000 UTC m=+0.130618172 container attach a43ae329c53cb7057cd460b51d7b809a0b2b094e572e24c182a6ee5adb0f6cdb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_moser, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct  8 06:09:50 np0005475493 hungry_moser[268155]: --> passed data devices: 0 physical, 1 LVM
Oct  8 06:09:50 np0005475493 hungry_moser[268155]: --> All data devices are unavailable
Oct  8 06:09:50 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:09:50 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca0009ad0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:09:50 np0005475493 systemd[1]: libpod-a43ae329c53cb7057cd460b51d7b809a0b2b094e572e24c182a6ee5adb0f6cdb.scope: Deactivated successfully.
Oct  8 06:09:50 np0005475493 podman[268138]: 2025-10-08 10:09:50.930846024 +0000 UTC m=+0.456441616 container died a43ae329c53cb7057cd460b51d7b809a0b2b094e572e24c182a6ee5adb0f6cdb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_moser, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  8 06:09:50 np0005475493 systemd[1]: var-lib-containers-storage-overlay-133968619cb2e81dfd4dc97c343e54606a44d5bc2293af7e90238296ba216b9a-merged.mount: Deactivated successfully.
Oct  8 06:09:50 np0005475493 podman[268138]: 2025-10-08 10:09:50.970914309 +0000 UTC m=+0.496509901 container remove a43ae329c53cb7057cd460b51d7b809a0b2b094e572e24c182a6ee5adb0f6cdb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_moser, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Oct  8 06:09:50 np0005475493 systemd[1]: libpod-conmon-a43ae329c53cb7057cd460b51d7b809a0b2b094e572e24c182a6ee5adb0f6cdb.scope: Deactivated successfully.
Oct  8 06:09:51 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:09:51 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:09:51 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:09:51.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:09:51 np0005475493 nova_compute[262220]: 2025-10-08 10:09:51.549 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:09:51 np0005475493 podman[268272]: 2025-10-08 10:09:51.614954146 +0000 UTC m=+0.044211473 container create 4ff2a80642af05d532207a9a0f8b9a5990f0276d526fd6ceacc6895f82d8da0c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_curran, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct  8 06:09:51 np0005475493 systemd[1]: Started libpod-conmon-4ff2a80642af05d532207a9a0f8b9a5990f0276d526fd6ceacc6895f82d8da0c.scope.
Oct  8 06:09:51 np0005475493 systemd[1]: Started libcrun container.
Oct  8 06:09:51 np0005475493 podman[268272]: 2025-10-08 10:09:51.595923371 +0000 UTC m=+0.025180728 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 06:09:51 np0005475493 podman[268272]: 2025-10-08 10:09:51.694992486 +0000 UTC m=+0.124249833 container init 4ff2a80642af05d532207a9a0f8b9a5990f0276d526fd6ceacc6895f82d8da0c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_curran, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid)
Oct  8 06:09:51 np0005475493 podman[268272]: 2025-10-08 10:09:51.701748207 +0000 UTC m=+0.131005534 container start 4ff2a80642af05d532207a9a0f8b9a5990f0276d526fd6ceacc6895f82d8da0c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_curran, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  8 06:09:51 np0005475493 podman[268272]: 2025-10-08 10:09:51.70485668 +0000 UTC m=+0.134114027 container attach 4ff2a80642af05d532207a9a0f8b9a5990f0276d526fd6ceacc6895f82d8da0c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_curran, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Oct  8 06:09:51 np0005475493 hardcore_curran[268289]: 167 167
Oct  8 06:09:51 np0005475493 systemd[1]: libpod-4ff2a80642af05d532207a9a0f8b9a5990f0276d526fd6ceacc6895f82d8da0c.scope: Deactivated successfully.
Oct  8 06:09:51 np0005475493 podman[268272]: 2025-10-08 10:09:51.707265349 +0000 UTC m=+0.136522686 container died 4ff2a80642af05d532207a9a0f8b9a5990f0276d526fd6ceacc6895f82d8da0c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_curran, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True)
Oct  8 06:09:51 np0005475493 systemd[1]: var-lib-containers-storage-overlay-d6d1b3449f77e13c23d7dc8481df2b3987932b3962fe1fe8be3ff0b39f2d5fdc-merged.mount: Deactivated successfully.
Oct  8 06:09:51 np0005475493 podman[268272]: 2025-10-08 10:09:51.747446419 +0000 UTC m=+0.176703746 container remove 4ff2a80642af05d532207a9a0f8b9a5990f0276d526fd6ceacc6895f82d8da0c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_curran, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  8 06:09:51 np0005475493 systemd[1]: libpod-conmon-4ff2a80642af05d532207a9a0f8b9a5990f0276d526fd6ceacc6895f82d8da0c.scope: Deactivated successfully.
Oct  8 06:09:51 np0005475493 podman[268312]: 2025-10-08 10:09:51.93438824 +0000 UTC m=+0.053851250 container create 7fc0bcea39185536385cf4c952793d8f75f2b3d9cf6becb255ec1e708566f31e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_curran, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Oct  8 06:09:51 np0005475493 systemd[1]: Started libpod-conmon-7fc0bcea39185536385cf4c952793d8f75f2b3d9cf6becb255ec1e708566f31e.scope.
Oct  8 06:09:51 np0005475493 systemd[1]: Started libcrun container.
Oct  8 06:09:51 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9037dea98a69426fc1ab52681638b3fcff3f5a41f16d2bce1ed25ba0f93604d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  8 06:09:51 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9037dea98a69426fc1ab52681638b3fcff3f5a41f16d2bce1ed25ba0f93604d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 06:09:51 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9037dea98a69426fc1ab52681638b3fcff3f5a41f16d2bce1ed25ba0f93604d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 06:09:51 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9037dea98a69426fc1ab52681638b3fcff3f5a41f16d2bce1ed25ba0f93604d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  8 06:09:51 np0005475493 podman[268312]: 2025-10-08 10:09:51.905001525 +0000 UTC m=+0.024464565 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 06:09:52 np0005475493 podman[268312]: 2025-10-08 10:09:52.011330318 +0000 UTC m=+0.130793348 container init 7fc0bcea39185536385cf4c952793d8f75f2b3d9cf6becb255ec1e708566f31e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_curran, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct  8 06:09:52 np0005475493 podman[268312]: 2025-10-08 10:09:52.019584799 +0000 UTC m=+0.139047799 container start 7fc0bcea39185536385cf4c952793d8f75f2b3d9cf6becb255ec1e708566f31e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_curran, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  8 06:09:52 np0005475493 podman[268312]: 2025-10-08 10:09:52.022808755 +0000 UTC m=+0.142271755 container attach 7fc0bcea39185536385cf4c952793d8f75f2b3d9cf6becb255ec1e708566f31e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_curran, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct  8 06:09:52 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:09:52 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac70001230 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:09:52 np0005475493 podman[268327]: 2025-10-08 10:09:52.072654202 +0000 UTC m=+0.097880846 container health_status 2ac58bf9d2c7421f24478941b0f23af12cc30298ac84467db16bf52ecb1157c3 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, config_id=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid)
Oct  8 06:09:52 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v758: 353 pgs: 353 active+clean; 88 MiB data, 216 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 941 KiB/s wr, 97 op/s
Oct  8 06:09:52 np0005475493 gracious_curran[268331]: {
Oct  8 06:09:52 np0005475493 gracious_curran[268331]:    "1": [
Oct  8 06:09:52 np0005475493 gracious_curran[268331]:        {
Oct  8 06:09:52 np0005475493 gracious_curran[268331]:            "devices": [
Oct  8 06:09:52 np0005475493 gracious_curran[268331]:                "/dev/loop3"
Oct  8 06:09:52 np0005475493 gracious_curran[268331]:            ],
Oct  8 06:09:52 np0005475493 gracious_curran[268331]:            "lv_name": "ceph_lv0",
Oct  8 06:09:52 np0005475493 gracious_curran[268331]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  8 06:09:52 np0005475493 gracious_curran[268331]:            "lv_size": "21470642176",
Oct  8 06:09:52 np0005475493 gracious_curran[268331]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=787292cc-8154-50c4-9e00-e9be3e817149,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=85fe3e7b-5e0f-4a19-934c-310215b2e933,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct  8 06:09:52 np0005475493 gracious_curran[268331]:            "lv_uuid": "znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj",
Oct  8 06:09:52 np0005475493 gracious_curran[268331]:            "name": "ceph_lv0",
Oct  8 06:09:52 np0005475493 gracious_curran[268331]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  8 06:09:52 np0005475493 gracious_curran[268331]:            "tags": {
Oct  8 06:09:52 np0005475493 gracious_curran[268331]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  8 06:09:52 np0005475493 gracious_curran[268331]:                "ceph.block_uuid": "znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj",
Oct  8 06:09:52 np0005475493 gracious_curran[268331]:                "ceph.cephx_lockbox_secret": "",
Oct  8 06:09:52 np0005475493 gracious_curran[268331]:                "ceph.cluster_fsid": "787292cc-8154-50c4-9e00-e9be3e817149",
Oct  8 06:09:52 np0005475493 gracious_curran[268331]:                "ceph.cluster_name": "ceph",
Oct  8 06:09:52 np0005475493 gracious_curran[268331]:                "ceph.crush_device_class": "",
Oct  8 06:09:52 np0005475493 gracious_curran[268331]:                "ceph.encrypted": "0",
Oct  8 06:09:52 np0005475493 gracious_curran[268331]:                "ceph.osd_fsid": "85fe3e7b-5e0f-4a19-934c-310215b2e933",
Oct  8 06:09:52 np0005475493 gracious_curran[268331]:                "ceph.osd_id": "1",
Oct  8 06:09:52 np0005475493 gracious_curran[268331]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  8 06:09:52 np0005475493 gracious_curran[268331]:                "ceph.type": "block",
Oct  8 06:09:52 np0005475493 gracious_curran[268331]:                "ceph.vdo": "0",
Oct  8 06:09:52 np0005475493 gracious_curran[268331]:                "ceph.with_tpm": "0"
Oct  8 06:09:52 np0005475493 gracious_curran[268331]:            },
Oct  8 06:09:52 np0005475493 gracious_curran[268331]:            "type": "block",
Oct  8 06:09:52 np0005475493 gracious_curran[268331]:            "vg_name": "ceph_vg0"
Oct  8 06:09:52 np0005475493 gracious_curran[268331]:        }
Oct  8 06:09:52 np0005475493 gracious_curran[268331]:    ]
Oct  8 06:09:52 np0005475493 gracious_curran[268331]: }
Oct  8 06:09:52 np0005475493 systemd[1]: libpod-7fc0bcea39185536385cf4c952793d8f75f2b3d9cf6becb255ec1e708566f31e.scope: Deactivated successfully.
Oct  8 06:09:52 np0005475493 podman[268312]: 2025-10-08 10:09:52.318935362 +0000 UTC m=+0.438398372 container died 7fc0bcea39185536385cf4c952793d8f75f2b3d9cf6becb255ec1e708566f31e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_curran, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct  8 06:09:52 np0005475493 systemd[1]: var-lib-containers-storage-overlay-a9037dea98a69426fc1ab52681638b3fcff3f5a41f16d2bce1ed25ba0f93604d-merged.mount: Deactivated successfully.
Oct  8 06:09:52 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:09:52 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac70001230 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:09:52 np0005475493 podman[268312]: 2025-10-08 10:09:52.364164188 +0000 UTC m=+0.483627198 container remove 7fc0bcea39185536385cf4c952793d8f75f2b3d9cf6becb255ec1e708566f31e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_curran, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0)
Oct  8 06:09:52 np0005475493 systemd[1]: libpod-conmon-7fc0bcea39185536385cf4c952793d8f75f2b3d9cf6becb255ec1e708566f31e.scope: Deactivated successfully.
Oct  8 06:09:52 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:09:52 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:09:52 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:09:52.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:09:52 np0005475493 nova_compute[262220]: 2025-10-08 10:09:52.747 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:09:52 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:09:52 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac78003860 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:09:52 np0005475493 podman[268461]: 2025-10-08 10:09:52.942155825 +0000 UTC m=+0.039181568 container create f6cbce94631aeca3396cc20d60e6dc467b210da80c8c59964a9e63db7e209813 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_fermat, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  8 06:09:52 np0005475493 systemd[1]: Started libpod-conmon-f6cbce94631aeca3396cc20d60e6dc467b210da80c8c59964a9e63db7e209813.scope.
Oct  8 06:09:53 np0005475493 systemd[1]: Started libcrun container.
Oct  8 06:09:53 np0005475493 podman[268461]: 2025-10-08 10:09:52.924781395 +0000 UTC m=+0.021807148 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 06:09:53 np0005475493 podman[268461]: 2025-10-08 10:09:53.021627206 +0000 UTC m=+0.118652989 container init f6cbce94631aeca3396cc20d60e6dc467b210da80c8c59964a9e63db7e209813 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_fermat, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct  8 06:09:53 np0005475493 podman[268461]: 2025-10-08 10:09:53.028623985 +0000 UTC m=+0.125649738 container start f6cbce94631aeca3396cc20d60e6dc467b210da80c8c59964a9e63db7e209813 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_fermat, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True)
Oct  8 06:09:53 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:09:53 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:09:53 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:09:53.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:09:53 np0005475493 laughing_fermat[268478]: 167 167
Oct  8 06:09:53 np0005475493 systemd[1]: libpod-f6cbce94631aeca3396cc20d60e6dc467b210da80c8c59964a9e63db7e209813.scope: Deactivated successfully.
Oct  8 06:09:53 np0005475493 podman[268461]: 2025-10-08 10:09:53.036575777 +0000 UTC m=+0.133601560 container attach f6cbce94631aeca3396cc20d60e6dc467b210da80c8c59964a9e63db7e209813 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_fermat, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  8 06:09:53 np0005475493 podman[268461]: 2025-10-08 10:09:53.037453276 +0000 UTC m=+0.134479049 container died f6cbce94631aeca3396cc20d60e6dc467b210da80c8c59964a9e63db7e209813 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_fermat, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  8 06:09:53 np0005475493 systemd[1]: var-lib-containers-storage-overlay-6b915765bc985c3c203b4caf38d86323411ada4a3460db64142cb16d3b803c87-merged.mount: Deactivated successfully.
Oct  8 06:09:53 np0005475493 podman[268461]: 2025-10-08 10:09:53.226542358 +0000 UTC m=+0.323568111 container remove f6cbce94631aeca3396cc20d60e6dc467b210da80c8c59964a9e63db7e209813 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_fermat, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct  8 06:09:53 np0005475493 systemd[1]: libpod-conmon-f6cbce94631aeca3396cc20d60e6dc467b210da80c8c59964a9e63db7e209813.scope: Deactivated successfully.
Oct  8 06:09:53 np0005475493 podman[268503]: 2025-10-08 10:09:53.415614558 +0000 UTC m=+0.062392341 container create c5f039d40878f5f7b0e9764c3ac991f4da466a451a82c6f19cbdde6aee2490b4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_blackwell, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct  8 06:09:53 np0005475493 systemd[1]: Started libpod-conmon-c5f039d40878f5f7b0e9764c3ac991f4da466a451a82c6f19cbdde6aee2490b4.scope.
Oct  8 06:09:53 np0005475493 podman[268503]: 2025-10-08 10:09:53.378325033 +0000 UTC m=+0.025102836 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 06:09:53 np0005475493 systemd[1]: Started libcrun container.
Oct  8 06:09:53 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5bac9c31dca9f09d701d3fff154e2bdf5fc803a9db2b6d38be32cdcb15ef5e3c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  8 06:09:53 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5bac9c31dca9f09d701d3fff154e2bdf5fc803a9db2b6d38be32cdcb15ef5e3c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 06:09:53 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5bac9c31dca9f09d701d3fff154e2bdf5fc803a9db2b6d38be32cdcb15ef5e3c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 06:09:53 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5bac9c31dca9f09d701d3fff154e2bdf5fc803a9db2b6d38be32cdcb15ef5e3c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  8 06:09:53 np0005475493 podman[268503]: 2025-10-08 10:09:53.511187758 +0000 UTC m=+0.157965561 container init c5f039d40878f5f7b0e9764c3ac991f4da466a451a82c6f19cbdde6aee2490b4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_blackwell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Oct  8 06:09:53 np0005475493 podman[268503]: 2025-10-08 10:09:53.519543053 +0000 UTC m=+0.166320826 container start c5f039d40878f5f7b0e9764c3ac991f4da466a451a82c6f19cbdde6aee2490b4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_blackwell, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  8 06:09:53 np0005475493 podman[268503]: 2025-10-08 10:09:53.523088359 +0000 UTC m=+0.169866142 container attach c5f039d40878f5f7b0e9764c3ac991f4da466a451a82c6f19cbdde6aee2490b4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_blackwell, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  8 06:09:54 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:09:54 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca0009ad0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:09:54 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:09:54 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v759: 353 pgs: 353 active+clean; 88 MiB data, 216 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 76 op/s
Oct  8 06:09:54 np0005475493 lvm[268594]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct  8 06:09:54 np0005475493 lvm[268594]: VG ceph_vg0 finished
Oct  8 06:09:54 np0005475493 hungry_blackwell[268519]: {}
Oct  8 06:09:54 np0005475493 systemd[1]: libpod-c5f039d40878f5f7b0e9764c3ac991f4da466a451a82c6f19cbdde6aee2490b4.scope: Deactivated successfully.
Oct  8 06:09:54 np0005475493 systemd[1]: libpod-c5f039d40878f5f7b0e9764c3ac991f4da466a451a82c6f19cbdde6aee2490b4.scope: Consumed 1.068s CPU time.
Oct  8 06:09:54 np0005475493 podman[268503]: 2025-10-08 10:09:54.248718967 +0000 UTC m=+0.895496770 container died c5f039d40878f5f7b0e9764c3ac991f4da466a451a82c6f19cbdde6aee2490b4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_blackwell, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1)
Oct  8 06:09:54 np0005475493 systemd[1]: var-lib-containers-storage-overlay-5bac9c31dca9f09d701d3fff154e2bdf5fc803a9db2b6d38be32cdcb15ef5e3c-merged.mount: Deactivated successfully.
Oct  8 06:09:54 np0005475493 podman[268503]: 2025-10-08 10:09:54.330986518 +0000 UTC m=+0.977764301 container remove c5f039d40878f5f7b0e9764c3ac991f4da466a451a82c6f19cbdde6aee2490b4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_blackwell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  8 06:09:54 np0005475493 systemd[1]: libpod-conmon-c5f039d40878f5f7b0e9764c3ac991f4da466a451a82c6f19cbdde6aee2490b4.scope: Deactivated successfully.
Oct  8 06:09:54 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:09:54 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac70001230 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:09:54 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  8 06:09:54 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:09:54 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:09:54 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:09:54.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:09:54 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:09:54 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  8 06:09:54 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:09:54 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:09:54 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac70001230 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:09:55 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:09:55 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:09:55 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:09:55.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:09:55 np0005475493 ceph-osd[81751]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Oct  8 06:09:55 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:09:55 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:09:55 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:09:55] "GET /metrics HTTP/1.1" 200 48453 "" "Prometheus/2.51.0"
Oct  8 06:09:55 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:09:55] "GET /metrics HTTP/1.1" 200 48453 "" "Prometheus/2.51.0"
Oct  8 06:09:56 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:09:56 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac78003860 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:09:56 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v760: 353 pgs: 353 active+clean; 88 MiB data, 216 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 64 op/s
Oct  8 06:09:56 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:09:56 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca0009ad0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:09:56 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:09:56 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:09:56 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:09:56.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:09:56 np0005475493 nova_compute[262220]: 2025-10-08 10:09:56.551 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:09:56 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:09:56 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac70001230 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:09:57 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:09:57 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000034s ======
Oct  8 06:09:57 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:09:57.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct  8 06:09:57 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:09:57.120Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:09:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:09:57.408 163175 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  8 06:09:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:09:57.408 163175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  8 06:09:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:09:57.409 163175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  8 06:09:57 np0005475493 ovn_controller[153187]: 2025-10-08T10:09:57Z|00004|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:9d:d1:5c 10.100.0.6
Oct  8 06:09:57 np0005475493 ovn_controller[153187]: 2025-10-08T10:09:57Z|00005|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:9d:d1:5c 10.100.0.6
Oct  8 06:09:57 np0005475493 nova_compute[262220]: 2025-10-08 10:09:57.749 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:09:58 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:09:58 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac94002980 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:09:58 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v761: 353 pgs: 353 active+clean; 88 MiB data, 216 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 64 op/s
Oct  8 06:09:58 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:09:58 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac78003860 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:09:58 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:09:58 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:09:58 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:09:58.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:09:58 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:09:58 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca0009ad0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:09:59 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:09:59 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:09:59 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:09:59.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:09:59 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:10:00 np0005475493 ceph-mon[73572]: log_channel(cluster) log [INF] : overall HEALTH_OK
Oct  8 06:10:00 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:10:00 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac70001230 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:10:00 np0005475493 ceph-mon[73572]: overall HEALTH_OK
Oct  8 06:10:00 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v762: 353 pgs: 353 active+clean; 121 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 127 op/s
Oct  8 06:10:00 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:10:00 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac94002980 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:10:00 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:10:00 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:10:00 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:10:00.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:10:00 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:10:00 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac78003860 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:10:01 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:10:01 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:10:01 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:10:01.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:10:01 np0005475493 nova_compute[262220]: 2025-10-08 10:10:01.553 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:10:02 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:10:02 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca0009ad0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:10:02 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v763: 353 pgs: 353 active+clean; 121 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct  8 06:10:02 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:10:02 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac70001230 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:10:02 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:10:02 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:10:02 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:10:02.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:10:02 np0005475493 nova_compute[262220]: 2025-10-08 10:10:02.751 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:10:02 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  8 06:10:02 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  8 06:10:02 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:10:02 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac94003aa0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:10:03 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:10:03 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:10:03 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:10:03.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:10:03 np0005475493 nova_compute[262220]: 2025-10-08 10:10:03.459 2 INFO nova.compute.manager [None req-2cee94aa-4cf6-4621-b8e5-1fd66eab24e8 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: f49b788e-70d1-4bc2-9f90-381017f2b232] Get console output#033[00m
Oct  8 06:10:03 np0005475493 nova_compute[262220]: 2025-10-08 10:10:03.465 2 INFO oslo.privsep.daemon [None req-2cee94aa-4cf6-4621-b8e5-1fd66eab24e8 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'nova.privsep.sys_admin_pctxt', '--privsep_sock_path', '/tmp/tmpyuw02l4u/privsep.sock']#033[00m
Oct  8 06:10:04 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:10:04 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac78003860 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:10:04 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:10:04 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v764: 353 pgs: 353 active+clean; 121 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct  8 06:10:04 np0005475493 nova_compute[262220]: 2025-10-08 10:10:04.198 2 INFO oslo.privsep.daemon [None req-2cee94aa-4cf6-4621-b8e5-1fd66eab24e8 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Spawned new privsep daemon via rootwrap#033[00m
Oct  8 06:10:04 np0005475493 nova_compute[262220]: 2025-10-08 10:10:04.053 631 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Oct  8 06:10:04 np0005475493 nova_compute[262220]: 2025-10-08 10:10:04.057 631 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Oct  8 06:10:04 np0005475493 nova_compute[262220]: 2025-10-08 10:10:04.059 631 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/none#033[00m
Oct  8 06:10:04 np0005475493 nova_compute[262220]: 2025-10-08 10:10:04.059 631 INFO oslo.privsep.daemon [-] privsep daemon running as pid 631#033[00m
Oct  8 06:10:04 np0005475493 nova_compute[262220]: 2025-10-08 10:10:04.306 631 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Oct  8 06:10:04 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:10:04 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca0009ad0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:10:04 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:10:04 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:10:04 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:10:04.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:10:04 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:10:04 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac70001230 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:10:04 np0005475493 podman[268654]: 2025-10-08 10:10:04.937373322 +0000 UTC m=+0.098661412 container health_status 750e81e9592502f2fa35d047c8ae9bd75eb38ed415148db558454f5281397e7f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller)
Oct  8 06:10:05 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:10:05 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:10:05 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:10:05.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:10:05 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:10:05] "GET /metrics HTTP/1.1" 200 48462 "" "Prometheus/2.51.0"
Oct  8 06:10:05 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:10:05] "GET /metrics HTTP/1.1" 200 48462 "" "Prometheus/2.51.0"
Oct  8 06:10:06 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:10:06 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac94003aa0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:10:06 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-haproxy-nfs-cephfs-compute-0-cwhopp[96588]: [WARNING] 280/101006 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct  8 06:10:06 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v765: 353 pgs: 353 active+clean; 121 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct  8 06:10:06 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:10:06 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac78003860 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:10:06 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:10:06 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:10:06 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:10:06.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:10:06 np0005475493 nova_compute[262220]: 2025-10-08 10:10:06.555 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:10:06 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:10:06 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac70001230 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:10:07 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:10:07 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:10:07 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:10:07.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:10:07 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:10:07.121Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:10:07 np0005475493 nova_compute[262220]: 2025-10-08 10:10:07.753 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:10:08 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:10:08 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac70001230 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:10:08 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v766: 353 pgs: 353 active+clean; 121 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct  8 06:10:08 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:10:08 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac94003aa0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:10:08 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:10:08 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:10:08 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:10:08.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:10:08 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:10:08 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac78003860 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:10:09 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:10:09 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:10:09 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:10:09.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:10:09 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:10:10 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:10:10 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac70001230 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:10:10 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v767: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 333 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct  8 06:10:10 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:10:10 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac70001230 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:10:10 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:10:10 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:10:10 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:10:10.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:10:10 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:10:10 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90002490 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:10:11 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:10:11 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:10:11 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:10:11.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:10:11 np0005475493 nova_compute[262220]: 2025-10-08 10:10:11.557 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:10:11 np0005475493 podman[268713]: 2025-10-08 10:10:11.90282596 +0000 UTC m=+0.064258772 container health_status 96c17e36c3e57b043a764fc4d64e20610b8683e4881cc1857643eca7aeb37784 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Oct  8 06:10:12 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:10:12 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac94003aa0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:10:12 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v768: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 6.2 KiB/s rd, 16 KiB/s wr, 1 op/s
Oct  8 06:10:12 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:10:12 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac70001230 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:10:12 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:10:12 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:10:12 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:10:12.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:10:12 np0005475493 nova_compute[262220]: 2025-10-08 10:10:12.759 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:10:12 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:10:12 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c002550 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:10:12 np0005475493 podman[268735]: 2025-10-08 10:10:12.930083295 +0000 UTC m=+0.074624823 container health_status 1ece418ccdf19a8019e8be8e665c938dc3c333817a211973536e2416f095c311 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct  8 06:10:13 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:10:13 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:10:13 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:10:13.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:10:14 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:10:14 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90002490 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:10:14 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:10:14 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v769: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 6.6 KiB/s rd, 16 KiB/s wr, 1 op/s
Oct  8 06:10:14 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:10:14 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  8 06:10:14 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:10:14 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac94003aa0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:10:14 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:10:14 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:10:14 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:10:14.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:10:14 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:10:14 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac70001230 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:10:15 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:10:15 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:10:15 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:10:15.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:10:15 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:10:15.306 163175 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=4, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '2a:d6:ca', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'b2:8b:1e:40:84:a3'}, ipsec=False) old=SB_Global(nb_cfg=3) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  8 06:10:15 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:10:15.307 163175 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  8 06:10:15 np0005475493 nova_compute[262220]: 2025-10-08 10:10:15.307 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:10:15 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:10:15] "GET /metrics HTTP/1.1" 200 48462 "" "Prometheus/2.51.0"
Oct  8 06:10:15 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:10:15] "GET /metrics HTTP/1.1" 200 48462 "" "Prometheus/2.51.0"
Oct  8 06:10:16 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:10:16 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c002550 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:10:16 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v770: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 6.6 KiB/s rd, 3.7 KiB/s wr, 1 op/s
Oct  8 06:10:16 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:10:16.309 163175 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=26869918-b723-425c-a2e1-0d697f3d0fec, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '4'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  8 06:10:16 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:10:16 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90002490 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:10:16 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:10:16 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:10:16 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:10:16.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:10:16 np0005475493 nova_compute[262220]: 2025-10-08 10:10:16.559 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:10:16 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:10:16 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac94003aa0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:10:17 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:10:17 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:10:17 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:10:17.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:10:17 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:10:17.122Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:10:17 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:10:17 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  8 06:10:17 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:10:17 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 06:10:17 np0005475493 nova_compute[262220]: 2025-10-08 10:10:17.757 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:10:17 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  8 06:10:17 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  8 06:10:17 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 06:10:17 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 06:10:18 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:10:18 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac70001230 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:10:18 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v771: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 6.6 KiB/s rd, 3.7 KiB/s wr, 1 op/s
Oct  8 06:10:18 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 06:10:18 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 06:10:18 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 06:10:18 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 06:10:18 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:10:18 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c002550 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:10:18 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:10:18 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:10:18 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:10:18.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:10:18 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:10:18 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90002490 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:10:19 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:10:19 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:10:19 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:10:19.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:10:19 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:10:19 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-haproxy-nfs-cephfs-compute-0-cwhopp[96588]: [WARNING] 280/101019 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct  8 06:10:20 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:10:20 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac94003aa0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:10:20 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v772: 353 pgs: 353 active+clean; 167 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 36 op/s
Oct  8 06:10:20 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:10:20 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac70001230 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:10:20 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:10:20 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:10:20 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:10:20.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:10:20 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:10:20 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c002550 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:10:21 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:10:21 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:10:21 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:10:21.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:10:21 np0005475493 nova_compute[262220]: 2025-10-08 10:10:21.561 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:10:22 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:10:22 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90002490 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:10:22 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v773: 353 pgs: 353 active+clean; 167 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 35 op/s
Oct  8 06:10:22 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:10:22 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac94003aa0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:10:22 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:10:22 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:10:22 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:10:22.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:10:22 np0005475493 nova_compute[262220]: 2025-10-08 10:10:22.759 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:10:22 np0005475493 podman[268767]: 2025-10-08 10:10:22.900710392 +0000 UTC m=+0.057397936 container health_status 2ac58bf9d2c7421f24478941b0f23af12cc30298ac84467db16bf52ecb1157c3 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=iscsid, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  8 06:10:22 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:10:22 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac70001230 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:10:23 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:10:23 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:10:23 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:10:23.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:10:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:10:24 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c003870 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:10:24 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:10:24 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v774: 353 pgs: 353 active+clean; 167 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 1.8 MiB/s wr, 109 op/s
Oct  8 06:10:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:10:24 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac900036f0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:10:24 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:10:24 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:10:24 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:10:24.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:10:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:10:24 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac94003aa0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:10:25 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:10:25 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:10:25 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:10:25.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:10:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:10:25 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 06:10:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:10:25 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 06:10:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:10:25] "GET /metrics HTTP/1.1" 200 48464 "" "Prometheus/2.51.0"
Oct  8 06:10:25 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:10:25] "GET /metrics HTTP/1.1" 200 48464 "" "Prometheus/2.51.0"
Oct  8 06:10:26 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:10:26 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac70001230 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:10:26 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v775: 353 pgs: 353 active+clean; 167 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 1.8 MiB/s wr, 109 op/s
Oct  8 06:10:26 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:10:26 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c003870 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:10:26 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:10:26 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:10:26 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:10:26.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:10:26 np0005475493 nova_compute[262220]: 2025-10-08 10:10:26.563 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:10:26 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:10:26 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac900036f0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:10:27 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:10:27 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:10:27 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:10:27.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:10:27 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:10:27.123Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct  8 06:10:27 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:10:27.123Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct  8 06:10:27 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:10:27.124Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct  8 06:10:27 np0005475493 nova_compute[262220]: 2025-10-08 10:10:27.763 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:10:28 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:10:28 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac94003aa0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:10:28 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v776: 353 pgs: 353 active+clean; 167 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 1.8 MiB/s wr, 109 op/s
Oct  8 06:10:28 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:10:28 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct  8 06:10:28 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:10:28 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac70001230 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:10:28 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:10:28 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:10:28 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:10:28.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:10:28 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:10:28 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c003870 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:10:29 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:10:29 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:10:29 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:10:29 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:10:29.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:10:29 np0005475493 nova_compute[262220]: 2025-10-08 10:10:29.421 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:10:30 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:10:30 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac900036f0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:10:30 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v777: 353 pgs: 353 active+clean; 167 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 1.8 MiB/s wr, 111 op/s
Oct  8 06:10:30 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:10:30 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac94003aa0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:10:30 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:10:30 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:10:30 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:10:30.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:10:30 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:10:30 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac70001230 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:10:31 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:10:31 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:10:31 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:10:31.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:10:31 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:10:31 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  8 06:10:31 np0005475493 nova_compute[262220]: 2025-10-08 10:10:31.623 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:10:32 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:10:32 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c003870 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:10:32 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v778: 353 pgs: 353 active+clean; 167 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 76 op/s
Oct  8 06:10:32 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:10:32 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90004790 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:10:32 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:10:32 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:10:32 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:10:32.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:10:32 np0005475493 nova_compute[262220]: 2025-10-08 10:10:32.764 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:10:32 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  8 06:10:32 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  8 06:10:32 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:10:32 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac94003aa0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:10:33 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:10:33 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:10:33 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:10:33.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:10:34 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:10:34 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac70001230 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:10:34 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-haproxy-nfs-cephfs-compute-0-cwhopp[96588]: [WARNING] 280/101034 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 1ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct  8 06:10:34 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:10:34 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v779: 353 pgs: 353 active+clean; 167 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 17 KiB/s wr, 79 op/s
Oct  8 06:10:34 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:10:34 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  8 06:10:34 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:10:34 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 06:10:34 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:10:34 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c003870 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:10:34 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:10:34 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:10:34 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:10:34.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:10:34 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:10:34 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90004790 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:10:35 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:10:35 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:10:35 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:10:35.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:10:35 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:10:35] "GET /metrics HTTP/1.1" 200 48461 "" "Prometheus/2.51.0"
Oct  8 06:10:35 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:10:35] "GET /metrics HTTP/1.1" 200 48461 "" "Prometheus/2.51.0"
Oct  8 06:10:35 np0005475493 podman[268825]: 2025-10-08 10:10:35.952057691 +0000 UTC m=+0.112327431 container health_status 750e81e9592502f2fa35d047c8ae9bd75eb38ed415148db558454f5281397e7f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Oct  8 06:10:36 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:10:36 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac94003aa0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:10:36 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v780: 353 pgs: 353 active+clean; 167 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 5.7 KiB/s rd, 4.7 KiB/s wr, 5 op/s
Oct  8 06:10:36 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:10:36 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac70001230 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:10:36 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:10:36 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:10:36 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:10:36.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:10:36 np0005475493 nova_compute[262220]: 2025-10-08 10:10:36.625 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:10:36 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:10:36 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c003870 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:10:37 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:10:37 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:10:37 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:10:37.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:10:37 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:10:37.124Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:10:37 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:10:37 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct  8 06:10:37 np0005475493 nova_compute[262220]: 2025-10-08 10:10:37.766 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:10:38 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:10:38 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90004790 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:10:38 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v781: 353 pgs: 353 active+clean; 167 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 5.7 KiB/s rd, 4.7 KiB/s wr, 5 op/s
Oct  8 06:10:38 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:10:38 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac94003aa0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:10:38 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:10:38 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:10:38 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:10:38.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:10:38 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:10:38 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac70001230 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:10:39 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:10:39 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:10:39 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:10:39 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:10:39.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:10:39 np0005475493 nova_compute[262220]: 2025-10-08 10:10:39.887 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:10:39 np0005475493 nova_compute[262220]: 2025-10-08 10:10:39.887 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:10:39 np0005475493 nova_compute[262220]: 2025-10-08 10:10:39.887 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:10:40 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:10:40 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c003870 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:10:40 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v782: 353 pgs: 353 active+clean; 200 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 376 KiB/s rd, 2.1 MiB/s wr, 70 op/s
Oct  8 06:10:40 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:10:40 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90004790 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:10:40 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:10:40 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:10:40 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:10:40.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:10:40 np0005475493 nova_compute[262220]: 2025-10-08 10:10:40.881 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:10:40 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:10:40 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac94003aa0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:10:41 np0005475493 nova_compute[262220]: 2025-10-08 10:10:41.041 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:10:41 np0005475493 nova_compute[262220]: 2025-10-08 10:10:41.042 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:10:41 np0005475493 nova_compute[262220]: 2025-10-08 10:10:41.042 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  8 06:10:41 np0005475493 nova_compute[262220]: 2025-10-08 10:10:41.042 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:10:41 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:10:41 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:10:41 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:10:41.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:10:41 np0005475493 nova_compute[262220]: 2025-10-08 10:10:41.194 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  8 06:10:41 np0005475493 nova_compute[262220]: 2025-10-08 10:10:41.195 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  8 06:10:41 np0005475493 nova_compute[262220]: 2025-10-08 10:10:41.195 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  8 06:10:41 np0005475493 nova_compute[262220]: 2025-10-08 10:10:41.195 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  8 06:10:41 np0005475493 nova_compute[262220]: 2025-10-08 10:10:41.196 2 DEBUG oslo_concurrency.processutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  8 06:10:41 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-haproxy-nfs-cephfs-compute-0-cwhopp[96588]: [WARNING] 280/101041 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct  8 06:10:41 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct  8 06:10:41 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1976522226' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  8 06:10:41 np0005475493 nova_compute[262220]: 2025-10-08 10:10:41.627 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:10:41 np0005475493 nova_compute[262220]: 2025-10-08 10:10:41.631 2 DEBUG oslo_concurrency.processutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  8 06:10:41 np0005475493 nova_compute[262220]: 2025-10-08 10:10:41.703 2 DEBUG nova.virt.libvirt.driver [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  8 06:10:41 np0005475493 nova_compute[262220]: 2025-10-08 10:10:41.704 2 DEBUG nova.virt.libvirt.driver [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  8 06:10:41 np0005475493 nova_compute[262220]: 2025-10-08 10:10:41.855 2 WARNING nova.virt.libvirt.driver [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  8 06:10:41 np0005475493 nova_compute[262220]: 2025-10-08 10:10:41.856 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4404MB free_disk=59.89728546142578GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  8 06:10:41 np0005475493 nova_compute[262220]: 2025-10-08 10:10:41.856 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  8 06:10:41 np0005475493 nova_compute[262220]: 2025-10-08 10:10:41.857 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  8 06:10:42 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:10:42 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac70001230 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:10:42 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v783: 353 pgs: 353 active+clean; 200 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 375 KiB/s rd, 2.1 MiB/s wr, 68 op/s
Oct  8 06:10:42 np0005475493 nova_compute[262220]: 2025-10-08 10:10:42.197 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Instance f49b788e-70d1-4bc2-9f90-381017f2b232 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  8 06:10:42 np0005475493 nova_compute[262220]: 2025-10-08 10:10:42.197 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  8 06:10:42 np0005475493 nova_compute[262220]: 2025-10-08 10:10:42.197 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  8 06:10:42 np0005475493 nova_compute[262220]: 2025-10-08 10:10:42.231 2 DEBUG oslo_concurrency.processutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  8 06:10:42 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:10:42 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c003870 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:10:42 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:10:42 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:10:42 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:10:42.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:10:42 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct  8 06:10:42 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2791682770' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  8 06:10:42 np0005475493 nova_compute[262220]: 2025-10-08 10:10:42.712 2 DEBUG oslo_concurrency.processutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  8 06:10:42 np0005475493 nova_compute[262220]: 2025-10-08 10:10:42.720 2 DEBUG nova.compute.provider_tree [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Inventory has not changed in ProviderTree for provider: 62e4b021-d3ae-43f9-883d-805e2c7d21a2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  8 06:10:42 np0005475493 nova_compute[262220]: 2025-10-08 10:10:42.768 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:10:42 np0005475493 nova_compute[262220]: 2025-10-08 10:10:42.817 2 DEBUG nova.scheduler.client.report [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Inventory has not changed for provider 62e4b021-d3ae-43f9-883d-805e2c7d21a2 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  8 06:10:42 np0005475493 podman[268904]: 2025-10-08 10:10:42.894474032 +0000 UTC m=+0.056526397 container health_status 96c17e36c3e57b043a764fc4d64e20610b8683e4881cc1857643eca7aeb37784 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct  8 06:10:42 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:10:42 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c003870 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:10:42 np0005475493 nova_compute[262220]: 2025-10-08 10:10:42.958 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  8 06:10:42 np0005475493 nova_compute[262220]: 2025-10-08 10:10:42.958 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.102s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  8 06:10:43 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:10:43 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:10:43 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:10:43.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:10:43 np0005475493 nova_compute[262220]: 2025-10-08 10:10:43.803 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:10:43 np0005475493 nova_compute[262220]: 2025-10-08 10:10:43.803 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  8 06:10:43 np0005475493 nova_compute[262220]: 2025-10-08 10:10:43.803 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  8 06:10:43 np0005475493 podman[268925]: 2025-10-08 10:10:43.886958906 +0000 UTC m=+0.052348950 container health_status 1ece418ccdf19a8019e8be8e665c938dc3c333817a211973536e2416f095c311 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct  8 06:10:44 np0005475493 nova_compute[262220]: 2025-10-08 10:10:44.044 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Acquiring lock "refresh_cache-f49b788e-70d1-4bc2-9f90-381017f2b232" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  8 06:10:44 np0005475493 nova_compute[262220]: 2025-10-08 10:10:44.044 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Acquired lock "refresh_cache-f49b788e-70d1-4bc2-9f90-381017f2b232" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  8 06:10:44 np0005475493 nova_compute[262220]: 2025-10-08 10:10:44.044 2 DEBUG nova.network.neutron [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] [instance: f49b788e-70d1-4bc2-9f90-381017f2b232] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  8 06:10:44 np0005475493 nova_compute[262220]: 2025-10-08 10:10:44.045 2 DEBUG nova.objects.instance [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lazy-loading 'info_cache' on Instance uuid f49b788e-70d1-4bc2-9f90-381017f2b232 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  8 06:10:44 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:10:44 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90004790 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:10:44 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:10:44 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v784: 353 pgs: 353 active+clean; 200 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 375 KiB/s rd, 2.1 MiB/s wr, 68 op/s
Oct  8 06:10:44 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:10:44 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac70001230 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:10:44 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:10:44 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:10:44 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:10:44.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:10:44 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:10:44 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca00020e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:10:45 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:10:45 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:10:45 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:10:45.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:10:45 np0005475493 nova_compute[262220]: 2025-10-08 10:10:45.618 2 DEBUG nova.network.neutron [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] [instance: f49b788e-70d1-4bc2-9f90-381017f2b232] Updating instance_info_cache with network_info: [{"id": "d6bc221b-bf28-4c61-b116-cd61209c7f31", "address": "fa:16:3e:9d:d1:5c", "network": {"id": "f5c6f88b-41ed-45ea-b491-931be9a75138", "bridge": "br-int", "label": "tempest-network-smoke--1726135850", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.179", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0edb2c85b88b4b168a4b2e8c5ed4a05c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6bc221b-bf", "ovs_interfaceid": "d6bc221b-bf28-4c61-b116-cd61209c7f31", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  8 06:10:45 np0005475493 nova_compute[262220]: 2025-10-08 10:10:45.656 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Releasing lock "refresh_cache-f49b788e-70d1-4bc2-9f90-381017f2b232" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  8 06:10:45 np0005475493 nova_compute[262220]: 2025-10-08 10:10:45.657 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] [instance: f49b788e-70d1-4bc2-9f90-381017f2b232] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  8 06:10:45 np0005475493 nova_compute[262220]: 2025-10-08 10:10:45.657 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:10:45 np0005475493 nova_compute[262220]: 2025-10-08 10:10:45.657 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:10:45 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:10:45] "GET /metrics HTTP/1.1" 200 48461 "" "Prometheus/2.51.0"
Oct  8 06:10:45 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:10:45] "GET /metrics HTTP/1.1" 200 48461 "" "Prometheus/2.51.0"
Oct  8 06:10:46 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:10:46 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c003870 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:10:46 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v785: 353 pgs: 353 active+clean; 200 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 370 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Oct  8 06:10:46 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:10:46 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90004790 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:10:46 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:10:46 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:10:46 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:10:46.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:10:46 np0005475493 nova_compute[262220]: 2025-10-08 10:10:46.630 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:10:46 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:10:46 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90004790 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:10:47 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:10:47 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:10:47 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:10:47.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:10:47 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:10:47.126Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:10:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] Optimize plan auto_2025-10-08_10:10:47
Oct  8 06:10:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  8 06:10:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] do_upmap
Oct  8 06:10:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] pools ['volumes', '.nfs', 'vms', 'default.rgw.log', 'default.rgw.control', '.mgr', 'cephfs.cephfs.data', '.rgw.root', 'cephfs.cephfs.meta', 'images', 'default.rgw.meta', 'backups']
Oct  8 06:10:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] prepared 0/10 upmap changes
Oct  8 06:10:47 np0005475493 nova_compute[262220]: 2025-10-08 10:10:47.769 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:10:47 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  8 06:10:47 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  8 06:10:47 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 06:10:47 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 06:10:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] _maybe_adjust
Oct  8 06:10:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:10:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  8 06:10:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:10:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0015193727819561111 of space, bias 1.0, pg target 0.45581183458683333 quantized to 32 (current 32)
Oct  8 06:10:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:10:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 06:10:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:10:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 06:10:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:10:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Oct  8 06:10:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:10:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  8 06:10:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:10:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 06:10:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:10:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct  8 06:10:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:10:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  8 06:10:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:10:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 06:10:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:10:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  8 06:10:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:10:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  8 06:10:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  8 06:10:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  8 06:10:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  8 06:10:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  8 06:10:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  8 06:10:48 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:10:48 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca00020e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:10:48 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v786: 353 pgs: 353 active+clean; 200 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 370 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Oct  8 06:10:48 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 06:10:48 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 06:10:48 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 06:10:48 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 06:10:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  8 06:10:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  8 06:10:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  8 06:10:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  8 06:10:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  8 06:10:48 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:10:48 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c003870 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:10:48 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:10:48 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:10:48 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:10:48.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:10:48 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:10:48 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac70001230 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:10:49 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:10:49 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:10:49 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:10:49 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:10:49.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:10:50 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:10:50 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90004790 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:10:50 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v787: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 389 KiB/s rd, 2.1 MiB/s wr, 94 op/s
Oct  8 06:10:50 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:10:50 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca00020e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:10:50 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:10:50 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:10:50 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:10:50.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:10:50 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:10:50 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c003870 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:10:51 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:10:51 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:10:51 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:10:51.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:10:51 np0005475493 nova_compute[262220]: 2025-10-08 10:10:51.633 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:10:52 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:10:52 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac70001230 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:10:52 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v788: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 17 KiB/s wr, 28 op/s
Oct  8 06:10:52 np0005475493 ovn_controller[153187]: 2025-10-08T10:10:52Z|00032|binding|INFO|Releasing lport 950da3ad-35fb-4b98-a8cb-0ee192607b20 from this chassis (sb_readonly=0)
Oct  8 06:10:52 np0005475493 nova_compute[262220]: 2025-10-08 10:10:52.215 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:10:52 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:10:52 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90004790 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:10:52 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:10:52 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:10:52 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:10:52.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:10:52 np0005475493 nova_compute[262220]: 2025-10-08 10:10:52.771 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:10:52 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:10:52 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca0002280 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:10:53 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:10:53 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:10:53 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:10:53.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:10:53 np0005475493 podman[268983]: 2025-10-08 10:10:53.910220827 +0000 UTC m=+0.075808232 container health_status 2ac58bf9d2c7421f24478941b0f23af12cc30298ac84467db16bf52ecb1157c3 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=iscsid, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct  8 06:10:54 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:10:54 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c003870 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:10:54 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:10:54 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v789: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 17 KiB/s wr, 28 op/s
Oct  8 06:10:54 np0005475493 nova_compute[262220]: 2025-10-08 10:10:54.258 2 DEBUG nova.compute.manager [req-ffc5d50d-66c1-448e-8eb8-d0ec7d1a7a25 req-8cc5e77a-bcb3-4e18-bc87-70bb063bae12 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: f49b788e-70d1-4bc2-9f90-381017f2b232] Received event network-changed-d6bc221b-bf28-4c61-b116-cd61209c7f31 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  8 06:10:54 np0005475493 nova_compute[262220]: 2025-10-08 10:10:54.258 2 DEBUG nova.compute.manager [req-ffc5d50d-66c1-448e-8eb8-d0ec7d1a7a25 req-8cc5e77a-bcb3-4e18-bc87-70bb063bae12 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: f49b788e-70d1-4bc2-9f90-381017f2b232] Refreshing instance network info cache due to event network-changed-d6bc221b-bf28-4c61-b116-cd61209c7f31. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  8 06:10:54 np0005475493 nova_compute[262220]: 2025-10-08 10:10:54.258 2 DEBUG oslo_concurrency.lockutils [req-ffc5d50d-66c1-448e-8eb8-d0ec7d1a7a25 req-8cc5e77a-bcb3-4e18-bc87-70bb063bae12 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Acquiring lock "refresh_cache-f49b788e-70d1-4bc2-9f90-381017f2b232" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  8 06:10:54 np0005475493 nova_compute[262220]: 2025-10-08 10:10:54.258 2 DEBUG oslo_concurrency.lockutils [req-ffc5d50d-66c1-448e-8eb8-d0ec7d1a7a25 req-8cc5e77a-bcb3-4e18-bc87-70bb063bae12 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Acquired lock "refresh_cache-f49b788e-70d1-4bc2-9f90-381017f2b232" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  8 06:10:54 np0005475493 nova_compute[262220]: 2025-10-08 10:10:54.259 2 DEBUG nova.network.neutron [req-ffc5d50d-66c1-448e-8eb8-d0ec7d1a7a25 req-8cc5e77a-bcb3-4e18-bc87-70bb063bae12 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: f49b788e-70d1-4bc2-9f90-381017f2b232] Refreshing network info cache for port d6bc221b-bf28-4c61-b116-cd61209c7f31 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  8 06:10:54 np0005475493 nova_compute[262220]: 2025-10-08 10:10:54.375 2 DEBUG oslo_concurrency.lockutils [None req-4ac922e2-0630-40c6-8951-49f328d415e4 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Acquiring lock "f49b788e-70d1-4bc2-9f90-381017f2b232" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  8 06:10:54 np0005475493 nova_compute[262220]: 2025-10-08 10:10:54.375 2 DEBUG oslo_concurrency.lockutils [None req-4ac922e2-0630-40c6-8951-49f328d415e4 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Lock "f49b788e-70d1-4bc2-9f90-381017f2b232" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  8 06:10:54 np0005475493 nova_compute[262220]: 2025-10-08 10:10:54.376 2 DEBUG oslo_concurrency.lockutils [None req-4ac922e2-0630-40c6-8951-49f328d415e4 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Acquiring lock "f49b788e-70d1-4bc2-9f90-381017f2b232-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  8 06:10:54 np0005475493 nova_compute[262220]: 2025-10-08 10:10:54.376 2 DEBUG oslo_concurrency.lockutils [None req-4ac922e2-0630-40c6-8951-49f328d415e4 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Lock "f49b788e-70d1-4bc2-9f90-381017f2b232-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  8 06:10:54 np0005475493 nova_compute[262220]: 2025-10-08 10:10:54.377 2 DEBUG oslo_concurrency.lockutils [None req-4ac922e2-0630-40c6-8951-49f328d415e4 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Lock "f49b788e-70d1-4bc2-9f90-381017f2b232-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  8 06:10:54 np0005475493 nova_compute[262220]: 2025-10-08 10:10:54.378 2 INFO nova.compute.manager [None req-4ac922e2-0630-40c6-8951-49f328d415e4 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: f49b788e-70d1-4bc2-9f90-381017f2b232] Terminating instance#033[00m
Oct  8 06:10:54 np0005475493 nova_compute[262220]: 2025-10-08 10:10:54.379 2 DEBUG nova.compute.manager [None req-4ac922e2-0630-40c6-8951-49f328d415e4 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: f49b788e-70d1-4bc2-9f90-381017f2b232] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  8 06:10:54 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:10:54 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac70001230 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:10:54 np0005475493 kernel: tapd6bc221b-bf (unregistering): left promiscuous mode
Oct  8 06:10:54 np0005475493 NetworkManager[44872]: <info>  [1759918254.4376] device (tapd6bc221b-bf): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  8 06:10:54 np0005475493 ovn_controller[153187]: 2025-10-08T10:10:54Z|00033|binding|INFO|Releasing lport d6bc221b-bf28-4c61-b116-cd61209c7f31 from this chassis (sb_readonly=0)
Oct  8 06:10:54 np0005475493 ovn_controller[153187]: 2025-10-08T10:10:54Z|00034|binding|INFO|Setting lport d6bc221b-bf28-4c61-b116-cd61209c7f31 down in Southbound
Oct  8 06:10:54 np0005475493 ovn_controller[153187]: 2025-10-08T10:10:54Z|00035|binding|INFO|Removing iface tapd6bc221b-bf ovn-installed in OVS
Oct  8 06:10:54 np0005475493 nova_compute[262220]: 2025-10-08 10:10:54.447 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:10:54 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:10:54 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:10:54 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:10:54.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:10:54 np0005475493 nova_compute[262220]: 2025-10-08 10:10:54.486 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:10:54 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:10:54.488 163175 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9d:d1:5c 10.100.0.6'], port_security=['fa:16:3e:9d:d1:5c 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'f49b788e-70d1-4bc2-9f90-381017f2b232', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f5c6f88b-41ed-45ea-b491-931be9a75138', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0edb2c85b88b4b168a4b2e8c5ed4a05c', 'neutron:revision_number': '4', 'neutron:security_group_ids': '1b714465-ebb6-4c8b-ab03-a9d6fbedd458', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e6475b99-4f25-4ccc-88e7-4eafaf6f3891, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f191f105a60>], logical_port=d6bc221b-bf28-4c61-b116-cd61209c7f31) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f191f105a60>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  8 06:10:54 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:10:54.489 163175 INFO neutron.agent.ovn.metadata.agent [-] Port d6bc221b-bf28-4c61-b116-cd61209c7f31 in datapath f5c6f88b-41ed-45ea-b491-931be9a75138 unbound from our chassis#033[00m
Oct  8 06:10:54 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:10:54.490 163175 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f5c6f88b-41ed-45ea-b491-931be9a75138, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  8 06:10:54 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:10:54.491 267781 DEBUG oslo.privsep.daemon [-] privsep: reply[ef9da7e1-229a-4e0e-994a-86f5a971ccd4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  8 06:10:54 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:10:54.491 163175 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-f5c6f88b-41ed-45ea-b491-931be9a75138 namespace which is not needed anymore#033[00m
Oct  8 06:10:54 np0005475493 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000001.scope: Deactivated successfully.
Oct  8 06:10:54 np0005475493 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000001.scope: Consumed 15.732s CPU time.
Oct  8 06:10:54 np0005475493 systemd-machined[216030]: Machine qemu-1-instance-00000001 terminated.
Oct  8 06:10:54 np0005475493 nova_compute[262220]: 2025-10-08 10:10:54.609 2 INFO nova.virt.libvirt.driver [-] [instance: f49b788e-70d1-4bc2-9f90-381017f2b232] Instance destroyed successfully.#033[00m
Oct  8 06:10:54 np0005475493 nova_compute[262220]: 2025-10-08 10:10:54.610 2 DEBUG nova.objects.instance [None req-4ac922e2-0630-40c6-8951-49f328d415e4 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Lazy-loading 'resources' on Instance uuid f49b788e-70d1-4bc2-9f90-381017f2b232 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  8 06:10:54 np0005475493 neutron-haproxy-ovnmeta-f5c6f88b-41ed-45ea-b491-931be9a75138[267905]: [NOTICE]   (267909) : haproxy version is 2.8.14-c23fe91
Oct  8 06:10:54 np0005475493 neutron-haproxy-ovnmeta-f5c6f88b-41ed-45ea-b491-931be9a75138[267905]: [NOTICE]   (267909) : path to executable is /usr/sbin/haproxy
Oct  8 06:10:54 np0005475493 neutron-haproxy-ovnmeta-f5c6f88b-41ed-45ea-b491-931be9a75138[267905]: [WARNING]  (267909) : Exiting Master process...
Oct  8 06:10:54 np0005475493 neutron-haproxy-ovnmeta-f5c6f88b-41ed-45ea-b491-931be9a75138[267905]: [ALERT]    (267909) : Current worker (267911) exited with code 143 (Terminated)
Oct  8 06:10:54 np0005475493 neutron-haproxy-ovnmeta-f5c6f88b-41ed-45ea-b491-931be9a75138[267905]: [WARNING]  (267909) : All workers exited. Exiting... (0)
Oct  8 06:10:54 np0005475493 systemd[1]: libpod-0efd5af81104fe944b7194eb793292499da3df8ecfcc2e35b2f5a1c79a3b1927.scope: Deactivated successfully.
Oct  8 06:10:54 np0005475493 nova_compute[262220]: 2025-10-08 10:10:54.636 2 DEBUG nova.virt.libvirt.vif [None req-4ac922e2-0630-40c6-8951-49f328d415e4 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-08T10:09:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1358472667',display_name='tempest-TestNetworkBasicOps-server-1358472667',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1358472667',id=1,image_ref='e5994bac-385d-4cfe-962e-386aa0559983',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGCqOiRkCvMZRP8fdEWleadJa9k0DhfKx++pZ4blF3y05LQ1KZbyE4MTPNAMp9BRrBdK92MH6DC+pII7aGjodGwK7AspsjQ0hDDswc17pIZ089tmxUxos+hWl7sAULow5Q==',key_name='tempest-TestNetworkBasicOps-1893605271',keypairs=<?>,launch_index=0,launched_at=2025-10-08T10:09:43Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='0edb2c85b88b4b168a4b2e8c5ed4a05c',ramdisk_id='',reservation_id='r-50tfjz8b',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e5994bac-385d-4cfe-962e-386aa0559983',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-139500885',owner_user_name='tempest-TestNetworkBasicOps-139500885-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-08T10:09:44Z,user_data=None,user_id='d50b19166a7245e390a6e29682191263',uuid=f49b788e-70d1-4bc2-9f90-381017f2b232,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d6bc221b-bf28-4c61-b116-cd61209c7f31", "address": "fa:16:3e:9d:d1:5c", "network": {"id": "f5c6f88b-41ed-45ea-b491-931be9a75138", "bridge": "br-int", "label": "tempest-network-smoke--1726135850", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.179", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0edb2c85b88b4b168a4b2e8c5ed4a05c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6bc221b-bf", "ovs_interfaceid": "d6bc221b-bf28-4c61-b116-cd61209c7f31", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  8 06:10:54 np0005475493 nova_compute[262220]: 2025-10-08 10:10:54.637 2 DEBUG nova.network.os_vif_util [None req-4ac922e2-0630-40c6-8951-49f328d415e4 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Converting VIF {"id": "d6bc221b-bf28-4c61-b116-cd61209c7f31", "address": "fa:16:3e:9d:d1:5c", "network": {"id": "f5c6f88b-41ed-45ea-b491-931be9a75138", "bridge": "br-int", "label": "tempest-network-smoke--1726135850", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.179", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0edb2c85b88b4b168a4b2e8c5ed4a05c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6bc221b-bf", "ovs_interfaceid": "d6bc221b-bf28-4c61-b116-cd61209c7f31", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  8 06:10:54 np0005475493 podman[269027]: 2025-10-08 10:10:54.638338455 +0000 UTC m=+0.054002435 container died 0efd5af81104fe944b7194eb793292499da3df8ecfcc2e35b2f5a1c79a3b1927 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-f5c6f88b-41ed-45ea-b491-931be9a75138, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct  8 06:10:54 np0005475493 nova_compute[262220]: 2025-10-08 10:10:54.638 2 DEBUG nova.network.os_vif_util [None req-4ac922e2-0630-40c6-8951-49f328d415e4 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:9d:d1:5c,bridge_name='br-int',has_traffic_filtering=True,id=d6bc221b-bf28-4c61-b116-cd61209c7f31,network=Network(f5c6f88b-41ed-45ea-b491-931be9a75138),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd6bc221b-bf') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct  8 06:10:54 np0005475493 nova_compute[262220]: 2025-10-08 10:10:54.639 2 DEBUG os_vif [None req-4ac922e2-0630-40c6-8951-49f328d415e4 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:9d:d1:5c,bridge_name='br-int',has_traffic_filtering=True,id=d6bc221b-bf28-4c61-b116-cd61209c7f31,network=Network(f5c6f88b-41ed-45ea-b491-931be9a75138),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd6bc221b-bf') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct  8 06:10:54 np0005475493 nova_compute[262220]: 2025-10-08 10:10:54.641 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  8 06:10:54 np0005475493 nova_compute[262220]: 2025-10-08 10:10:54.641 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd6bc221b-bf, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  8 06:10:54 np0005475493 nova_compute[262220]: 2025-10-08 10:10:54.645 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct  8 06:10:54 np0005475493 nova_compute[262220]: 2025-10-08 10:10:54.648 2 INFO os_vif [None req-4ac922e2-0630-40c6-8951-49f328d415e4 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:9d:d1:5c,bridge_name='br-int',has_traffic_filtering=True,id=d6bc221b-bf28-4c61-b116-cd61209c7f31,network=Network(f5c6f88b-41ed-45ea-b491-931be9a75138),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd6bc221b-bf')
Oct  8 06:10:54 np0005475493 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0efd5af81104fe944b7194eb793292499da3df8ecfcc2e35b2f5a1c79a3b1927-userdata-shm.mount: Deactivated successfully.
Oct  8 06:10:54 np0005475493 systemd[1]: var-lib-containers-storage-overlay-78e3d1aa5320eb20e28cf9285cbf8434fde889ae25e1684b2e2a512764f7589a-merged.mount: Deactivated successfully.
Oct  8 06:10:54 np0005475493 podman[269027]: 2025-10-08 10:10:54.680715747 +0000 UTC m=+0.096379727 container cleanup 0efd5af81104fe944b7194eb793292499da3df8ecfcc2e35b2f5a1c79a3b1927 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-f5c6f88b-41ed-45ea-b491-931be9a75138, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  8 06:10:54 np0005475493 systemd[1]: libpod-conmon-0efd5af81104fe944b7194eb793292499da3df8ecfcc2e35b2f5a1c79a3b1927.scope: Deactivated successfully.
Oct  8 06:10:54 np0005475493 podman[269082]: 2025-10-08 10:10:54.752914039 +0000 UTC m=+0.049093304 container remove 0efd5af81104fe944b7194eb793292499da3df8ecfcc2e35b2f5a1c79a3b1927 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-f5c6f88b-41ed-45ea-b491-931be9a75138, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct  8 06:10:54 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:10:54.761 267781 DEBUG oslo.privsep.daemon [-] privsep: reply[f0a6e149-6356-4dd0-8383-c488ce4c80a7]: (4, ('Wed Oct  8 10:10:54 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-f5c6f88b-41ed-45ea-b491-931be9a75138 (0efd5af81104fe944b7194eb793292499da3df8ecfcc2e35b2f5a1c79a3b1927)\n0efd5af81104fe944b7194eb793292499da3df8ecfcc2e35b2f5a1c79a3b1927\nWed Oct  8 10:10:54 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-f5c6f88b-41ed-45ea-b491-931be9a75138 (0efd5af81104fe944b7194eb793292499da3df8ecfcc2e35b2f5a1c79a3b1927)\n0efd5af81104fe944b7194eb793292499da3df8ecfcc2e35b2f5a1c79a3b1927\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  8 06:10:54 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:10:54.763 267781 DEBUG oslo.privsep.daemon [-] privsep: reply[8f72415d-1bcd-451c-9ff9-41c4d7bac199]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  8 06:10:54 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:10:54.764 163175 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf5c6f88b-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  8 06:10:54 np0005475493 kernel: tapf5c6f88b-40: left promiscuous mode
Oct  8 06:10:54 np0005475493 nova_compute[262220]: 2025-10-08 10:10:54.768 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  8 06:10:54 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:10:54.785 267781 DEBUG oslo.privsep.daemon [-] privsep: reply[6e30ce52-dd99-4309-a361-eba3cbe77ce7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  8 06:10:54 np0005475493 nova_compute[262220]: 2025-10-08 10:10:54.787 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  8 06:10:54 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:10:54.818 267781 DEBUG oslo.privsep.daemon [-] privsep: reply[1c01a8fa-07b5-408d-a924-8fb79bfc015e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  8 06:10:54 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:10:54.819 267781 DEBUG oslo.privsep.daemon [-] privsep: reply[a0c53f0c-2b6c-4ac4-907e-d7340e130098]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  8 06:10:54 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:10:54.835 267781 DEBUG oslo.privsep.daemon [-] privsep: reply[52fbd9a0-5f8d-4ba6-911f-e2f2cb6af048]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 414547, 'reachable_time': 44869, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 269148, 'error': None, 'target': 'ovnmeta-f5c6f88b-41ed-45ea-b491-931be9a75138', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  8 06:10:54 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:10:54.849 163290 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-f5c6f88b-41ed-45ea-b491-931be9a75138 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct  8 06:10:54 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:10:54.850 163290 DEBUG oslo.privsep.daemon [-] privsep: reply[4d5b25cf-149b-4c1b-8ba3-01c9b4f7def9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  8 06:10:54 np0005475493 systemd[1]: run-netns-ovnmeta\x2df5c6f88b\x2d41ed\x2d45ea\x2db491\x2d931be9a75138.mount: Deactivated successfully.
Oct  8 06:10:54 np0005475493 nova_compute[262220]: 2025-10-08 10:10:54.953 2 DEBUG nova.compute.manager [req-e5920ed0-6aca-4e87-8032-b4eb14cff0ee req-fd96448b-8ffd-474f-b491-fd71e51ba99d 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: f49b788e-70d1-4bc2-9f90-381017f2b232] Received event network-vif-unplugged-d6bc221b-bf28-4c61-b116-cd61209c7f31 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  8 06:10:54 np0005475493 nova_compute[262220]: 2025-10-08 10:10:54.953 2 DEBUG oslo_concurrency.lockutils [req-e5920ed0-6aca-4e87-8032-b4eb14cff0ee req-fd96448b-8ffd-474f-b491-fd71e51ba99d 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Acquiring lock "f49b788e-70d1-4bc2-9f90-381017f2b232-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  8 06:10:54 np0005475493 nova_compute[262220]: 2025-10-08 10:10:54.954 2 DEBUG oslo_concurrency.lockutils [req-e5920ed0-6aca-4e87-8032-b4eb14cff0ee req-fd96448b-8ffd-474f-b491-fd71e51ba99d 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Lock "f49b788e-70d1-4bc2-9f90-381017f2b232-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  8 06:10:54 np0005475493 nova_compute[262220]: 2025-10-08 10:10:54.954 2 DEBUG oslo_concurrency.lockutils [req-e5920ed0-6aca-4e87-8032-b4eb14cff0ee req-fd96448b-8ffd-474f-b491-fd71e51ba99d 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Lock "f49b788e-70d1-4bc2-9f90-381017f2b232-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  8 06:10:54 np0005475493 nova_compute[262220]: 2025-10-08 10:10:54.954 2 DEBUG nova.compute.manager [req-e5920ed0-6aca-4e87-8032-b4eb14cff0ee req-fd96448b-8ffd-474f-b491-fd71e51ba99d 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: f49b788e-70d1-4bc2-9f90-381017f2b232] No waiting events found dispatching network-vif-unplugged-d6bc221b-bf28-4c61-b116-cd61209c7f31 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  8 06:10:54 np0005475493 nova_compute[262220]: 2025-10-08 10:10:54.955 2 DEBUG nova.compute.manager [req-e5920ed0-6aca-4e87-8032-b4eb14cff0ee req-fd96448b-8ffd-474f-b491-fd71e51ba99d 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: f49b788e-70d1-4bc2-9f90-381017f2b232] Received event network-vif-unplugged-d6bc221b-bf28-4c61-b116-cd61209c7f31 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct  8 06:10:54 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:10:54 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90004790 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:10:55 np0005475493 nova_compute[262220]: 2025-10-08 10:10:55.090 2 INFO nova.virt.libvirt.driver [None req-4ac922e2-0630-40c6-8951-49f328d415e4 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: f49b788e-70d1-4bc2-9f90-381017f2b232] Deleting instance files /var/lib/nova/instances/f49b788e-70d1-4bc2-9f90-381017f2b232_del
Oct  8 06:10:55 np0005475493 nova_compute[262220]: 2025-10-08 10:10:55.091 2 INFO nova.virt.libvirt.driver [None req-4ac922e2-0630-40c6-8951-49f328d415e4 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: f49b788e-70d1-4bc2-9f90-381017f2b232] Deletion of /var/lib/nova/instances/f49b788e-70d1-4bc2-9f90-381017f2b232_del complete
Oct  8 06:10:55 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:10:55 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:10:55 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:10:55.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:10:55 np0005475493 nova_compute[262220]: 2025-10-08 10:10:55.193 2 DEBUG nova.virt.libvirt.host [None req-4ac922e2-0630-40c6-8951-49f328d415e4 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Checking UEFI support for host arch (x86_64) supports_uefi /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1754
Oct  8 06:10:55 np0005475493 nova_compute[262220]: 2025-10-08 10:10:55.194 2 INFO nova.virt.libvirt.host [None req-4ac922e2-0630-40c6-8951-49f328d415e4 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] UEFI support detected
Oct  8 06:10:55 np0005475493 nova_compute[262220]: 2025-10-08 10:10:55.195 2 INFO nova.compute.manager [None req-4ac922e2-0630-40c6-8951-49f328d415e4 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: f49b788e-70d1-4bc2-9f90-381017f2b232] Took 0.82 seconds to destroy the instance on the hypervisor.
Oct  8 06:10:55 np0005475493 nova_compute[262220]: 2025-10-08 10:10:55.196 2 DEBUG oslo.service.loopingcall [None req-4ac922e2-0630-40c6-8951-49f328d415e4 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct  8 06:10:55 np0005475493 nova_compute[262220]: 2025-10-08 10:10:55.197 2 DEBUG nova.compute.manager [-] [instance: f49b788e-70d1-4bc2-9f90-381017f2b232] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct  8 06:10:55 np0005475493 nova_compute[262220]: 2025-10-08 10:10:55.197 2 DEBUG nova.network.neutron [-] [instance: f49b788e-70d1-4bc2-9f90-381017f2b232] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct  8 06:10:55 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  8 06:10:55 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  8 06:10:55 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct  8 06:10:55 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  8 06:10:55 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct  8 06:10:55 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:10:55 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct  8 06:10:55 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:10:55 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct  8 06:10:55 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  8 06:10:55 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct  8 06:10:55 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  8 06:10:55 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  8 06:10:55 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  8 06:10:55 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:10:55] "GET /metrics HTTP/1.1" 200 48464 "" "Prometheus/2.51.0"
Oct  8 06:10:55 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:10:55] "GET /metrics HTTP/1.1" 200 48464 "" "Prometheus/2.51.0"
Oct  8 06:10:56 np0005475493 podman[269277]: 2025-10-08 10:10:56.041513089 +0000 UTC m=+0.040600694 container create 2d9a588ca6e45ce40b251ed01b9690c9c79d82cdf9a25ac2637bf79039340172 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_mahavira, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  8 06:10:56 np0005475493 systemd[1]: Started libpod-conmon-2d9a588ca6e45ce40b251ed01b9690c9c79d82cdf9a25ac2637bf79039340172.scope.
Oct  8 06:10:56 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:10:56 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca00022a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:10:56 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v790: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 3.2 KiB/s wr, 28 op/s
Oct  8 06:10:56 np0005475493 systemd[1]: Started libcrun container.
Oct  8 06:10:56 np0005475493 podman[269277]: 2025-10-08 10:10:56.026081523 +0000 UTC m=+0.025169148 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 06:10:56 np0005475493 podman[269277]: 2025-10-08 10:10:56.122620604 +0000 UTC m=+0.121708219 container init 2d9a588ca6e45ce40b251ed01b9690c9c79d82cdf9a25ac2637bf79039340172 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_mahavira, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct  8 06:10:56 np0005475493 podman[269277]: 2025-10-08 10:10:56.13132687 +0000 UTC m=+0.130414465 container start 2d9a588ca6e45ce40b251ed01b9690c9c79d82cdf9a25ac2637bf79039340172 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_mahavira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct  8 06:10:56 np0005475493 podman[269277]: 2025-10-08 10:10:56.135938912 +0000 UTC m=+0.135026537 container attach 2d9a588ca6e45ce40b251ed01b9690c9c79d82cdf9a25ac2637bf79039340172 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_mahavira, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default)
Oct  8 06:10:56 np0005475493 sweet_mahavira[269294]: 167 167
Oct  8 06:10:56 np0005475493 systemd[1]: libpod-2d9a588ca6e45ce40b251ed01b9690c9c79d82cdf9a25ac2637bf79039340172.scope: Deactivated successfully.
Oct  8 06:10:56 np0005475493 podman[269277]: 2025-10-08 10:10:56.139336243 +0000 UTC m=+0.138423838 container died 2d9a588ca6e45ce40b251ed01b9690c9c79d82cdf9a25ac2637bf79039340172 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_mahavira, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  8 06:10:56 np0005475493 systemd[1]: var-lib-containers-storage-overlay-d80a2554b001469d27cd93a27bdf001601f0b2c2b5f0dcda226fd62219eecdbe-merged.mount: Deactivated successfully.
Oct  8 06:10:56 np0005475493 podman[269277]: 2025-10-08 10:10:56.185130548 +0000 UTC m=+0.184218153 container remove 2d9a588ca6e45ce40b251ed01b9690c9c79d82cdf9a25ac2637bf79039340172 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_mahavira, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  8 06:10:56 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  8 06:10:56 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:10:56 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:10:56 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  8 06:10:56 np0005475493 systemd[1]: libpod-conmon-2d9a588ca6e45ce40b251ed01b9690c9c79d82cdf9a25ac2637bf79039340172.scope: Deactivated successfully.
Oct  8 06:10:56 np0005475493 nova_compute[262220]: 2025-10-08 10:10:56.221 2 DEBUG nova.network.neutron [req-ffc5d50d-66c1-448e-8eb8-d0ec7d1a7a25 req-8cc5e77a-bcb3-4e18-bc87-70bb063bae12 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: f49b788e-70d1-4bc2-9f90-381017f2b232] Updated VIF entry in instance network info cache for port d6bc221b-bf28-4c61-b116-cd61209c7f31. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  8 06:10:56 np0005475493 nova_compute[262220]: 2025-10-08 10:10:56.223 2 DEBUG nova.network.neutron [req-ffc5d50d-66c1-448e-8eb8-d0ec7d1a7a25 req-8cc5e77a-bcb3-4e18-bc87-70bb063bae12 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: f49b788e-70d1-4bc2-9f90-381017f2b232] Updating instance_info_cache with network_info: [{"id": "d6bc221b-bf28-4c61-b116-cd61209c7f31", "address": "fa:16:3e:9d:d1:5c", "network": {"id": "f5c6f88b-41ed-45ea-b491-931be9a75138", "bridge": "br-int", "label": "tempest-network-smoke--1726135850", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0edb2c85b88b4b168a4b2e8c5ed4a05c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6bc221b-bf", "ovs_interfaceid": "d6bc221b-bf28-4c61-b116-cd61209c7f31", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  8 06:10:56 np0005475493 podman[269317]: 2025-10-08 10:10:56.371942784 +0000 UTC m=+0.055138822 container create c5335d511a0f91cfa82247b9f3110db348d7e037d689f9b91dd4f54e8685be95 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_ptolemy, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Oct  8 06:10:56 np0005475493 systemd[1]: Started libpod-conmon-c5335d511a0f91cfa82247b9f3110db348d7e037d689f9b91dd4f54e8685be95.scope.
Oct  8 06:10:56 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:10:56 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c003870 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:10:56 np0005475493 podman[269317]: 2025-10-08 10:10:56.348159563 +0000 UTC m=+0.031355671 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 06:10:56 np0005475493 systemd[1]: Started libcrun container.
Oct  8 06:10:56 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e805d3b0f23eecf34c9fd92fee8abf7fbc87f114438935006337ffddfef2834a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  8 06:10:56 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e805d3b0f23eecf34c9fd92fee8abf7fbc87f114438935006337ffddfef2834a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 06:10:56 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e805d3b0f23eecf34c9fd92fee8abf7fbc87f114438935006337ffddfef2834a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 06:10:56 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e805d3b0f23eecf34c9fd92fee8abf7fbc87f114438935006337ffddfef2834a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  8 06:10:56 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e805d3b0f23eecf34c9fd92fee8abf7fbc87f114438935006337ffddfef2834a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  8 06:10:56 np0005475493 podman[269317]: 2025-10-08 10:10:56.466595114 +0000 UTC m=+0.149791152 container init c5335d511a0f91cfa82247b9f3110db348d7e037d689f9b91dd4f54e8685be95 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_ptolemy, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  8 06:10:56 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:10:56 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:10:56 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:10:56.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:10:56 np0005475493 podman[269317]: 2025-10-08 10:10:56.476562821 +0000 UTC m=+0.159758859 container start c5335d511a0f91cfa82247b9f3110db348d7e037d689f9b91dd4f54e8685be95 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_ptolemy, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325)
Oct  8 06:10:56 np0005475493 podman[269317]: 2025-10-08 10:10:56.481075389 +0000 UTC m=+0.164271437 container attach c5335d511a0f91cfa82247b9f3110db348d7e037d689f9b91dd4f54e8685be95 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_ptolemy, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  8 06:10:56 np0005475493 nova_compute[262220]: 2025-10-08 10:10:56.518 2 DEBUG oslo_concurrency.lockutils [req-ffc5d50d-66c1-448e-8eb8-d0ec7d1a7a25 req-8cc5e77a-bcb3-4e18-bc87-70bb063bae12 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Releasing lock "refresh_cache-f49b788e-70d1-4bc2-9f90-381017f2b232" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  8 06:10:56 np0005475493 quizzical_ptolemy[269334]: --> passed data devices: 0 physical, 1 LVM
Oct  8 06:10:56 np0005475493 quizzical_ptolemy[269334]: --> All data devices are unavailable
Oct  8 06:10:56 np0005475493 systemd[1]: libpod-c5335d511a0f91cfa82247b9f3110db348d7e037d689f9b91dd4f54e8685be95.scope: Deactivated successfully.
Oct  8 06:10:56 np0005475493 conmon[269334]: conmon c5335d511a0f91cfa822 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c5335d511a0f91cfa82247b9f3110db348d7e037d689f9b91dd4f54e8685be95.scope/container/memory.events
Oct  8 06:10:56 np0005475493 podman[269317]: 2025-10-08 10:10:56.861131855 +0000 UTC m=+0.544327873 container died c5335d511a0f91cfa82247b9f3110db348d7e037d689f9b91dd4f54e8685be95 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_ptolemy, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  8 06:10:56 np0005475493 systemd[1]: var-lib-containers-storage-overlay-e805d3b0f23eecf34c9fd92fee8abf7fbc87f114438935006337ffddfef2834a-merged.mount: Deactivated successfully.
Oct  8 06:10:56 np0005475493 podman[269317]: 2025-10-08 10:10:56.901467819 +0000 UTC m=+0.584663837 container remove c5335d511a0f91cfa82247b9f3110db348d7e037d689f9b91dd4f54e8685be95 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_ptolemy, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  8 06:10:56 np0005475493 systemd[1]: libpod-conmon-c5335d511a0f91cfa82247b9f3110db348d7e037d689f9b91dd4f54e8685be95.scope: Deactivated successfully.
Oct  8 06:10:56 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:10:56 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac70001230 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:10:57 np0005475493 nova_compute[262220]: 2025-10-08 10:10:57.039 2 DEBUG nova.compute.manager [req-4bd4c50d-20e5-4093-ad77-6a0c8f3ea410 req-b4434f43-329b-4808-a2ec-d638eb2f16ce 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: f49b788e-70d1-4bc2-9f90-381017f2b232] Received event network-vif-plugged-d6bc221b-bf28-4c61-b116-cd61209c7f31 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  8 06:10:57 np0005475493 nova_compute[262220]: 2025-10-08 10:10:57.040 2 DEBUG oslo_concurrency.lockutils [req-4bd4c50d-20e5-4093-ad77-6a0c8f3ea410 req-b4434f43-329b-4808-a2ec-d638eb2f16ce 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Acquiring lock "f49b788e-70d1-4bc2-9f90-381017f2b232-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  8 06:10:57 np0005475493 nova_compute[262220]: 2025-10-08 10:10:57.040 2 DEBUG oslo_concurrency.lockutils [req-4bd4c50d-20e5-4093-ad77-6a0c8f3ea410 req-b4434f43-329b-4808-a2ec-d638eb2f16ce 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Lock "f49b788e-70d1-4bc2-9f90-381017f2b232-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  8 06:10:57 np0005475493 nova_compute[262220]: 2025-10-08 10:10:57.041 2 DEBUG oslo_concurrency.lockutils [req-4bd4c50d-20e5-4093-ad77-6a0c8f3ea410 req-b4434f43-329b-4808-a2ec-d638eb2f16ce 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Lock "f49b788e-70d1-4bc2-9f90-381017f2b232-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  8 06:10:57 np0005475493 nova_compute[262220]: 2025-10-08 10:10:57.041 2 DEBUG nova.compute.manager [req-4bd4c50d-20e5-4093-ad77-6a0c8f3ea410 req-b4434f43-329b-4808-a2ec-d638eb2f16ce 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: f49b788e-70d1-4bc2-9f90-381017f2b232] No waiting events found dispatching network-vif-plugged-d6bc221b-bf28-4c61-b116-cd61209c7f31 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  8 06:10:57 np0005475493 nova_compute[262220]: 2025-10-08 10:10:57.041 2 WARNING nova.compute.manager [req-4bd4c50d-20e5-4093-ad77-6a0c8f3ea410 req-b4434f43-329b-4808-a2ec-d638eb2f16ce 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: f49b788e-70d1-4bc2-9f90-381017f2b232] Received unexpected event network-vif-plugged-d6bc221b-bf28-4c61-b116-cd61209c7f31 for instance with vm_state active and task_state deleting.#033[00m
Oct  8 06:10:57 np0005475493 nova_compute[262220]: 2025-10-08 10:10:57.061 2 DEBUG nova.network.neutron [-] [instance: f49b788e-70d1-4bc2-9f90-381017f2b232] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  8 06:10:57 np0005475493 nova_compute[262220]: 2025-10-08 10:10:57.080 2 INFO nova.compute.manager [-] [instance: f49b788e-70d1-4bc2-9f90-381017f2b232] Took 1.88 seconds to deallocate network for instance.#033[00m
Oct  8 06:10:57 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:10:57.127Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct  8 06:10:57 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:10:57.128Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:10:57 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:10:57 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:10:57 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:10:57.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:10:57 np0005475493 nova_compute[262220]: 2025-10-08 10:10:57.129 2 DEBUG nova.compute.manager [req-67c2bf7d-64a5-4b56-ab38-76ecbfc8e0e0 req-e2a8d733-e0d1-4600-a37a-73bd3ee92768 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: f49b788e-70d1-4bc2-9f90-381017f2b232] Received event network-vif-deleted-d6bc221b-bf28-4c61-b116-cd61209c7f31 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  8 06:10:57 np0005475493 nova_compute[262220]: 2025-10-08 10:10:57.138 2 DEBUG oslo_concurrency.lockutils [None req-4ac922e2-0630-40c6-8951-49f328d415e4 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  8 06:10:57 np0005475493 nova_compute[262220]: 2025-10-08 10:10:57.138 2 DEBUG oslo_concurrency.lockutils [None req-4ac922e2-0630-40c6-8951-49f328d415e4 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  8 06:10:57 np0005475493 nova_compute[262220]: 2025-10-08 10:10:57.187 2 DEBUG oslo_concurrency.processutils [None req-4ac922e2-0630-40c6-8951-49f328d415e4 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  8 06:10:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:10:57.408 163175 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  8 06:10:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:10:57.409 163175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  8 06:10:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:10:57.409 163175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  8 06:10:57 np0005475493 podman[269474]: 2025-10-08 10:10:57.554550153 +0000 UTC m=+0.073824146 container create 402c737ac0ee75732dbffac9d328ea297f5d84c857a4ce442bdebcaaf2dea52a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_zhukovsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct  8 06:10:57 np0005475493 podman[269474]: 2025-10-08 10:10:57.523445561 +0000 UTC m=+0.042719584 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 06:10:57 np0005475493 systemd[1]: Started libpod-conmon-402c737ac0ee75732dbffac9d328ea297f5d84c857a4ce442bdebcaaf2dea52a.scope.
Oct  8 06:10:57 np0005475493 systemd[1]: Started libcrun container.
Oct  8 06:10:57 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct  8 06:10:57 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/285040432' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  8 06:10:57 np0005475493 podman[269474]: 2025-10-08 10:10:57.664799105 +0000 UTC m=+0.184073118 container init 402c737ac0ee75732dbffac9d328ea297f5d84c857a4ce442bdebcaaf2dea52a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_zhukovsky, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  8 06:10:57 np0005475493 podman[269474]: 2025-10-08 10:10:57.671554337 +0000 UTC m=+0.190828330 container start 402c737ac0ee75732dbffac9d328ea297f5d84c857a4ce442bdebcaaf2dea52a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_zhukovsky, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  8 06:10:57 np0005475493 podman[269474]: 2025-10-08 10:10:57.675093803 +0000 UTC m=+0.194367826 container attach 402c737ac0ee75732dbffac9d328ea297f5d84c857a4ce442bdebcaaf2dea52a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_zhukovsky, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Oct  8 06:10:57 np0005475493 goofy_zhukovsky[269490]: 167 167
Oct  8 06:10:57 np0005475493 systemd[1]: libpod-402c737ac0ee75732dbffac9d328ea297f5d84c857a4ce442bdebcaaf2dea52a.scope: Deactivated successfully.
Oct  8 06:10:57 np0005475493 podman[269474]: 2025-10-08 10:10:57.677523543 +0000 UTC m=+0.196797526 container died 402c737ac0ee75732dbffac9d328ea297f5d84c857a4ce442bdebcaaf2dea52a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_zhukovsky, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  8 06:10:57 np0005475493 nova_compute[262220]: 2025-10-08 10:10:57.680 2 DEBUG oslo_concurrency.processutils [None req-4ac922e2-0630-40c6-8951-49f328d415e4 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  8 06:10:57 np0005475493 nova_compute[262220]: 2025-10-08 10:10:57.692 2 DEBUG nova.compute.provider_tree [None req-4ac922e2-0630-40c6-8951-49f328d415e4 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Inventory has not changed in ProviderTree for provider: 62e4b021-d3ae-43f9-883d-805e2c7d21a2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  8 06:10:57 np0005475493 systemd[1]: var-lib-containers-storage-overlay-bc33d6d77ab6b74a557550b2163513f654d27a164ffdc2444660c5c1db0d1f3f-merged.mount: Deactivated successfully.
Oct  8 06:10:57 np0005475493 podman[269474]: 2025-10-08 10:10:57.720615449 +0000 UTC m=+0.239889442 container remove 402c737ac0ee75732dbffac9d328ea297f5d84c857a4ce442bdebcaaf2dea52a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_zhukovsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Oct  8 06:10:57 np0005475493 systemd[1]: libpod-conmon-402c737ac0ee75732dbffac9d328ea297f5d84c857a4ce442bdebcaaf2dea52a.scope: Deactivated successfully.
Oct  8 06:10:57 np0005475493 nova_compute[262220]: 2025-10-08 10:10:57.773 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:10:57 np0005475493 nova_compute[262220]: 2025-10-08 10:10:57.898 2 DEBUG nova.scheduler.client.report [None req-4ac922e2-0630-40c6-8951-49f328d415e4 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Inventory has not changed for provider 62e4b021-d3ae-43f9-883d-805e2c7d21a2 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  8 06:10:57 np0005475493 podman[269518]: 2025-10-08 10:10:57.910885559 +0000 UTC m=+0.052374082 container create 5cd83e9ec5080b13ac43831e6ccd8cd39405c5e465bbbac263fbddccf74f8803 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_colden, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True)
Oct  8 06:10:57 np0005475493 systemd[1]: Started libpod-conmon-5cd83e9ec5080b13ac43831e6ccd8cd39405c5e465bbbac263fbddccf74f8803.scope.
Oct  8 06:10:57 np0005475493 podman[269518]: 2025-10-08 10:10:57.892608138 +0000 UTC m=+0.034096681 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 06:10:57 np0005475493 systemd[1]: Started libcrun container.
Oct  8 06:10:58 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f6b332470eb9df3c68e58d1c52413bce77286115e17f1cdee52344f177685e9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  8 06:10:58 np0005475493 nova_compute[262220]: 2025-10-08 10:10:58.003 2 DEBUG oslo_concurrency.lockutils [None req-4ac922e2-0630-40c6-8951-49f328d415e4 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.865s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  8 06:10:58 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f6b332470eb9df3c68e58d1c52413bce77286115e17f1cdee52344f177685e9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 06:10:58 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f6b332470eb9df3c68e58d1c52413bce77286115e17f1cdee52344f177685e9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 06:10:58 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f6b332470eb9df3c68e58d1c52413bce77286115e17f1cdee52344f177685e9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  8 06:10:58 np0005475493 podman[269518]: 2025-10-08 10:10:58.016477818 +0000 UTC m=+0.157966341 container init 5cd83e9ec5080b13ac43831e6ccd8cd39405c5e465bbbac263fbddccf74f8803 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_colden, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Oct  8 06:10:58 np0005475493 podman[269518]: 2025-10-08 10:10:58.023505688 +0000 UTC m=+0.164994221 container start 5cd83e9ec5080b13ac43831e6ccd8cd39405c5e465bbbac263fbddccf74f8803 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_colden, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True)
Oct  8 06:10:58 np0005475493 podman[269518]: 2025-10-08 10:10:58.026986283 +0000 UTC m=+0.168474826 container attach 5cd83e9ec5080b13ac43831e6ccd8cd39405c5e465bbbac263fbddccf74f8803 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_colden, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default)
Oct  8 06:10:58 np0005475493 nova_compute[262220]: 2025-10-08 10:10:58.042 2 INFO nova.scheduler.client.report [None req-4ac922e2-0630-40c6-8951-49f328d415e4 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Deleted allocations for instance f49b788e-70d1-4bc2-9f90-381017f2b232#033[00m
Oct  8 06:10:58 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:10:58 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac70001230 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:10:58 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v791: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 3.2 KiB/s wr, 28 op/s
Oct  8 06:10:58 np0005475493 nova_compute[262220]: 2025-10-08 10:10:58.101 2 DEBUG oslo_concurrency.lockutils [None req-4ac922e2-0630-40c6-8951-49f328d415e4 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Lock "f49b788e-70d1-4bc2-9f90-381017f2b232" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.726s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  8 06:10:58 np0005475493 festive_colden[269535]: {
Oct  8 06:10:58 np0005475493 festive_colden[269535]:    "1": [
Oct  8 06:10:58 np0005475493 festive_colden[269535]:        {
Oct  8 06:10:58 np0005475493 festive_colden[269535]:            "devices": [
Oct  8 06:10:58 np0005475493 festive_colden[269535]:                "/dev/loop3"
Oct  8 06:10:58 np0005475493 festive_colden[269535]:            ],
Oct  8 06:10:58 np0005475493 festive_colden[269535]:            "lv_name": "ceph_lv0",
Oct  8 06:10:58 np0005475493 festive_colden[269535]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  8 06:10:58 np0005475493 festive_colden[269535]:            "lv_size": "21470642176",
Oct  8 06:10:58 np0005475493 festive_colden[269535]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=787292cc-8154-50c4-9e00-e9be3e817149,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=85fe3e7b-5e0f-4a19-934c-310215b2e933,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct  8 06:10:58 np0005475493 festive_colden[269535]:            "lv_uuid": "znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj",
Oct  8 06:10:58 np0005475493 festive_colden[269535]:            "name": "ceph_lv0",
Oct  8 06:10:58 np0005475493 festive_colden[269535]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  8 06:10:58 np0005475493 festive_colden[269535]:            "tags": {
Oct  8 06:10:58 np0005475493 festive_colden[269535]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  8 06:10:58 np0005475493 festive_colden[269535]:                "ceph.block_uuid": "znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj",
Oct  8 06:10:58 np0005475493 festive_colden[269535]:                "ceph.cephx_lockbox_secret": "",
Oct  8 06:10:58 np0005475493 festive_colden[269535]:                "ceph.cluster_fsid": "787292cc-8154-50c4-9e00-e9be3e817149",
Oct  8 06:10:58 np0005475493 festive_colden[269535]:                "ceph.cluster_name": "ceph",
Oct  8 06:10:58 np0005475493 festive_colden[269535]:                "ceph.crush_device_class": "",
Oct  8 06:10:58 np0005475493 festive_colden[269535]:                "ceph.encrypted": "0",
Oct  8 06:10:58 np0005475493 festive_colden[269535]:                "ceph.osd_fsid": "85fe3e7b-5e0f-4a19-934c-310215b2e933",
Oct  8 06:10:58 np0005475493 festive_colden[269535]:                "ceph.osd_id": "1",
Oct  8 06:10:58 np0005475493 festive_colden[269535]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  8 06:10:58 np0005475493 festive_colden[269535]:                "ceph.type": "block",
Oct  8 06:10:58 np0005475493 festive_colden[269535]:                "ceph.vdo": "0",
Oct  8 06:10:58 np0005475493 festive_colden[269535]:                "ceph.with_tpm": "0"
Oct  8 06:10:58 np0005475493 festive_colden[269535]:            },
Oct  8 06:10:58 np0005475493 festive_colden[269535]:            "type": "block",
Oct  8 06:10:58 np0005475493 festive_colden[269535]:            "vg_name": "ceph_vg0"
Oct  8 06:10:58 np0005475493 festive_colden[269535]:        }
Oct  8 06:10:58 np0005475493 festive_colden[269535]:    ]
Oct  8 06:10:58 np0005475493 festive_colden[269535]: }
Oct  8 06:10:58 np0005475493 systemd[1]: libpod-5cd83e9ec5080b13ac43831e6ccd8cd39405c5e465bbbac263fbddccf74f8803.scope: Deactivated successfully.
Oct  8 06:10:58 np0005475493 podman[269518]: 2025-10-08 10:10:58.305976008 +0000 UTC m=+0.447464531 container died 5cd83e9ec5080b13ac43831e6ccd8cd39405c5e465bbbac263fbddccf74f8803 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_colden, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True)
Oct  8 06:10:58 np0005475493 systemd[1]: var-lib-containers-storage-overlay-8f6b332470eb9df3c68e58d1c52413bce77286115e17f1cdee52344f177685e9-merged.mount: Deactivated successfully.
Oct  8 06:10:58 np0005475493 podman[269518]: 2025-10-08 10:10:58.348101621 +0000 UTC m=+0.489590144 container remove 5cd83e9ec5080b13ac43831e6ccd8cd39405c5e465bbbac263fbddccf74f8803 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_colden, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  8 06:10:58 np0005475493 systemd[1]: libpod-conmon-5cd83e9ec5080b13ac43831e6ccd8cd39405c5e465bbbac263fbddccf74f8803.scope: Deactivated successfully.
Oct  8 06:10:58 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:10:58 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca00022a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:10:58 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:10:58 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:10:58 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:10:58.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:10:58 np0005475493 podman[269648]: 2025-10-08 10:10:58.896527258 +0000 UTC m=+0.041434683 container create 88dc17a31515280b41d9a1fa364c4d1dcb59a5bb2786570405afc316393833ed (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_galois, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  8 06:10:58 np0005475493 systemd[1]: Started libpod-conmon-88dc17a31515280b41d9a1fa364c4d1dcb59a5bb2786570405afc316393833ed.scope.
Oct  8 06:10:58 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:10:58 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c003870 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:10:58 np0005475493 systemd[1]: Started libcrun container.
Oct  8 06:10:58 np0005475493 podman[269648]: 2025-10-08 10:10:58.880220211 +0000 UTC m=+0.025127656 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 06:10:58 np0005475493 podman[269648]: 2025-10-08 10:10:58.978904743 +0000 UTC m=+0.123812188 container init 88dc17a31515280b41d9a1fa364c4d1dcb59a5bb2786570405afc316393833ed (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_galois, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Oct  8 06:10:58 np0005475493 podman[269648]: 2025-10-08 10:10:58.991960243 +0000 UTC m=+0.136867678 container start 88dc17a31515280b41d9a1fa364c4d1dcb59a5bb2786570405afc316393833ed (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_galois, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Oct  8 06:10:58 np0005475493 podman[269648]: 2025-10-08 10:10:58.995934873 +0000 UTC m=+0.140842298 container attach 88dc17a31515280b41d9a1fa364c4d1dcb59a5bb2786570405afc316393833ed (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_galois, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True)
Oct  8 06:10:58 np0005475493 lucid_galois[269664]: 167 167
Oct  8 06:10:58 np0005475493 systemd[1]: libpod-88dc17a31515280b41d9a1fa364c4d1dcb59a5bb2786570405afc316393833ed.scope: Deactivated successfully.
Oct  8 06:10:58 np0005475493 podman[269648]: 2025-10-08 10:10:58.997423952 +0000 UTC m=+0.142331377 container died 88dc17a31515280b41d9a1fa364c4d1dcb59a5bb2786570405afc316393833ed (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_galois, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  8 06:10:59 np0005475493 systemd[1]: var-lib-containers-storage-overlay-bccc18c5e20d44ab47bb3838995fb5f35a31121347a1beb9236df24684c714d0-merged.mount: Deactivated successfully.
Oct  8 06:10:59 np0005475493 podman[269648]: 2025-10-08 10:10:59.037773407 +0000 UTC m=+0.182680832 container remove 88dc17a31515280b41d9a1fa364c4d1dcb59a5bb2786570405afc316393833ed (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_galois, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  8 06:10:59 np0005475493 systemd[1]: libpod-conmon-88dc17a31515280b41d9a1fa364c4d1dcb59a5bb2786570405afc316393833ed.scope: Deactivated successfully.
Oct  8 06:10:59 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:10:59 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:10:59 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:10:59 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:10:59.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:10:59 np0005475493 podman[269689]: 2025-10-08 10:10:59.191732845 +0000 UTC m=+0.035391673 container create 4b5d314d8a48b0c41396f6482776ddba7c0ef7cfa4d41f0ce25eec669df90883 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_thompson, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  8 06:10:59 np0005475493 systemd[1]: Started libpod-conmon-4b5d314d8a48b0c41396f6482776ddba7c0ef7cfa4d41f0ce25eec669df90883.scope.
Oct  8 06:10:59 np0005475493 systemd[1]: Started libcrun container.
Oct  8 06:10:59 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc1fd6af2ca569e2d4f2ca0d5ace606ce8d6802a56f214dd0e38119e5ec9cd44/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  8 06:10:59 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc1fd6af2ca569e2d4f2ca0d5ace606ce8d6802a56f214dd0e38119e5ec9cd44/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 06:10:59 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc1fd6af2ca569e2d4f2ca0d5ace606ce8d6802a56f214dd0e38119e5ec9cd44/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 06:10:59 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc1fd6af2ca569e2d4f2ca0d5ace606ce8d6802a56f214dd0e38119e5ec9cd44/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  8 06:10:59 np0005475493 podman[269689]: 2025-10-08 10:10:59.261563649 +0000 UTC m=+0.105222497 container init 4b5d314d8a48b0c41396f6482776ddba7c0ef7cfa4d41f0ce25eec669df90883 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_thompson, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  8 06:10:59 np0005475493 podman[269689]: 2025-10-08 10:10:59.268022311 +0000 UTC m=+0.111681139 container start 4b5d314d8a48b0c41396f6482776ddba7c0ef7cfa4d41f0ce25eec669df90883 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_thompson, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid)
Oct  8 06:10:59 np0005475493 podman[269689]: 2025-10-08 10:10:59.176527826 +0000 UTC m=+0.020186664 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 06:10:59 np0005475493 podman[269689]: 2025-10-08 10:10:59.273439589 +0000 UTC m=+0.117098447 container attach 4b5d314d8a48b0c41396f6482776ddba7c0ef7cfa4d41f0ce25eec669df90883 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_thompson, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  8 06:10:59 np0005475493 nova_compute[262220]: 2025-10-08 10:10:59.643 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:10:59 np0005475493 lvm[269780]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct  8 06:10:59 np0005475493 lvm[269780]: VG ceph_vg0 finished
Oct  8 06:10:59 np0005475493 wizardly_thompson[269706]: {}
Oct  8 06:11:00 np0005475493 systemd[1]: libpod-4b5d314d8a48b0c41396f6482776ddba7c0ef7cfa4d41f0ce25eec669df90883.scope: Deactivated successfully.
Oct  8 06:11:00 np0005475493 systemd[1]: libpod-4b5d314d8a48b0c41396f6482776ddba7c0ef7cfa4d41f0ce25eec669df90883.scope: Consumed 1.126s CPU time.
Oct  8 06:11:00 np0005475493 podman[269785]: 2025-10-08 10:11:00.071685952 +0000 UTC m=+0.027000839 container died 4b5d314d8a48b0c41396f6482776ddba7c0ef7cfa4d41f0ce25eec669df90883 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_thompson, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  8 06:11:00 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:11:00 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac70001230 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:11:00 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v792: 353 pgs: 353 active+clean; 41 MiB data, 258 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 4.3 KiB/s wr, 56 op/s
Oct  8 06:11:00 np0005475493 systemd[1]: var-lib-containers-storage-overlay-dc1fd6af2ca569e2d4f2ca0d5ace606ce8d6802a56f214dd0e38119e5ec9cd44-merged.mount: Deactivated successfully.
Oct  8 06:11:00 np0005475493 podman[269785]: 2025-10-08 10:11:00.116853256 +0000 UTC m=+0.072168123 container remove 4b5d314d8a48b0c41396f6482776ddba7c0ef7cfa4d41f0ce25eec669df90883 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_thompson, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  8 06:11:00 np0005475493 systemd[1]: libpod-conmon-4b5d314d8a48b0c41396f6482776ddba7c0ef7cfa4d41f0ce25eec669df90883.scope: Deactivated successfully.
Oct  8 06:11:00 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  8 06:11:00 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:11:00 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  8 06:11:00 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:11:00 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:11:00 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac70001230 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:11:00 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:11:00 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:11:00 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:11:00.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:11:00 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:11:00 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca000a6a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:11:01 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:11:01 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:11:01 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:11:01.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:11:01 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:11:01 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:11:02 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:11:02 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c003870 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:11:02 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v793: 353 pgs: 353 active+clean; 41 MiB data, 258 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct  8 06:11:02 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:11:02 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac70001230 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:11:02 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:11:02 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:11:02 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:11:02.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:11:02 np0005475493 nova_compute[262220]: 2025-10-08 10:11:02.776 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:11:02 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  8 06:11:02 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  8 06:11:02 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:11:02 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90004790 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:11:03 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:11:03 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:11:03 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:11:03.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:11:03 np0005475493 ceph-mon[73572]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #54. Immutable memtables: 0.
Oct  8 06:11:03 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:11:03.217170) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  8 06:11:03 np0005475493 ceph-mon[73572]: rocksdb: [db/flush_job.cc:856] [default] [JOB 27] Flushing memtable with next log file: 54
Oct  8 06:11:03 np0005475493 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759918263217219, "job": 27, "event": "flush_started", "num_memtables": 1, "num_entries": 952, "num_deletes": 251, "total_data_size": 1483731, "memory_usage": 1513072, "flush_reason": "Manual Compaction"}
Oct  8 06:11:03 np0005475493 ceph-mon[73572]: rocksdb: [db/flush_job.cc:885] [default] [JOB 27] Level-0 flush table #55: started
Oct  8 06:11:03 np0005475493 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759918263225772, "cf_name": "default", "job": 27, "event": "table_file_creation", "file_number": 55, "file_size": 1448051, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 23835, "largest_seqno": 24786, "table_properties": {"data_size": 1443544, "index_size": 2095, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1349, "raw_key_size": 10469, "raw_average_key_size": 19, "raw_value_size": 1434284, "raw_average_value_size": 2711, "num_data_blocks": 94, "num_entries": 529, "num_filter_entries": 529, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759918189, "oldest_key_time": 1759918189, "file_creation_time": 1759918263, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fe81d9b-468a-4413-adf1-4e4bd83383d4", "db_session_id": "KN4HYS7VUCE6V85JIQOU", "orig_file_number": 55, "seqno_to_time_mapping": "N/A"}}
Oct  8 06:11:03 np0005475493 ceph-mon[73572]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 27] Flush lasted 8625 microseconds, and 3652 cpu microseconds.
Oct  8 06:11:03 np0005475493 ceph-mon[73572]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  8 06:11:03 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:11:03.225802) [db/flush_job.cc:967] [default] [JOB 27] Level-0 flush table #55: 1448051 bytes OK
Oct  8 06:11:03 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:11:03.225818) [db/memtable_list.cc:519] [default] Level-0 commit table #55 started
Oct  8 06:11:03 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:11:03.227087) [db/memtable_list.cc:722] [default] Level-0 commit table #55: memtable #1 done
Oct  8 06:11:03 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:11:03.227098) EVENT_LOG_v1 {"time_micros": 1759918263227094, "job": 27, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  8 06:11:03 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:11:03.227110) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  8 06:11:03 np0005475493 ceph-mon[73572]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 27] Try to delete WAL files size 1479252, prev total WAL file size 1479252, number of live WAL files 2.
Oct  8 06:11:03 np0005475493 ceph-mon[73572]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000051.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  8 06:11:03 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:11:03.227700) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031373537' seq:72057594037927935, type:22 .. '7061786F730032303039' seq:0, type:0; will stop at (end)
Oct  8 06:11:03 np0005475493 ceph-mon[73572]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 28] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  8 06:11:03 np0005475493 ceph-mon[73572]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 27 Base level 0, inputs: [55(1414KB)], [53(12MB)]
Oct  8 06:11:03 np0005475493 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759918263227772, "job": 28, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [55], "files_L6": [53], "score": -1, "input_data_size": 14531023, "oldest_snapshot_seqno": -1}
Oct  8 06:11:03 np0005475493 ceph-mon[73572]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 28] Generated table #56: 5369 keys, 12379342 bytes, temperature: kUnknown
Oct  8 06:11:03 np0005475493 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759918263292059, "cf_name": "default", "job": 28, "event": "table_file_creation", "file_number": 56, "file_size": 12379342, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12344196, "index_size": 20636, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13445, "raw_key_size": 137979, "raw_average_key_size": 25, "raw_value_size": 12247670, "raw_average_value_size": 2281, "num_data_blocks": 835, "num_entries": 5369, "num_filter_entries": 5369, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759916581, "oldest_key_time": 0, "file_creation_time": 1759918263, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fe81d9b-468a-4413-adf1-4e4bd83383d4", "db_session_id": "KN4HYS7VUCE6V85JIQOU", "orig_file_number": 56, "seqno_to_time_mapping": "N/A"}}
Oct  8 06:11:03 np0005475493 ceph-mon[73572]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  8 06:11:03 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:11:03.292269) [db/compaction/compaction_job.cc:1663] [default] [JOB 28] Compacted 1@0 + 1@6 files to L6 => 12379342 bytes
Oct  8 06:11:03 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:11:03.293454) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 225.9 rd, 192.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.4, 12.5 +0.0 blob) out(11.8 +0.0 blob), read-write-amplify(18.6) write-amplify(8.5) OK, records in: 5885, records dropped: 516 output_compression: NoCompression
Oct  8 06:11:03 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:11:03.293468) EVENT_LOG_v1 {"time_micros": 1759918263293461, "job": 28, "event": "compaction_finished", "compaction_time_micros": 64331, "compaction_time_cpu_micros": 26413, "output_level": 6, "num_output_files": 1, "total_output_size": 12379342, "num_input_records": 5885, "num_output_records": 5369, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  8 06:11:03 np0005475493 ceph-mon[73572]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000055.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  8 06:11:03 np0005475493 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759918263293751, "job": 28, "event": "table_file_deletion", "file_number": 55}
Oct  8 06:11:03 np0005475493 ceph-mon[73572]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000053.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  8 06:11:03 np0005475493 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759918263295962, "job": 28, "event": "table_file_deletion", "file_number": 53}
Oct  8 06:11:03 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:11:03.227621) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  8 06:11:03 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:11:03.296120) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  8 06:11:03 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:11:03.296125) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  8 06:11:03 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:11:03.296127) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  8 06:11:03 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:11:03.296129) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  8 06:11:03 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:11:03.296130) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  8 06:11:04 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:11:04 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca000a6a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:11:04 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:11:04 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v794: 353 pgs: 353 active+clean; 41 MiB data, 258 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct  8 06:11:04 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:11:04 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c003870 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:11:04 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:11:04 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:11:04 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:11:04.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:11:04 np0005475493 nova_compute[262220]: 2025-10-08 10:11:04.657 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:11:04 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:11:04 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac70004640 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:11:05 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:11:05 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:11:05 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:11:05.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:11:05 np0005475493 nova_compute[262220]: 2025-10-08 10:11:05.148 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:11:05 np0005475493 nova_compute[262220]: 2025-10-08 10:11:05.244 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:11:05 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:11:05] "GET /metrics HTTP/1.1" 200 48440 "" "Prometheus/2.51.0"
Oct  8 06:11:05 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:11:05] "GET /metrics HTTP/1.1" 200 48440 "" "Prometheus/2.51.0"
Oct  8 06:11:06 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:11:06 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90004790 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:11:06 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v795: 353 pgs: 353 active+clean; 41 MiB data, 258 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct  8 06:11:06 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:11:06 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90004790 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:11:06 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:11:06 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000034s ======
Oct  8 06:11:06 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:11:06.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct  8 06:11:06 np0005475493 podman[269857]: 2025-10-08 10:11:06.950903296 +0000 UTC m=+0.127401116 container health_status 750e81e9592502f2fa35d047c8ae9bd75eb38ed415148db558454f5281397e7f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct  8 06:11:06 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:11:06 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c003870 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:11:07 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:11:07.129Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:11:07 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:11:07 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:11:07 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:11:07.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:11:07 np0005475493 nova_compute[262220]: 2025-10-08 10:11:07.777 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:11:08 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:11:08 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac70004640 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:11:08 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v796: 353 pgs: 353 active+clean; 41 MiB data, 258 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct  8 06:11:08 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:11:08 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac70004640 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:11:08 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:11:08 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:11:08 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:11:08.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:11:08 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:11:08 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac70004640 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:11:09 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:11:09 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:11:09 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:11:09 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:11:09.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:11:09 np0005475493 nova_compute[262220]: 2025-10-08 10:11:09.608 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759918254.6060555, f49b788e-70d1-4bc2-9f90-381017f2b232 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  8 06:11:09 np0005475493 nova_compute[262220]: 2025-10-08 10:11:09.608 2 INFO nova.compute.manager [-] [instance: f49b788e-70d1-4bc2-9f90-381017f2b232] VM Stopped (Lifecycle Event)#033[00m
Oct  8 06:11:09 np0005475493 nova_compute[262220]: 2025-10-08 10:11:09.630 2 DEBUG nova.compute.manager [None req-e6220c05-7f4a-4b31-aeab-99262f396f92 - - - - - -] [instance: f49b788e-70d1-4bc2-9f90-381017f2b232] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  8 06:11:09 np0005475493 nova_compute[262220]: 2025-10-08 10:11:09.661 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:11:10 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:11:10 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c003870 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:11:10 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v797: 353 pgs: 353 active+clean; 41 MiB data, 258 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Oct  8 06:11:10 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:11:10 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac70004640 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:11:10 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:11:10 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:11:10 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:11:10.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:11:10 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:11:10 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac70004640 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:11:11 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:11:11 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:11:11 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:11:11.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:11:12 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:11:12 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac70004640 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:11:12 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v798: 353 pgs: 353 active+clean; 41 MiB data, 258 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct  8 06:11:12 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:11:12 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c003870 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:11:12 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:11:12 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:11:12 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:11:12.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:11:12 np0005475493 nova_compute[262220]: 2025-10-08 10:11:12.778 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:11:12 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:11:12 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac6c000b60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:11:13 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:11:13 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:11:13 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:11:13.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:11:13 np0005475493 podman[269895]: 2025-10-08 10:11:13.894970261 +0000 UTC m=+0.053695045 container health_status 96c17e36c3e57b043a764fc4d64e20610b8683e4881cc1857643eca7aeb37784 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible)
Oct  8 06:11:13 np0005475493 podman[269915]: 2025-10-08 10:11:13.983757347 +0000 UTC m=+0.057375055 container health_status 1ece418ccdf19a8019e8be8e665c938dc3c333817a211973536e2416f095c311 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  8 06:11:14 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:11:14 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac70004640 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:11:14 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:11:14 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v799: 353 pgs: 353 active+clean; 41 MiB data, 258 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct  8 06:11:14 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:11:14 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca000a6a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:11:14 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:11:14 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:11:14 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:11:14.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:11:14 np0005475493 nova_compute[262220]: 2025-10-08 10:11:14.699 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:11:14 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:11:14 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca000a6a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:11:15 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:11:15 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:11:15 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:11:15.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:11:15 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:11:15] "GET /metrics HTTP/1.1" 200 48440 "" "Prometheus/2.51.0"
Oct  8 06:11:15 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:11:15] "GET /metrics HTTP/1.1" 200 48440 "" "Prometheus/2.51.0"
Oct  8 06:11:16 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:11:16 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac6c0016a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:11:16 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v800: 353 pgs: 353 active+clean; 41 MiB data, 258 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct  8 06:11:16 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:11:16 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac70004640 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:11:16 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:11:16 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:11:16 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:11:16.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:11:16 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:11:16 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c003870 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:11:17 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:11:17.129Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct  8 06:11:17 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:11:17.129Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct  8 06:11:17 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:11:17.130Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:11:17 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:11:17 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:11:17 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:11:17.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:11:17 np0005475493 nova_compute[262220]: 2025-10-08 10:11:17.779 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:11:17 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  8 06:11:17 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  8 06:11:17 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 06:11:17 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 06:11:18 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:11:18 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca000a6a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:11:18 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v801: 353 pgs: 353 active+clean; 41 MiB data, 258 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct  8 06:11:18 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 06:11:18 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 06:11:18 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 06:11:18 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 06:11:18 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:11:18 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac6c0016a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:11:18 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:11:18 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:11:18 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:11:18.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:11:18 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:11:18 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac70004640 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:11:19 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:11:19 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:11:19 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:11:19 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:11:19.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:11:19 np0005475493 nova_compute[262220]: 2025-10-08 10:11:19.701 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:11:20 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:11:20 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac70004640 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:11:20 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v802: 353 pgs: 353 active+clean; 41 MiB data, 258 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Oct  8 06:11:20 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:11:20 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca000a6a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:11:20 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:11:20 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:11:20 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:11:20.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:11:20 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:11:20 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac6c0016a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:11:21 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:11:21 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:11:21 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:11:21.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:11:21 np0005475493 nova_compute[262220]: 2025-10-08 10:11:21.702 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:11:21 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:11:21.704 163175 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=5, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '2a:d6:ca', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'b2:8b:1e:40:84:a3'}, ipsec=False) old=SB_Global(nb_cfg=4) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  8 06:11:21 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:11:21.704 163175 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  8 06:11:22 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:11:22 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac6c0016a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:11:22 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v803: 353 pgs: 353 active+clean; 41 MiB data, 258 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct  8 06:11:22 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:11:22 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c003870 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:11:22 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:11:22 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:11:22 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:11:22.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:11:22 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:11:22.706 163175 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=26869918-b723-425c-a2e1-0d697f3d0fec, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '5'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  8 06:11:22 np0005475493 nova_compute[262220]: 2025-10-08 10:11:22.782 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:11:22 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:11:22 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca000a6a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:11:23 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:11:23 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000034s ======
Oct  8 06:11:23 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:11:23.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct  8 06:11:24 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:11:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:11:24 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac6c0016a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:11:24 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v804: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 34 op/s
Oct  8 06:11:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:11:24 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac70004640 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:11:24 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:11:24 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:11:24 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:11:24.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:11:24 np0005475493 nova_compute[262220]: 2025-10-08 10:11:24.702 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:11:24 np0005475493 podman[269946]: 2025-10-08 10:11:24.886380581 +0000 UTC m=+0.047842642 container health_status 2ac58bf9d2c7421f24478941b0f23af12cc30298ac84467db16bf52ecb1157c3 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct  8 06:11:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:11:24 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c003870 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:11:25 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:11:25 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:11:25 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:11:25.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:11:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:11:25] "GET /metrics HTTP/1.1" 200 48435 "" "Prometheus/2.51.0"
Oct  8 06:11:25 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:11:25] "GET /metrics HTTP/1.1" 200 48435 "" "Prometheus/2.51.0"
Oct  8 06:11:26 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:11:26 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca000a6a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:11:26 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v805: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 34 op/s
Oct  8 06:11:26 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:11:26 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac6c0032f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:11:26 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:11:26 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:11:26 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:11:26.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:11:26 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:11:26 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac70004640 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:11:27 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:11:27.130Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct  8 06:11:27 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:11:27.131Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:11:27 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:11:27 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:11:27 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:11:27.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:11:27 np0005475493 nova_compute[262220]: 2025-10-08 10:11:27.784 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:11:28 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:11:28 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c003870 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:11:28 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v806: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 34 op/s
Oct  8 06:11:28 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:11:28 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca000a6a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:11:28 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:11:28 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:11:28 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:11:28.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:11:28 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:11:28 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac6c0032f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:11:29 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:11:29 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:11:29 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:11:29 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:11:29.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:11:29 np0005475493 nova_compute[262220]: 2025-10-08 10:11:29.705 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:11:30 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:11:30 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac70004640 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:11:30 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v807: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 34 op/s
Oct  8 06:11:30 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:11:30 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac70004640 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:11:30 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:11:30 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:11:30 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:11:30.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:11:30 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:11:30 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac70004640 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:11:31 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:11:31 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:11:31 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:11:31.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:11:32 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:11:32 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac6c003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:11:32 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v808: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 34 op/s
Oct  8 06:11:32 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:11:32 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c003870 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:11:32 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:11:32 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:11:32 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:11:32.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:11:32 np0005475493 nova_compute[262220]: 2025-10-08 10:11:32.785 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:11:32 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  8 06:11:32 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  8 06:11:32 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:11:32 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c003870 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:11:33 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:11:33 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:11:33 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:11:33.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:11:34 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:11:34 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:11:34 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c003870 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:11:34 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v809: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 1.8 MiB/s wr, 107 op/s
Oct  8 06:11:34 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:11:34 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90004790 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:11:34 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:11:34 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:11:34 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:11:34.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:11:34 np0005475493 nova_compute[262220]: 2025-10-08 10:11:34.707 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:11:34 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:11:34 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac6c003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:11:35 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:11:35 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:11:35 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:11:35.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:11:35 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-haproxy-nfs-cephfs-compute-0-cwhopp[96588]: [WARNING] 280/101135 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct  8 06:11:35 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:11:35] "GET /metrics HTTP/1.1" 200 48463 "" "Prometheus/2.51.0"
Oct  8 06:11:35 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:11:35] "GET /metrics HTTP/1.1" 200 48463 "" "Prometheus/2.51.0"
Oct  8 06:11:36 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:11:36 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac6c003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:11:36 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v810: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct  8 06:11:36 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:11:36 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac70004640 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:11:36 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:11:36 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:11:36 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:11:36.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:11:36 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:11:36 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90004790 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:11:37 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:11:37.132Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:11:37 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:11:37 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:11:37 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:11:37.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:11:37 np0005475493 nova_compute[262220]: 2025-10-08 10:11:37.787 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:11:37 np0005475493 podman[270004]: 2025-10-08 10:11:37.943827564 +0000 UTC m=+0.109015762 container health_status 750e81e9592502f2fa35d047c8ae9bd75eb38ed415148db558454f5281397e7f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_controller)
Oct  8 06:11:38 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v811: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct  8 06:11:38 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:11:38 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac6c003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:11:38 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:11:38 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c003870 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:11:38 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:11:38 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:11:38 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:11:38.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:11:38 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:11:38 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac70004640 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:11:39 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:11:39 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:11:39 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:11:39 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:11:39.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:11:39 np0005475493 nova_compute[262220]: 2025-10-08 10:11:39.708 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:11:39 np0005475493 nova_compute[262220]: 2025-10-08 10:11:39.886 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:11:39 np0005475493 nova_compute[262220]: 2025-10-08 10:11:39.887 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:11:39 np0005475493 nova_compute[262220]: 2025-10-08 10:11:39.887 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:11:40 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v812: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Oct  8 06:11:40 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:11:40 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac6c003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:11:40 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:11:40 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90004790 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:11:40 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:11:40 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:11:40 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:11:40.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:11:40 np0005475493 ovn_controller[153187]: 2025-10-08T10:11:40Z|00036|memory_trim|INFO|Detected inactivity (last active 30003 ms ago): trimming memory
Oct  8 06:11:40 np0005475493 nova_compute[262220]: 2025-10-08 10:11:40.886 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:11:40 np0005475493 nova_compute[262220]: 2025-10-08 10:11:40.887 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  8 06:11:40 np0005475493 nova_compute[262220]: 2025-10-08 10:11:40.887 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:11:40 np0005475493 nova_compute[262220]: 2025-10-08 10:11:40.913 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  8 06:11:40 np0005475493 nova_compute[262220]: 2025-10-08 10:11:40.914 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  8 06:11:40 np0005475493 nova_compute[262220]: 2025-10-08 10:11:40.914 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  8 06:11:40 np0005475493 nova_compute[262220]: 2025-10-08 10:11:40.914 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  8 06:11:40 np0005475493 nova_compute[262220]: 2025-10-08 10:11:40.914 2 DEBUG oslo_concurrency.processutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  8 06:11:41 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:11:40 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c003870 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:11:41 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:11:41 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:11:41 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:11:41.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:11:41 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct  8 06:11:41 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/337810784' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  8 06:11:41 np0005475493 nova_compute[262220]: 2025-10-08 10:11:41.404 2 DEBUG oslo_concurrency.processutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  8 06:11:41 np0005475493 nova_compute[262220]: 2025-10-08 10:11:41.593 2 WARNING nova.virt.libvirt.driver [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  8 06:11:41 np0005475493 nova_compute[262220]: 2025-10-08 10:11:41.594 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4587MB free_disk=59.96738052368164GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  8 06:11:41 np0005475493 nova_compute[262220]: 2025-10-08 10:11:41.594 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  8 06:11:41 np0005475493 nova_compute[262220]: 2025-10-08 10:11:41.594 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  8 06:11:41 np0005475493 nova_compute[262220]: 2025-10-08 10:11:41.672 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  8 06:11:41 np0005475493 nova_compute[262220]: 2025-10-08 10:11:41.673 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  8 06:11:41 np0005475493 nova_compute[262220]: 2025-10-08 10:11:41.689 2 DEBUG oslo_concurrency.processutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  8 06:11:42 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v813: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct  8 06:11:42 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:11:42 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac70004640 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:11:42 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct  8 06:11:42 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/987466161' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  8 06:11:42 np0005475493 nova_compute[262220]: 2025-10-08 10:11:42.161 2 DEBUG oslo_concurrency.processutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  8 06:11:42 np0005475493 nova_compute[262220]: 2025-10-08 10:11:42.166 2 DEBUG nova.compute.provider_tree [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Inventory has not changed in ProviderTree for provider: 62e4b021-d3ae-43f9-883d-805e2c7d21a2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  8 06:11:42 np0005475493 nova_compute[262220]: 2025-10-08 10:11:42.191 2 DEBUG nova.scheduler.client.report [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Inventory has not changed for provider 62e4b021-d3ae-43f9-883d-805e2c7d21a2 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  8 06:11:42 np0005475493 nova_compute[262220]: 2025-10-08 10:11:42.212 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  8 06:11:42 np0005475493 nova_compute[262220]: 2025-10-08 10:11:42.213 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.619s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  8 06:11:42 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:11:42 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac6c003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:11:42 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:11:42 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:11:42 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:11:42.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:11:42 np0005475493 nova_compute[262220]: 2025-10-08 10:11:42.788 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:11:42 np0005475493 systemd[1]: virtsecretd.service: Deactivated successfully.
Oct  8 06:11:43 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:11:43 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90004790 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:11:43 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:11:43 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:11:43 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:11:43.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:11:43 np0005475493 nova_compute[262220]: 2025-10-08 10:11:43.214 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:11:43 np0005475493 nova_compute[262220]: 2025-10-08 10:11:43.214 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:11:43 np0005475493 nova_compute[262220]: 2025-10-08 10:11:43.215 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:11:43 np0005475493 nova_compute[262220]: 2025-10-08 10:11:43.887 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:11:43 np0005475493 nova_compute[262220]: 2025-10-08 10:11:43.888 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  8 06:11:43 np0005475493 nova_compute[262220]: 2025-10-08 10:11:43.888 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  8 06:11:43 np0005475493 nova_compute[262220]: 2025-10-08 10:11:43.906 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  8 06:11:44 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:11:44 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v814: 353 pgs: 353 active+clean; 121 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 138 op/s
Oct  8 06:11:44 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:11:44 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac940032c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:11:44 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:11:44 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac6c003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:11:44 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:11:44 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:11:44 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:11:44.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:11:44 np0005475493 nova_compute[262220]: 2025-10-08 10:11:44.711 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:11:44 np0005475493 podman[270087]: 2025-10-08 10:11:44.903300462 +0000 UTC m=+0.059741804 container health_status 1ece418ccdf19a8019e8be8e665c938dc3c333817a211973536e2416f095c311 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  8 06:11:44 np0005475493 podman[270088]: 2025-10-08 10:11:44.916607469 +0000 UTC m=+0.063306580 container health_status 96c17e36c3e57b043a764fc4d64e20610b8683e4881cc1857643eca7aeb37784 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct  8 06:11:45 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:11:45 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac6c003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:11:45 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:11:45 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:11:45 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:11:45.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:11:45 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:11:45] "GET /metrics HTTP/1.1" 200 48463 "" "Prometheus/2.51.0"
Oct  8 06:11:45 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:11:45] "GET /metrics HTTP/1.1" 200 48463 "" "Prometheus/2.51.0"
Oct  8 06:11:46 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v815: 353 pgs: 353 active+clean; 121 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 367 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct  8 06:11:46 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:11:46 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90004790 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:11:46 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:11:46 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac940032c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:11:46 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:11:46 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:11:46 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:11:46.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:11:47 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:11:47 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac6c003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:11:47 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:11:47.133Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct  8 06:11:47 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:11:47.133Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:11:47 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:11:47 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:11:47 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:11:47.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:11:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] Optimize plan auto_2025-10-08_10:11:47
Oct  8 06:11:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  8 06:11:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] do_upmap
Oct  8 06:11:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'images', 'default.rgw.meta', 'volumes', '.mgr', '.nfs', 'vms', 'default.rgw.log', 'default.rgw.control', '.rgw.root', 'backups', 'cephfs.cephfs.data']
Oct  8 06:11:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] prepared 0/10 upmap changes
Oct  8 06:11:47 np0005475493 nova_compute[262220]: 2025-10-08 10:11:47.792 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:11:47 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  8 06:11:47 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  8 06:11:47 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 06:11:47 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 06:11:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] _maybe_adjust
Oct  8 06:11:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:11:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  8 06:11:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:11:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00075666583235658 of space, bias 1.0, pg target 0.226999749706974 quantized to 32 (current 32)
Oct  8 06:11:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:11:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 06:11:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:11:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 06:11:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:11:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Oct  8 06:11:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:11:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  8 06:11:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:11:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 06:11:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:11:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct  8 06:11:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:11:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  8 06:11:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:11:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 06:11:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:11:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  8 06:11:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:11:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  8 06:11:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  8 06:11:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  8 06:11:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  8 06:11:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  8 06:11:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  8 06:11:48 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v816: 353 pgs: 353 active+clean; 121 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 367 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct  8 06:11:48 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:11:48 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c003870 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:11:48 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 06:11:48 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 06:11:48 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 06:11:48 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 06:11:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  8 06:11:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  8 06:11:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  8 06:11:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  8 06:11:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  8 06:11:48 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:11:48 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90004790 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:11:48 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:11:48 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:11:48 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:11:48.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:11:49 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:11:49 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac940032c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:11:49 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:11:49 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:11:49 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:11:49 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:11:49.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:11:49 np0005475493 nova_compute[262220]: 2025-10-08 10:11:49.772 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:11:50 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v817: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 368 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Oct  8 06:11:50 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:11:50 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac6c003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:11:50 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:11:50 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c004970 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:11:50 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:11:50 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:11:50 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:11:50.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:11:51 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:11:51 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c004970 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:11:51 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:11:51 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:11:51 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:11:51.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:11:52 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v818: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 368 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Oct  8 06:11:52 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:11:52 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c004970 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:11:52 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:11:52 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac6c003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:11:52 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:11:52 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:11:52 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:11:52.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:11:52 np0005475493 nova_compute[262220]: 2025-10-08 10:11:52.796 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:11:53 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:11:53 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c004970 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:11:53 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:11:53 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:11:53 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:11:53.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:11:54 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:11:54 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v819: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 369 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Oct  8 06:11:54 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:11:54 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90004790 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:11:54 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:11:54 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac940032c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:11:54 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:11:54 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:11:54 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:11:54.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:11:54 np0005475493 nova_compute[262220]: 2025-10-08 10:11:54.775 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:11:55 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:11:55 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac6c003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:11:55 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:11:55 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:11:55 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:11:55.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:11:55 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:11:55] "GET /metrics HTTP/1.1" 200 48465 "" "Prometheus/2.51.0"
Oct  8 06:11:55 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:11:55] "GET /metrics HTTP/1.1" 200 48465 "" "Prometheus/2.51.0"
Oct  8 06:11:55 np0005475493 podman[270164]: 2025-10-08 10:11:55.91141358 +0000 UTC m=+0.073975952 container health_status 2ac58bf9d2c7421f24478941b0f23af12cc30298ac84467db16bf52ecb1157c3 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct  8 06:11:56 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v820: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 12 KiB/s wr, 1 op/s
Oct  8 06:11:56 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:11:56 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c004970 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:11:56 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:11:56 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90004790 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:11:56 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:11:56 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:11:56 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:11:56.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:11:57 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:11:57 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac940032c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:11:57 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:11:57.134Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct  8 06:11:57 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:11:57.134Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct  8 06:11:57 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:11:57.134Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct  8 06:11:57 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:11:57 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:11:57 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:11:57.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:11:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:11:57.409 163175 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  8 06:11:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:11:57.410 163175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  8 06:11:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:11:57.410 163175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  8 06:11:57 np0005475493 nova_compute[262220]: 2025-10-08 10:11:57.798 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:11:58 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v821: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 12 KiB/s wr, 1 op/s
Oct  8 06:11:58 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:11:58 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac6c003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:11:58 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:11:58 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c004970 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:11:58 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:11:58 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:11:58 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:11:58.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:11:59 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:11:59 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac900047b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:11:59 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:11:59 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:11:59 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:11:59 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:11:59.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:11:59 np0005475493 nova_compute[262220]: 2025-10-08 10:11:59.778 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:12:00 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v822: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 14 KiB/s wr, 1 op/s
Oct  8 06:12:00 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:12:00 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac940032c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:12:00 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:12:00 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac6c003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:12:00 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:12:00 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:12:00 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:12:00.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:12:01 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:12:01 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c004970 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:12:01 np0005475493 podman[270311]: 2025-10-08 10:12:01.08127557 +0000 UTC m=+0.060937743 container exec 01c666addd8584f02eb7d29040d97e45d8cb7f834fd134029c1f7cca550aeb9d (image=quay.io/ceph/ceph:v19, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True)
Oct  8 06:12:01 np0005475493 podman[270311]: 2025-10-08 10:12:01.169206099 +0000 UTC m=+0.148868272 container exec_died 01c666addd8584f02eb7d29040d97e45d8cb7f834fd134029c1f7cca550aeb9d (image=quay.io/ceph/ceph:v19, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  8 06:12:01 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:12:01 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:12:01 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:12:01.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:12:01 np0005475493 podman[270446]: 2025-10-08 10:12:01.655158522 +0000 UTC m=+0.055801234 container exec 4eb6b712d09e2820826e6f961f0aba1bff09f13eee1664dbb03edca7615a708b (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  8 06:12:01 np0005475493 podman[270446]: 2025-10-08 10:12:01.661730398 +0000 UTC m=+0.062373110 container exec_died 4eb6b712d09e2820826e6f961f0aba1bff09f13eee1664dbb03edca7615a708b (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  8 06:12:01 np0005475493 podman[270519]: 2025-10-08 10:12:01.921567894 +0000 UTC m=+0.045826546 container exec ff96eb6387ce0570d856673e2f9d2de072dc140ca0ed9b744a95fbf6029b655c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  8 06:12:01 np0005475493 podman[270519]: 2025-10-08 10:12:01.9333301 +0000 UTC m=+0.057588742 container exec_died ff96eb6387ce0570d856673e2f9d2de072dc140ca0ed9b744a95fbf6029b655c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Oct  8 06:12:02 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v823: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 852 B/s rd, 13 KiB/s wr, 0 op/s
Oct  8 06:12:02 np0005475493 podman[270590]: 2025-10-08 10:12:02.1260017 +0000 UTC m=+0.052339021 container exec 1c113ffcb0d41c6337c256a4892f13e82bdea6ca8062b10324516aa631301ea5 (image=quay.io/ceph/haproxy:2.3, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-haproxy-nfs-cephfs-compute-0-cwhopp)
Oct  8 06:12:02 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:12:02 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac900047d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:12:02 np0005475493 podman[270590]: 2025-10-08 10:12:02.140386572 +0000 UTC m=+0.066723853 container exec_died 1c113ffcb0d41c6337c256a4892f13e82bdea6ca8062b10324516aa631301ea5 (image=quay.io/ceph/haproxy:2.3, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-haproxy-nfs-cephfs-compute-0-cwhopp)
Oct  8 06:12:02 np0005475493 podman[270655]: 2025-10-08 10:12:02.382672652 +0000 UTC m=+0.052828977 container exec 5814788fb4b63196d097e0c60082efdca22825a3ba337ddec5c99170a1bfd89d (image=quay.io/ceph/keepalived:2.2.4, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-keepalived-nfs-cephfs-compute-0-ekerbw, build-date=2023-02-22T09:23:20, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.openshift.tags=Ceph keepalived, description=keepalived for Ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.display-name=Keepalived on RHEL 9, com.redhat.component=keepalived-container, vcs-type=git, distribution-scope=public, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, version=2.2.4, summary=Provides keepalived on RHEL 9 for Ceph., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.openshift.expose-services=, name=keepalived, architecture=x86_64, io.buildah.version=1.28.2, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1793)
Oct  8 06:12:02 np0005475493 podman[270655]: 2025-10-08 10:12:02.395342988 +0000 UTC m=+0.065499303 container exec_died 5814788fb4b63196d097e0c60082efdca22825a3ba337ddec5c99170a1bfd89d (image=quay.io/ceph/keepalived:2.2.4, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-keepalived-nfs-cephfs-compute-0-ekerbw, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=Ceph keepalived, com.redhat.component=keepalived-container, io.openshift.expose-services=, name=keepalived, io.buildah.version=1.28.2, io.k8s.display-name=Keepalived on RHEL 9, release=1793, vcs-type=git, version=2.2.4, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2023-02-22T09:23:20, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=keepalived for Ceph, distribution-scope=public, summary=Provides keepalived on RHEL 9 for Ceph., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, architecture=x86_64, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vendor=Red Hat, Inc.)
Oct  8 06:12:02 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:12:02 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac940032c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:12:02 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:12:02 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:12:02 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:12:02.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:12:02 np0005475493 podman[270721]: 2025-10-08 10:12:02.590761007 +0000 UTC m=+0.048559836 container exec feb968c21a5fda3cd2e7b86379d5576dbe3dadfd4cb62b92318e18ddbaaafb75 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  8 06:12:02 np0005475493 podman[270721]: 2025-10-08 10:12:02.61886602 +0000 UTC m=+0.076664829 container exec_died feb968c21a5fda3cd2e7b86379d5576dbe3dadfd4cb62b92318e18ddbaaafb75 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  8 06:12:02 np0005475493 podman[270795]: 2025-10-08 10:12:02.796514317 +0000 UTC m=+0.041804385 container exec 73efcf403aa6fcb4b04e1ef714b7bf4169b576e4a4c1b34cdc0b458f62db65fd (image=quay.io/ceph/grafana:10.4.0, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Oct  8 06:12:02 np0005475493 nova_compute[262220]: 2025-10-08 10:12:02.800 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:12:02 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  8 06:12:02 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  8 06:12:02 np0005475493 podman[270795]: 2025-10-08 10:12:02.989968272 +0000 UTC m=+0.235258330 container exec_died 73efcf403aa6fcb4b04e1ef714b7bf4169b576e4a4c1b34cdc0b458f62db65fd (image=quay.io/ceph/grafana:10.4.0, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Oct  8 06:12:03 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:12:03 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac6c003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:12:03 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:12:03 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:12:03 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:12:03.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:12:03 np0005475493 podman[270907]: 2025-10-08 10:12:03.339652409 +0000 UTC m=+0.066247038 container exec 50d7285a7766fc915688c3cc737e186f9ad2230d7b0b7e759880375ad996bb6c (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  8 06:12:03 np0005475493 podman[270907]: 2025-10-08 10:12:03.371353529 +0000 UTC m=+0.097948138 container exec_died 50d7285a7766fc915688c3cc737e186f9ad2230d7b0b7e759880375ad996bb6c (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  8 06:12:03 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  8 06:12:03 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:12:03 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  8 06:12:03 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:12:04 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  8 06:12:04 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  8 06:12:04 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct  8 06:12:04 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  8 06:12:04 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct  8 06:12:04 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:12:04 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct  8 06:12:04 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:12:04 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct  8 06:12:04 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  8 06:12:04 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct  8 06:12:04 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  8 06:12:04 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  8 06:12:04 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  8 06:12:04 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:12:04 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v824: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 852 B/s rd, 14 KiB/s wr, 0 op/s
Oct  8 06:12:04 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:12:04 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c004970 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:12:04 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:12:04 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:12:04 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  8 06:12:04 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:12:04 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:12:04 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  8 06:12:04 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:12:04 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac900047f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:12:04 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:12:04 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:12:04 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:12:04.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:12:04 np0005475493 podman[271127]: 2025-10-08 10:12:04.555479048 +0000 UTC m=+0.033985326 container create 31a48f51a8fe22041c90ca7083805d18274ea53c4bc6629633be2ae20358c501 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_galileo, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  8 06:12:04 np0005475493 systemd[1]: Started libpod-conmon-31a48f51a8fe22041c90ca7083805d18274ea53c4bc6629633be2ae20358c501.scope.
Oct  8 06:12:04 np0005475493 systemd[1]: Started libcrun container.
Oct  8 06:12:04 np0005475493 podman[271127]: 2025-10-08 10:12:04.634185214 +0000 UTC m=+0.112691512 container init 31a48f51a8fe22041c90ca7083805d18274ea53c4bc6629633be2ae20358c501 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_galileo, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  8 06:12:04 np0005475493 podman[271127]: 2025-10-08 10:12:04.540521088 +0000 UTC m=+0.019027386 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 06:12:04 np0005475493 podman[271127]: 2025-10-08 10:12:04.642180726 +0000 UTC m=+0.120687004 container start 31a48f51a8fe22041c90ca7083805d18274ea53c4bc6629633be2ae20358c501 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_galileo, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  8 06:12:04 np0005475493 xenodochial_galileo[271143]: 167 167
Oct  8 06:12:04 np0005475493 systemd[1]: libpod-31a48f51a8fe22041c90ca7083805d18274ea53c4bc6629633be2ae20358c501.scope: Deactivated successfully.
Oct  8 06:12:04 np0005475493 podman[271127]: 2025-10-08 10:12:04.650732917 +0000 UTC m=+0.129239245 container attach 31a48f51a8fe22041c90ca7083805d18274ea53c4bc6629633be2ae20358c501 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_galileo, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  8 06:12:04 np0005475493 podman[271127]: 2025-10-08 10:12:04.652463825 +0000 UTC m=+0.130970133 container died 31a48f51a8fe22041c90ca7083805d18274ea53c4bc6629633be2ae20358c501 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_galileo, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct  8 06:12:04 np0005475493 systemd[1]: var-lib-containers-storage-overlay-47634be7e7d183265b59f1378c80519fc6df6accbed64e0480061fb1c5f03ed6-merged.mount: Deactivated successfully.
Oct  8 06:12:04 np0005475493 podman[271127]: 2025-10-08 10:12:04.706734737 +0000 UTC m=+0.185241055 container remove 31a48f51a8fe22041c90ca7083805d18274ea53c4bc6629633be2ae20358c501 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_galileo, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  8 06:12:04 np0005475493 systemd[1]: libpod-conmon-31a48f51a8fe22041c90ca7083805d18274ea53c4bc6629633be2ae20358c501.scope: Deactivated successfully.
Oct  8 06:12:04 np0005475493 nova_compute[262220]: 2025-10-08 10:12:04.779 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:12:04 np0005475493 podman[271169]: 2025-10-08 10:12:04.87759257 +0000 UTC m=+0.046628573 container create 1a6fecc0aabb7898a67b5eaa243869b28a430592e55ad1299e177af25bfbe4a9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_herschel, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct  8 06:12:04 np0005475493 systemd[1]: Started libpod-conmon-1a6fecc0aabb7898a67b5eaa243869b28a430592e55ad1299e177af25bfbe4a9.scope.
Oct  8 06:12:04 np0005475493 systemd[1]: Started libcrun container.
Oct  8 06:12:04 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95ebecc99f7ec47733021879dfe9ff4a60cd145c8096a68f14f5a70846b3be54/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  8 06:12:04 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95ebecc99f7ec47733021879dfe9ff4a60cd145c8096a68f14f5a70846b3be54/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 06:12:04 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95ebecc99f7ec47733021879dfe9ff4a60cd145c8096a68f14f5a70846b3be54/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 06:12:04 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95ebecc99f7ec47733021879dfe9ff4a60cd145c8096a68f14f5a70846b3be54/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  8 06:12:04 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95ebecc99f7ec47733021879dfe9ff4a60cd145c8096a68f14f5a70846b3be54/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  8 06:12:04 np0005475493 podman[271169]: 2025-10-08 10:12:04.857111078 +0000 UTC m=+0.026147101 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 06:12:04 np0005475493 podman[271169]: 2025-10-08 10:12:04.965366034 +0000 UTC m=+0.134402057 container init 1a6fecc0aabb7898a67b5eaa243869b28a430592e55ad1299e177af25bfbe4a9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_herschel, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Oct  8 06:12:04 np0005475493 podman[271169]: 2025-10-08 10:12:04.97258873 +0000 UTC m=+0.141624723 container start 1a6fecc0aabb7898a67b5eaa243869b28a430592e55ad1299e177af25bfbe4a9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_herschel, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Oct  8 06:12:04 np0005475493 podman[271169]: 2025-10-08 10:12:04.976127497 +0000 UTC m=+0.145163530 container attach 1a6fecc0aabb7898a67b5eaa243869b28a430592e55ad1299e177af25bfbe4a9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_herschel, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  8 06:12:05 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:12:05 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac940032c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:12:05 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:12:05 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:12:05 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:12:05.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:12:05 np0005475493 clever_herschel[271185]: --> passed data devices: 0 physical, 1 LVM
Oct  8 06:12:05 np0005475493 clever_herschel[271185]: --> All data devices are unavailable
Oct  8 06:12:05 np0005475493 systemd[1]: libpod-1a6fecc0aabb7898a67b5eaa243869b28a430592e55ad1299e177af25bfbe4a9.scope: Deactivated successfully.
Oct  8 06:12:05 np0005475493 podman[271169]: 2025-10-08 10:12:05.290772924 +0000 UTC m=+0.459808937 container died 1a6fecc0aabb7898a67b5eaa243869b28a430592e55ad1299e177af25bfbe4a9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_herschel, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True)
Oct  8 06:12:05 np0005475493 systemd[1]: var-lib-containers-storage-overlay-95ebecc99f7ec47733021879dfe9ff4a60cd145c8096a68f14f5a70846b3be54-merged.mount: Deactivated successfully.
Oct  8 06:12:05 np0005475493 podman[271169]: 2025-10-08 10:12:05.343284688 +0000 UTC m=+0.512320691 container remove 1a6fecc0aabb7898a67b5eaa243869b28a430592e55ad1299e177af25bfbe4a9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_herschel, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  8 06:12:05 np0005475493 systemd[1]: libpod-conmon-1a6fecc0aabb7898a67b5eaa243869b28a430592e55ad1299e177af25bfbe4a9.scope: Deactivated successfully.
Oct  8 06:12:05 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:12:05] "GET /metrics HTTP/1.1" 200 48460 "" "Prometheus/2.51.0"
Oct  8 06:12:05 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:12:05] "GET /metrics HTTP/1.1" 200 48460 "" "Prometheus/2.51.0"
Oct  8 06:12:05 np0005475493 podman[271304]: 2025-10-08 10:12:05.906272923 +0000 UTC m=+0.047728839 container create 6c7fba861b48e35903464613a87ba5b16165552528190acb54501bbb2675edc2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_margulis, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  8 06:12:05 np0005475493 systemd[1]: Started libpod-conmon-6c7fba861b48e35903464613a87ba5b16165552528190acb54501bbb2675edc2.scope.
Oct  8 06:12:05 np0005475493 systemd[1]: Started libcrun container.
Oct  8 06:12:05 np0005475493 podman[271304]: 2025-10-08 10:12:05.977354878 +0000 UTC m=+0.118810824 container init 6c7fba861b48e35903464613a87ba5b16165552528190acb54501bbb2675edc2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_margulis, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  8 06:12:05 np0005475493 podman[271304]: 2025-10-08 10:12:05.887333811 +0000 UTC m=+0.028789777 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 06:12:05 np0005475493 podman[271304]: 2025-10-08 10:12:05.983676965 +0000 UTC m=+0.125132881 container start 6c7fba861b48e35903464613a87ba5b16165552528190acb54501bbb2675edc2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_margulis, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  8 06:12:05 np0005475493 podman[271304]: 2025-10-08 10:12:05.987388747 +0000 UTC m=+0.128844693 container attach 6c7fba861b48e35903464613a87ba5b16165552528190acb54501bbb2675edc2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_margulis, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True)
Oct  8 06:12:05 np0005475493 focused_margulis[271322]: 167 167
Oct  8 06:12:05 np0005475493 systemd[1]: libpod-6c7fba861b48e35903464613a87ba5b16165552528190acb54501bbb2675edc2.scope: Deactivated successfully.
Oct  8 06:12:05 np0005475493 conmon[271322]: conmon 6c7fba861b48e3590346 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6c7fba861b48e35903464613a87ba5b16165552528190acb54501bbb2675edc2.scope/container/memory.events
Oct  8 06:12:05 np0005475493 podman[271304]: 2025-10-08 10:12:05.991852154 +0000 UTC m=+0.133308180 container died 6c7fba861b48e35903464613a87ba5b16165552528190acb54501bbb2675edc2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_margulis, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True)
Oct  8 06:12:06 np0005475493 systemd[1]: var-lib-containers-storage-overlay-ecfe23675fb3b0d4a819316cdf7cfef4259950e57586acf0603e22ef267147fa-merged.mount: Deactivated successfully.
Oct  8 06:12:06 np0005475493 podman[271304]: 2025-10-08 10:12:06.037358439 +0000 UTC m=+0.178814345 container remove 6c7fba861b48e35903464613a87ba5b16165552528190acb54501bbb2675edc2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_margulis, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  8 06:12:06 np0005475493 systemd[1]: libpod-conmon-6c7fba861b48e35903464613a87ba5b16165552528190acb54501bbb2675edc2.scope: Deactivated successfully.
Oct  8 06:12:06 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v825: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 3.0 KiB/s wr, 0 op/s
Oct  8 06:12:06 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:12:06 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac6c003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:12:06 np0005475493 podman[271346]: 2025-10-08 10:12:06.203237968 +0000 UTC m=+0.044958998 container create 3a5b5de9b1ba9fb735607356f1890ac7c5f1cde190c2359c261e4317d4e6c64b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_euler, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct  8 06:12:06 np0005475493 systemd[1]: Started libpod-conmon-3a5b5de9b1ba9fb735607356f1890ac7c5f1cde190c2359c261e4317d4e6c64b.scope.
Oct  8 06:12:06 np0005475493 systemd[1]: Started libcrun container.
Oct  8 06:12:06 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e23dc5210568349a80673d8deb732ffb1a18cd09a1839a678d66c0c8877390f1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  8 06:12:06 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e23dc5210568349a80673d8deb732ffb1a18cd09a1839a678d66c0c8877390f1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 06:12:06 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e23dc5210568349a80673d8deb732ffb1a18cd09a1839a678d66c0c8877390f1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 06:12:06 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e23dc5210568349a80673d8deb732ffb1a18cd09a1839a678d66c0c8877390f1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  8 06:12:06 np0005475493 podman[271346]: 2025-10-08 10:12:06.263089334 +0000 UTC m=+0.104810394 container init 3a5b5de9b1ba9fb735607356f1890ac7c5f1cde190c2359c261e4317d4e6c64b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_euler, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct  8 06:12:06 np0005475493 podman[271346]: 2025-10-08 10:12:06.271070497 +0000 UTC m=+0.112791527 container start 3a5b5de9b1ba9fb735607356f1890ac7c5f1cde190c2359c261e4317d4e6c64b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_euler, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct  8 06:12:06 np0005475493 podman[271346]: 2025-10-08 10:12:06.27514347 +0000 UTC m=+0.116864520 container attach 3a5b5de9b1ba9fb735607356f1890ac7c5f1cde190c2359c261e4317d4e6c64b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_euler, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Oct  8 06:12:06 np0005475493 podman[271346]: 2025-10-08 10:12:06.183268132 +0000 UTC m=+0.024989182 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 06:12:06 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:12:06 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac6c003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:12:06 np0005475493 inspiring_euler[271363]: {
Oct  8 06:12:06 np0005475493 inspiring_euler[271363]:    "1": [
Oct  8 06:12:06 np0005475493 inspiring_euler[271363]:        {
Oct  8 06:12:06 np0005475493 inspiring_euler[271363]:            "devices": [
Oct  8 06:12:06 np0005475493 inspiring_euler[271363]:                "/dev/loop3"
Oct  8 06:12:06 np0005475493 inspiring_euler[271363]:            ],
Oct  8 06:12:06 np0005475493 inspiring_euler[271363]:            "lv_name": "ceph_lv0",
Oct  8 06:12:06 np0005475493 inspiring_euler[271363]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  8 06:12:06 np0005475493 inspiring_euler[271363]:            "lv_size": "21470642176",
Oct  8 06:12:06 np0005475493 inspiring_euler[271363]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=787292cc-8154-50c4-9e00-e9be3e817149,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=85fe3e7b-5e0f-4a19-934c-310215b2e933,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct  8 06:12:06 np0005475493 inspiring_euler[271363]:            "lv_uuid": "znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj",
Oct  8 06:12:06 np0005475493 inspiring_euler[271363]:            "name": "ceph_lv0",
Oct  8 06:12:06 np0005475493 inspiring_euler[271363]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  8 06:12:06 np0005475493 inspiring_euler[271363]:            "tags": {
Oct  8 06:12:06 np0005475493 inspiring_euler[271363]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  8 06:12:06 np0005475493 inspiring_euler[271363]:                "ceph.block_uuid": "znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj",
Oct  8 06:12:06 np0005475493 inspiring_euler[271363]:                "ceph.cephx_lockbox_secret": "",
Oct  8 06:12:06 np0005475493 inspiring_euler[271363]:                "ceph.cluster_fsid": "787292cc-8154-50c4-9e00-e9be3e817149",
Oct  8 06:12:06 np0005475493 inspiring_euler[271363]:                "ceph.cluster_name": "ceph",
Oct  8 06:12:06 np0005475493 inspiring_euler[271363]:                "ceph.crush_device_class": "",
Oct  8 06:12:06 np0005475493 inspiring_euler[271363]:                "ceph.encrypted": "0",
Oct  8 06:12:06 np0005475493 inspiring_euler[271363]:                "ceph.osd_fsid": "85fe3e7b-5e0f-4a19-934c-310215b2e933",
Oct  8 06:12:06 np0005475493 inspiring_euler[271363]:                "ceph.osd_id": "1",
Oct  8 06:12:06 np0005475493 inspiring_euler[271363]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  8 06:12:06 np0005475493 inspiring_euler[271363]:                "ceph.type": "block",
Oct  8 06:12:06 np0005475493 inspiring_euler[271363]:                "ceph.vdo": "0",
Oct  8 06:12:06 np0005475493 inspiring_euler[271363]:                "ceph.with_tpm": "0"
Oct  8 06:12:06 np0005475493 inspiring_euler[271363]:            },
Oct  8 06:12:06 np0005475493 inspiring_euler[271363]:            "type": "block",
Oct  8 06:12:06 np0005475493 inspiring_euler[271363]:            "vg_name": "ceph_vg0"
Oct  8 06:12:06 np0005475493 inspiring_euler[271363]:        }
Oct  8 06:12:06 np0005475493 inspiring_euler[271363]:    ]
Oct  8 06:12:06 np0005475493 inspiring_euler[271363]: }
Oct  8 06:12:06 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:12:06 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:12:06 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:12:06.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:12:06 np0005475493 systemd[1]: libpod-3a5b5de9b1ba9fb735607356f1890ac7c5f1cde190c2359c261e4317d4e6c64b.scope: Deactivated successfully.
Oct  8 06:12:06 np0005475493 podman[271346]: 2025-10-08 10:12:06.56524065 +0000 UTC m=+0.406961700 container died 3a5b5de9b1ba9fb735607356f1890ac7c5f1cde190c2359c261e4317d4e6c64b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_euler, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct  8 06:12:06 np0005475493 systemd[1]: var-lib-containers-storage-overlay-e23dc5210568349a80673d8deb732ffb1a18cd09a1839a678d66c0c8877390f1-merged.mount: Deactivated successfully.
Oct  8 06:12:06 np0005475493 podman[271346]: 2025-10-08 10:12:06.614282661 +0000 UTC m=+0.456003691 container remove 3a5b5de9b1ba9fb735607356f1890ac7c5f1cde190c2359c261e4317d4e6c64b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_euler, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  8 06:12:06 np0005475493 systemd[1]: libpod-conmon-3a5b5de9b1ba9fb735607356f1890ac7c5f1cde190c2359c261e4317d4e6c64b.scope: Deactivated successfully.
Oct  8 06:12:07 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:12:07 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c004970 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:12:07 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:12:07.136Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct  8 06:12:07 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:12:07.136Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:12:07 np0005475493 podman[271502]: 2025-10-08 10:12:07.158587072 +0000 UTC m=+0.053004362 container create 63f80120a7261a4db34e6da9516900e6f297a54419df0c98c2f5d2c2359000a0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_brahmagupta, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Oct  8 06:12:07 np0005475493 systemd[1]: Started libpod-conmon-63f80120a7261a4db34e6da9516900e6f297a54419df0c98c2f5d2c2359000a0.scope.
Oct  8 06:12:07 np0005475493 podman[271502]: 2025-10-08 10:12:07.12960913 +0000 UTC m=+0.024026450 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 06:12:07 np0005475493 systemd[1]: Started libcrun container.
Oct  8 06:12:07 np0005475493 podman[271502]: 2025-10-08 10:12:07.242740107 +0000 UTC m=+0.137157457 container init 63f80120a7261a4db34e6da9516900e6f297a54419df0c98c2f5d2c2359000a0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_brahmagupta, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  8 06:12:07 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:12:07 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:12:07 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:12:07.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:12:07 np0005475493 podman[271502]: 2025-10-08 10:12:07.252172476 +0000 UTC m=+0.146589756 container start 63f80120a7261a4db34e6da9516900e6f297a54419df0c98c2f5d2c2359000a0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_brahmagupta, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  8 06:12:07 np0005475493 podman[271502]: 2025-10-08 10:12:07.255743323 +0000 UTC m=+0.150160643 container attach 63f80120a7261a4db34e6da9516900e6f297a54419df0c98c2f5d2c2359000a0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_brahmagupta, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True)
Oct  8 06:12:07 np0005475493 suspicious_brahmagupta[271520]: 167 167
Oct  8 06:12:07 np0005475493 systemd[1]: libpod-63f80120a7261a4db34e6da9516900e6f297a54419df0c98c2f5d2c2359000a0.scope: Deactivated successfully.
Oct  8 06:12:07 np0005475493 podman[271502]: 2025-10-08 10:12:07.259879009 +0000 UTC m=+0.154296309 container died 63f80120a7261a4db34e6da9516900e6f297a54419df0c98c2f5d2c2359000a0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_brahmagupta, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Oct  8 06:12:07 np0005475493 systemd[1]: var-lib-containers-storage-overlay-fd24e48410452b4d1734a5625e4b3124d4804702bac6ae2a4e2b53f468ad9e35-merged.mount: Deactivated successfully.
Oct  8 06:12:07 np0005475493 podman[271502]: 2025-10-08 10:12:07.306940865 +0000 UTC m=+0.201358155 container remove 63f80120a7261a4db34e6da9516900e6f297a54419df0c98c2f5d2c2359000a0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_brahmagupta, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct  8 06:12:07 np0005475493 systemd[1]: libpod-conmon-63f80120a7261a4db34e6da9516900e6f297a54419df0c98c2f5d2c2359000a0.scope: Deactivated successfully.
Oct  8 06:12:07 np0005475493 podman[271544]: 2025-10-08 10:12:07.54664693 +0000 UTC m=+0.064812421 container create e54953d1b1c0dd5242d8241d3b6f0407242f4243b9c9d718daa96bd195e2031a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_payne, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Oct  8 06:12:07 np0005475493 systemd[1]: Started libpod-conmon-e54953d1b1c0dd5242d8241d3b6f0407242f4243b9c9d718daa96bd195e2031a.scope.
Oct  8 06:12:07 np0005475493 podman[271544]: 2025-10-08 10:12:07.523069995 +0000 UTC m=+0.041235536 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 06:12:07 np0005475493 systemd[1]: Started libcrun container.
Oct  8 06:12:07 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c05b02c0914b29da858a791433e9fd4bf4e81973fa0f63dc51da2d4517843dfd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  8 06:12:07 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c05b02c0914b29da858a791433e9fd4bf4e81973fa0f63dc51da2d4517843dfd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 06:12:07 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c05b02c0914b29da858a791433e9fd4bf4e81973fa0f63dc51da2d4517843dfd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 06:12:07 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c05b02c0914b29da858a791433e9fd4bf4e81973fa0f63dc51da2d4517843dfd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  8 06:12:07 np0005475493 podman[271544]: 2025-10-08 10:12:07.638799707 +0000 UTC m=+0.156965218 container init e54953d1b1c0dd5242d8241d3b6f0407242f4243b9c9d718daa96bd195e2031a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_payne, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  8 06:12:07 np0005475493 podman[271544]: 2025-10-08 10:12:07.646318783 +0000 UTC m=+0.164484274 container start e54953d1b1c0dd5242d8241d3b6f0407242f4243b9c9d718daa96bd195e2031a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_payne, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Oct  8 06:12:07 np0005475493 podman[271544]: 2025-10-08 10:12:07.649473937 +0000 UTC m=+0.167639428 container attach e54953d1b1c0dd5242d8241d3b6f0407242f4243b9c9d718daa96bd195e2031a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_payne, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  8 06:12:07 np0005475493 nova_compute[262220]: 2025-10-08 10:12:07.803 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:12:08 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v826: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 3.0 KiB/s wr, 0 op/s
Oct  8 06:12:08 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:12:08 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac940032c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:12:08 np0005475493 lvm[271649]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct  8 06:12:08 np0005475493 lvm[271649]: VG ceph_vg0 finished
Oct  8 06:12:08 np0005475493 objective_payne[271561]: {}
Oct  8 06:12:08 np0005475493 systemd[1]: libpod-e54953d1b1c0dd5242d8241d3b6f0407242f4243b9c9d718daa96bd195e2031a.scope: Deactivated successfully.
Oct  8 06:12:08 np0005475493 systemd[1]: libpod-e54953d1b1c0dd5242d8241d3b6f0407242f4243b9c9d718daa96bd195e2031a.scope: Consumed 1.280s CPU time.
Oct  8 06:12:08 np0005475493 podman[271635]: 2025-10-08 10:12:08.425251922 +0000 UTC m=+0.100364178 container health_status 750e81e9592502f2fa35d047c8ae9bd75eb38ed415148db558454f5281397e7f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  8 06:12:08 np0005475493 podman[271666]: 2025-10-08 10:12:08.46446477 +0000 UTC m=+0.024180665 container died e54953d1b1c0dd5242d8241d3b6f0407242f4243b9c9d718daa96bd195e2031a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_payne, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2)
Oct  8 06:12:08 np0005475493 systemd[1]: var-lib-containers-storage-overlay-c05b02c0914b29da858a791433e9fd4bf4e81973fa0f63dc51da2d4517843dfd-merged.mount: Deactivated successfully.
Oct  8 06:12:08 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:12:08 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac940032c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:12:08 np0005475493 podman[271666]: 2025-10-08 10:12:08.518098622 +0000 UTC m=+0.077814487 container remove e54953d1b1c0dd5242d8241d3b6f0407242f4243b9c9d718daa96bd195e2031a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_payne, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Oct  8 06:12:08 np0005475493 systemd[1]: libpod-conmon-e54953d1b1c0dd5242d8241d3b6f0407242f4243b9c9d718daa96bd195e2031a.scope: Deactivated successfully.
Oct  8 06:12:08 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:12:08 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:12:08 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:12:08.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:12:08 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  8 06:12:08 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:12:08 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  8 06:12:08 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:12:09 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:12:09 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac6c003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:12:09 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:12:09 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:12:09 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:12:09 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:12:09.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:12:09 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:12:09 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:12:09 np0005475493 nova_compute[262220]: 2025-10-08 10:12:09.786 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:12:10 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v827: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 7.0 KiB/s wr, 1 op/s
Oct  8 06:12:10 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:12:10 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c004970 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:12:10 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:12:10 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90004830 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:12:10 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:12:10 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:12:10 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:12:10.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:12:11 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:12:11 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac940032c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:12:11 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:12:11 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:12:11 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:12:11.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:12:12 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v828: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 7.8 KiB/s rd, 5.0 KiB/s wr, 1 op/s
Oct  8 06:12:12 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:12:12 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac6c003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:12:12 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:12:12 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c004970 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:12:12 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:12:12 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:12:12 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:12:12.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:12:12 np0005475493 nova_compute[262220]: 2025-10-08 10:12:12.804 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:12:13 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:12:13 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90004850 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:12:13 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:12:13 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:12:13 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:12:13.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:12:14 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:12:14 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v829: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 6.2 KiB/s wr, 29 op/s
Oct  8 06:12:14 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:12:14 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac940032c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:12:14 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:12:14 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c004970 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:12:14 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:12:14 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:12:14 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:12:14.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:12:14 np0005475493 nova_compute[262220]: 2025-10-08 10:12:14.791 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:12:15 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:12:15 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac70001d50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:12:15 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:12:15 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:12:15 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:12:15.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:12:15 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:12:15] "GET /metrics HTTP/1.1" 200 48460 "" "Prometheus/2.51.0"
Oct  8 06:12:15 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:12:15] "GET /metrics HTTP/1.1" 200 48460 "" "Prometheus/2.51.0"
Oct  8 06:12:15 np0005475493 podman[271717]: 2025-10-08 10:12:15.900106881 +0000 UTC m=+0.055205734 container health_status 96c17e36c3e57b043a764fc4d64e20610b8683e4881cc1857643eca7aeb37784 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent)
Oct  8 06:12:15 np0005475493 podman[271716]: 2025-10-08 10:12:15.90889313 +0000 UTC m=+0.066910539 container health_status 1ece418ccdf19a8019e8be8e665c938dc3c333817a211973536e2416f095c311 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct  8 06:12:16 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v830: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 5.2 KiB/s wr, 28 op/s
Oct  8 06:12:16 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:12:16 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca0008f00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:12:16 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:12:16 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90004920 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:12:16 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:12:16 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:12:16 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:12:16.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:12:17 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:12:17 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c004970 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:12:17 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:12:17.136Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:12:17 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:12:17 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:12:17 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:12:17.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:12:17 np0005475493 nova_compute[262220]: 2025-10-08 10:12:17.806 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:12:17 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  8 06:12:17 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  8 06:12:17 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 06:12:17 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 06:12:18 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v831: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 5.2 KiB/s wr, 28 op/s
Oct  8 06:12:18 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:12:18 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac70001d50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:12:18 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 06:12:18 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 06:12:18 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 06:12:18 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 06:12:18 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:12:18 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca0008f00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:12:18 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:12:18 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:12:18 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:12:18.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:12:19 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:12:19 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90004940 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:12:19 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:12:19 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:12:19 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:12:19 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:12:19.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:12:19 np0005475493 nova_compute[262220]: 2025-10-08 10:12:19.792 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:12:20 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v832: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 5.2 KiB/s wr, 29 op/s
Oct  8 06:12:20 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:12:20 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c004970 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:12:20 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:12:20 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac70001d50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:12:20 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:12:20 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:12:20 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:12:20.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:12:21 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:12:21 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac70001d50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:12:21 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:12:21 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:12:21 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:12:21.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:12:22 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v833: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct  8 06:12:22 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:12:22 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90004960 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:12:22 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:12:22 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c004970 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:12:22 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:12:22 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:12:22 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:12:22.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:12:22 np0005475493 nova_compute[262220]: 2025-10-08 10:12:22.810 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:12:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:12:23 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac70001d50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:12:23 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:12:23 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:12:23 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:12:23.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:12:23 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:12:23.301 163175 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=6, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '2a:d6:ca', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'b2:8b:1e:40:84:a3'}, ipsec=False) old=SB_Global(nb_cfg=5) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  8 06:12:23 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:12:23.302 163175 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  8 06:12:23 np0005475493 nova_compute[262220]: 2025-10-08 10:12:23.302 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:12:24 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:12:24 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v834: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct  8 06:12:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:12:24 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac70001d50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:12:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:12:24 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90004980 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:12:24 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:12:24 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:12:24 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:12:24.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:12:24 np0005475493 nova_compute[262220]: 2025-10-08 10:12:24.830 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:12:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:12:25 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c004970 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:12:25 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:12:25 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000034s ======
Oct  8 06:12:25 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:12:25.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct  8 06:12:25 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:12:25.304 163175 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=26869918-b723-425c-a2e1-0d697f3d0fec, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '6'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  8 06:12:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:12:25] "GET /metrics HTTP/1.1" 200 48445 "" "Prometheus/2.51.0"
Oct  8 06:12:25 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:12:25] "GET /metrics HTTP/1.1" 200 48445 "" "Prometheus/2.51.0"
Oct  8 06:12:26 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v835: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct  8 06:12:26 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:12:26 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac70004640 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:12:26 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:12:26 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca0008f00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:12:26 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:12:26 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:12:26 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:12:26.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:12:26 np0005475493 podman[271765]: 2025-10-08 10:12:26.902096719 +0000 UTC m=+0.063048381 container health_status 2ac58bf9d2c7421f24478941b0f23af12cc30298ac84467db16bf52ecb1157c3 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct  8 06:12:27 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:12:27 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac900049a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:12:27 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:12:27.137Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:12:27 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:12:27 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:12:27 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:12:27.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:12:27 np0005475493 nova_compute[262220]: 2025-10-08 10:12:27.812 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:12:28 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v836: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct  8 06:12:28 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:12:28 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c004970 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:12:28 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:12:28 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7000c220 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:12:28 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:12:28 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:12:28 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:12:28.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:12:29 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:12:29 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca0008f00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:12:29 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:12:29 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:12:29 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:12:29 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:12:29.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:12:29 np0005475493 nova_compute[262220]: 2025-10-08 10:12:29.832 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:12:30 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v837: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Oct  8 06:12:30 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:12:30 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac900049c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:12:30 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:12:30 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c004970 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:12:30 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:12:30 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:12:30 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:12:30.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:12:31 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:12:31 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7000c220 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:12:31 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:12:31 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:12:31 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:12:31.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:12:32 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v838: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct  8 06:12:32 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:12:32 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca000a3f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:12:32 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:12:32 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac900049e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:12:32 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:12:32 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:12:32 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:12:32.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:12:32 np0005475493 nova_compute[262220]: 2025-10-08 10:12:32.815 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:12:32 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  8 06:12:32 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  8 06:12:33 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:12:33 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c004970 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:12:33 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:12:33 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:12:33 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:12:33.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:12:34 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:12:34 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v839: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct  8 06:12:34 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:12:34 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7000c220 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:12:34 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:12:34 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca000a3f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:12:34 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:12:34 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:12:34 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:12:34.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:12:34 np0005475493 nova_compute[262220]: 2025-10-08 10:12:34.833 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:12:35 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:12:35 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90004a00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:12:35 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:12:35 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:12:35 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:12:35.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:12:35 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:12:35] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Oct  8 06:12:35 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:12:35] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Oct  8 06:12:36 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v840: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct  8 06:12:36 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:12:36 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c004970 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:12:36 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:12:36 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7000c220 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:12:36 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:12:36 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:12:36 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:12:36.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:12:37 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:12:37 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca000a3f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:12:37 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:12:37.138Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:12:37 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:12:37 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:12:37 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:12:37.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:12:37 np0005475493 nova_compute[262220]: 2025-10-08 10:12:37.818 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:12:38 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v841: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct  8 06:12:38 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:12:38 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90004a20 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:12:38 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:12:38 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c004970 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:12:38 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:12:38 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:12:38 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:12:38.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:12:38 np0005475493 podman[271822]: 2025-10-08 10:12:38.967383015 +0000 UTC m=+0.120917853 container health_status 750e81e9592502f2fa35d047c8ae9bd75eb38ed415148db558454f5281397e7f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, container_name=ovn_controller, managed_by=edpm_ansible)
Oct  8 06:12:39 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:12:39 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7000c220 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:12:39 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:12:39 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:12:39 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:12:39 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:12:39.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:12:39 np0005475493 nova_compute[262220]: 2025-10-08 10:12:39.835 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:12:40 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v842: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Oct  8 06:12:40 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:12:40 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca000a3f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:12:40 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:12:40 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90004a40 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:12:40 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:12:40 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:12:40 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:12:40.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:12:40 np0005475493 nova_compute[262220]: 2025-10-08 10:12:40.887 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:12:40 np0005475493 nova_compute[262220]: 2025-10-08 10:12:40.887 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:12:41 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:12:41 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c004970 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:12:41 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:12:41 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:12:41 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:12:41.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:12:41 np0005475493 nova_compute[262220]: 2025-10-08 10:12:41.886 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:12:42 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v843: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct  8 06:12:42 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:12:42 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7000c220 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:12:42 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:12:42 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca000a3f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:12:42 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:12:42 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:12:42 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:12:42.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:12:42 np0005475493 nova_compute[262220]: 2025-10-08 10:12:42.821 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:12:42 np0005475493 nova_compute[262220]: 2025-10-08 10:12:42.882 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:12:42 np0005475493 nova_compute[262220]: 2025-10-08 10:12:42.897 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:12:42 np0005475493 nova_compute[262220]: 2025-10-08 10:12:42.897 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  8 06:12:42 np0005475493 nova_compute[262220]: 2025-10-08 10:12:42.897 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:12:42 np0005475493 nova_compute[262220]: 2025-10-08 10:12:42.924 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  8 06:12:42 np0005475493 nova_compute[262220]: 2025-10-08 10:12:42.925 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  8 06:12:42 np0005475493 nova_compute[262220]: 2025-10-08 10:12:42.925 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  8 06:12:42 np0005475493 nova_compute[262220]: 2025-10-08 10:12:42.926 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  8 06:12:42 np0005475493 nova_compute[262220]: 2025-10-08 10:12:42.927 2 DEBUG oslo_concurrency.processutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  8 06:12:43 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:12:43 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90004a60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:12:43 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:12:43 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000034s ======
Oct  8 06:12:43 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:12:43.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct  8 06:12:43 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct  8 06:12:43 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2505331608' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  8 06:12:43 np0005475493 nova_compute[262220]: 2025-10-08 10:12:43.409 2 DEBUG oslo_concurrency.processutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  8 06:12:43 np0005475493 nova_compute[262220]: 2025-10-08 10:12:43.593 2 WARNING nova.virt.libvirt.driver [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  8 06:12:43 np0005475493 nova_compute[262220]: 2025-10-08 10:12:43.595 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4600MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  8 06:12:43 np0005475493 nova_compute[262220]: 2025-10-08 10:12:43.595 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  8 06:12:43 np0005475493 nova_compute[262220]: 2025-10-08 10:12:43.595 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  8 06:12:43 np0005475493 nova_compute[262220]: 2025-10-08 10:12:43.649 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  8 06:12:43 np0005475493 nova_compute[262220]: 2025-10-08 10:12:43.650 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  8 06:12:43 np0005475493 nova_compute[262220]: 2025-10-08 10:12:43.672 2 DEBUG oslo_concurrency.processutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  8 06:12:44 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:12:44 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct  8 06:12:44 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3856953484' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  8 06:12:44 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v844: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct  8 06:12:44 np0005475493 nova_compute[262220]: 2025-10-08 10:12:44.164 2 DEBUG oslo_concurrency.processutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  8 06:12:44 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:12:44 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c004970 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:12:44 np0005475493 nova_compute[262220]: 2025-10-08 10:12:44.169 2 DEBUG nova.compute.provider_tree [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Inventory has not changed in ProviderTree for provider: 62e4b021-d3ae-43f9-883d-805e2c7d21a2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  8 06:12:44 np0005475493 nova_compute[262220]: 2025-10-08 10:12:44.185 2 DEBUG nova.scheduler.client.report [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Inventory has not changed for provider 62e4b021-d3ae-43f9-883d-805e2c7d21a2 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  8 06:12:44 np0005475493 nova_compute[262220]: 2025-10-08 10:12:44.187 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  8 06:12:44 np0005475493 nova_compute[262220]: 2025-10-08 10:12:44.188 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.593s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  8 06:12:44 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:12:44 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7000c220 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:12:44 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:12:44 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:12:44 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:12:44.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:12:44 np0005475493 nova_compute[262220]: 2025-10-08 10:12:44.836 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:12:45 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:12:45 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca000a3f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:12:45 np0005475493 nova_compute[262220]: 2025-10-08 10:12:45.178 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:12:45 np0005475493 nova_compute[262220]: 2025-10-08 10:12:45.178 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  8 06:12:45 np0005475493 nova_compute[262220]: 2025-10-08 10:12:45.178 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  8 06:12:45 np0005475493 nova_compute[262220]: 2025-10-08 10:12:45.195 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  8 06:12:45 np0005475493 nova_compute[262220]: 2025-10-08 10:12:45.196 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:12:45 np0005475493 nova_compute[262220]: 2025-10-08 10:12:45.196 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:12:45 np0005475493 nova_compute[262220]: 2025-10-08 10:12:45.196 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:12:45 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:12:45 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:12:45 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:12:45.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:12:45 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:12:45] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Oct  8 06:12:45 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:12:45] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Oct  8 06:12:46 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v845: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct  8 06:12:46 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:12:46 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90004a60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:12:46 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:12:46 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c004970 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:12:46 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:12:46 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:12:46 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:12:46.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:12:46 np0005475493 podman[271904]: 2025-10-08 10:12:46.681283738 +0000 UTC m=+0.060990384 container health_status 1ece418ccdf19a8019e8be8e665c938dc3c333817a211973536e2416f095c311 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Oct  8 06:12:46 np0005475493 podman[271905]: 2025-10-08 10:12:46.700995426 +0000 UTC m=+0.065812042 container health_status 96c17e36c3e57b043a764fc4d64e20610b8683e4881cc1857643eca7aeb37784 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  8 06:12:47 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:12:47 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c004970 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:12:47 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:12:47.139Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:12:47 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:12:47 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:12:47 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:12:47.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:12:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] Optimize plan auto_2025-10-08_10:12:47
Oct  8 06:12:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  8 06:12:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] do_upmap
Oct  8 06:12:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] pools ['vms', 'default.rgw.control', '.nfs', 'backups', 'cephfs.cephfs.data', '.mgr', 'volumes', '.rgw.root', 'images', 'cephfs.cephfs.meta', 'default.rgw.meta', 'default.rgw.log']
Oct  8 06:12:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] prepared 0/10 upmap changes
Oct  8 06:12:47 np0005475493 nova_compute[262220]: 2025-10-08 10:12:47.824 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:12:47 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  8 06:12:47 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  8 06:12:47 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 06:12:47 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 06:12:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] _maybe_adjust
Oct  8 06:12:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:12:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  8 06:12:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:12:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0003459970412515465 of space, bias 1.0, pg target 0.10379911237546395 quantized to 32 (current 32)
Oct  8 06:12:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:12:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 06:12:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:12:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 06:12:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:12:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Oct  8 06:12:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:12:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  8 06:12:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:12:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 06:12:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:12:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct  8 06:12:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:12:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  8 06:12:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:12:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 06:12:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:12:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  8 06:12:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:12:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  8 06:12:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  8 06:12:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  8 06:12:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  8 06:12:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  8 06:12:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  8 06:12:48 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v846: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct  8 06:12:48 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 06:12:48 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 06:12:48 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 06:12:48 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 06:12:48 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:12:48 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca000a3f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:12:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  8 06:12:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  8 06:12:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  8 06:12:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  8 06:12:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  8 06:12:48 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:12:48 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac940032c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:12:48 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:12:48 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:12:48 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:12:48.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:12:49 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:12:49 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac6c001090 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:12:49 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:12:49 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:12:49 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:12:49 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:12:49.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:12:49 np0005475493 nova_compute[262220]: 2025-10-08 10:12:49.839 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:12:50 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v847: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Oct  8 06:12:50 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:12:50 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c004970 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:12:50 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:12:50 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca000a3f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:12:50 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:12:50 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:12:50 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:12:50.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:12:51 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:12:51 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac940032c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:12:51 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:12:51 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:12:51 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:12:51.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:12:52 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v848: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Oct  8 06:12:52 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:12:52 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac6c001090 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:12:52 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:12:52 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c004970 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:12:52 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:12:52 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:12:52 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:12:52.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:12:52 np0005475493 nova_compute[262220]: 2025-10-08 10:12:52.825 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:12:53 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:12:53 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca000a3f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:12:53 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:12:53 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:12:53 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:12:53.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:12:54 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:12:54 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v849: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct  8 06:12:54 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:12:54 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac940032c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:12:54 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:12:54 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac6c001090 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:12:54 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:12:54 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:12:54 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:12:54.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:12:55 np0005475493 nova_compute[262220]: 2025-10-08 10:12:54.999 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:12:55 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:12:55 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c004970 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:12:55 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:12:55 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:12:55 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:12:55.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:12:55 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:12:55] "GET /metrics HTTP/1.1" 200 48480 "" "Prometheus/2.51.0"
Oct  8 06:12:55 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:12:55] "GET /metrics HTTP/1.1" 200 48480 "" "Prometheus/2.51.0"
Oct  8 06:12:56 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v850: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct  8 06:12:56 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:12:56 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca000a3f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:12:56 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:12:56 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac940032c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:12:56 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:12:56 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:12:56 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:12:56.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:12:57 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:12:57 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac6c0029f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:12:57 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:12:57.139Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:12:57 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:12:57 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:12:57 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:12:57.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:12:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:12:57.410 163175 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  8 06:12:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:12:57.411 163175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  8 06:12:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:12:57.411 163175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  8 06:12:57 np0005475493 nova_compute[262220]: 2025-10-08 10:12:57.827 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:12:57 np0005475493 podman[271981]: 2025-10-08 10:12:57.932124792 +0000 UTC m=+0.079684518 container health_status 2ac58bf9d2c7421f24478941b0f23af12cc30298ac84467db16bf52ecb1157c3 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct  8 06:12:58 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v851: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct  8 06:12:58 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:12:58 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c004970 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:12:58 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:12:58 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c004970 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:12:58 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:12:58 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:12:58 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:12:58.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:12:59 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:12:59 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac940032c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:12:59 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:12:59 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:12:59 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:12:59 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:12:59.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:13:00 np0005475493 nova_compute[262220]: 2025-10-08 10:13:00.001 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:13:00 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v852: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Oct  8 06:13:00 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:13:00 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac6c0029f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:13:00 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:13:00 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca000a3f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:13:00 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:13:00 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:13:00 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:13:00.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:13:01 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:13:01 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c004970 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:13:01 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:13:01 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:13:01 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:13:01.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:13:02 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v853: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 68 op/s
Oct  8 06:13:02 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:13:02 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac940032c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:13:02 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:13:02 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac6c0029f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:13:02 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:13:02 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:13:02 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:13:02.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:13:02 np0005475493 nova_compute[262220]: 2025-10-08 10:13:02.829 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:13:02 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  8 06:13:02 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  8 06:13:03 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:13:03 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca000a3f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:13:03 np0005475493 ceph-mon[73572]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  8 06:13:03 np0005475493 ceph-mon[73572]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1800.0 total, 600.0 interval#012Cumulative writes: 5782 writes, 25K keys, 5782 commit groups, 1.0 writes per commit group, ingest: 0.05 GB, 0.03 MB/s#012Cumulative WAL: 5782 writes, 5782 syncs, 1.00 writes per sync, written: 0.05 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1544 writes, 6558 keys, 1544 commit groups, 1.0 writes per commit group, ingest: 11.14 MB, 0.02 MB/s#012Interval WAL: 1544 writes, 1544 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     95.0      0.42              0.10        14    0.030       0      0       0.0       0.0#012  L6      1/0   11.81 MB   0.0      0.2     0.0      0.1       0.2      0.0       0.0   4.1    136.2    116.4      1.41              0.36        13    0.109     67K   6910       0.0       0.0#012 Sum      1/0   11.81 MB   0.0      0.2     0.0      0.1       0.2      0.1       0.0   5.1    105.1    111.5      1.83              0.46        27    0.068     67K   6910       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   8.3     94.5     93.3      0.77              0.15        10    0.077     29K   2558       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.2     0.0      0.1       0.2      0.0       0.0   0.0    136.2    116.4      1.41              0.36        13    0.109     67K   6910       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     95.7      0.42              0.10        13    0.032       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     16.5      0.00              0.00         1    0.003       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1800.0 total, 600.0 interval#012Flush(GB): cumulative 0.039, interval 0.008#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.20 GB write, 0.11 MB/s write, 0.19 GB read, 0.11 MB/s read, 1.8 seconds#012Interval compaction: 0.07 GB write, 0.12 MB/s write, 0.07 GB read, 0.12 MB/s read, 0.8 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55f7a1ce3350#2 capacity: 304.00 MB usage: 15.15 MB table_size: 0 occupancy: 18446744073709551615 collections: 4 last_copies: 0 last_secs: 0.00011 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(819,14.60 MB,4.80332%) FilterBlock(28,201.17 KB,0.064624%) IndexBlock(28,359.95 KB,0.115631%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Oct  8 06:13:03 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:13:03 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:13:03 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:13:03.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:13:04 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:13:04 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v854: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 132 op/s
Oct  8 06:13:04 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:13:04 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c004970 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:13:04 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:13:04 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac940032c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:13:04 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:13:04 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:13:04 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:13:04.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:13:05 np0005475493 nova_compute[262220]: 2025-10-08 10:13:05.002 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:13:05 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:13:05 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac6c004000 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:13:05 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:13:05 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:13:05 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:13:05.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:13:05 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:13:05] "GET /metrics HTTP/1.1" 200 48471 "" "Prometheus/2.51.0"
Oct  8 06:13:05 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:13:05] "GET /metrics HTTP/1.1" 200 48471 "" "Prometheus/2.51.0"
Oct  8 06:13:06 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v855: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct  8 06:13:06 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:13:06 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca000a3f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:13:06 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:13:06 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c004970 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:13:06 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:13:06 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:13:06 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:13:06.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:13:07 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:13:07 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac94004f90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:13:07 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:13:07.141Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:13:07 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:13:07 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:13:07 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:13:07.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:13:07 np0005475493 nova_compute[262220]: 2025-10-08 10:13:07.832 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:13:08 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v856: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct  8 06:13:08 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:13:08 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac6c004000 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:13:08 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:13:08 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca000a3f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:13:08 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:13:08 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.002000066s ======
Oct  8 06:13:08 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:13:08.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000066s
Oct  8 06:13:09 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:13:09 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c004970 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:13:09 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:13:09 np0005475493 podman[272087]: 2025-10-08 10:13:09.173853396 +0000 UTC m=+0.144319313 container health_status 750e81e9592502f2fa35d047c8ae9bd75eb38ed415148db558454f5281397e7f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  8 06:13:09 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:13:09 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:13:09 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:13:09.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:13:09 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  8 06:13:09 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  8 06:13:09 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct  8 06:13:09 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  8 06:13:09 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct  8 06:13:09 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:13:09 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct  8 06:13:09 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:13:09 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct  8 06:13:09 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  8 06:13:09 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct  8 06:13:09 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  8 06:13:09 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  8 06:13:09 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  8 06:13:10 np0005475493 nova_compute[262220]: 2025-10-08 10:13:10.004 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:13:10 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v857: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 331 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Oct  8 06:13:10 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:13:10 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca000a3f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:13:10 np0005475493 podman[272237]: 2025-10-08 10:13:10.273911673 +0000 UTC m=+0.055025418 container create a1800684c5147a3e752852237e291da1047419a46061907609f8693507130600 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_euler, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Oct  8 06:13:10 np0005475493 systemd[1]: Started libpod-conmon-a1800684c5147a3e752852237e291da1047419a46061907609f8693507130600.scope.
Oct  8 06:13:10 np0005475493 podman[272237]: 2025-10-08 10:13:10.249803711 +0000 UTC m=+0.030917476 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 06:13:10 np0005475493 systemd[1]: Started libcrun container.
Oct  8 06:13:10 np0005475493 podman[272237]: 2025-10-08 10:13:10.374862489 +0000 UTC m=+0.155976234 container init a1800684c5147a3e752852237e291da1047419a46061907609f8693507130600 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_euler, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  8 06:13:10 np0005475493 podman[272237]: 2025-10-08 10:13:10.385991695 +0000 UTC m=+0.167105420 container start a1800684c5147a3e752852237e291da1047419a46061907609f8693507130600 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_euler, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct  8 06:13:10 np0005475493 podman[272237]: 2025-10-08 10:13:10.389116328 +0000 UTC m=+0.170230053 container attach a1800684c5147a3e752852237e291da1047419a46061907609f8693507130600 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_euler, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Oct  8 06:13:10 np0005475493 dreamy_euler[272254]: 167 167
Oct  8 06:13:10 np0005475493 systemd[1]: libpod-a1800684c5147a3e752852237e291da1047419a46061907609f8693507130600.scope: Deactivated successfully.
Oct  8 06:13:10 np0005475493 podman[272237]: 2025-10-08 10:13:10.394349999 +0000 UTC m=+0.175463724 container died a1800684c5147a3e752852237e291da1047419a46061907609f8693507130600 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_euler, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  8 06:13:10 np0005475493 systemd[1]: var-lib-containers-storage-overlay-0388ff08ff862d5c57b6b83b0c9817809cfaf42f6172fe87b51a71c62f253057-merged.mount: Deactivated successfully.
Oct  8 06:13:10 np0005475493 podman[272237]: 2025-10-08 10:13:10.4375869 +0000 UTC m=+0.218700625 container remove a1800684c5147a3e752852237e291da1047419a46061907609f8693507130600 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_euler, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct  8 06:13:10 np0005475493 systemd[1]: libpod-conmon-a1800684c5147a3e752852237e291da1047419a46061907609f8693507130600.scope: Deactivated successfully.
Oct  8 06:13:10 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  8 06:13:10 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:13:10 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:13:10 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  8 06:13:10 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:13:10 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac6c004000 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:13:10 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:13:10 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:13:10 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:13:10.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:13:10 np0005475493 podman[272279]: 2025-10-08 10:13:10.630978842 +0000 UTC m=+0.050719256 container create 4f1c4d889b32557c79d5553efc4d3576ee3955caaf11532420ee8dce79405355 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_babbage, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  8 06:13:10 np0005475493 systemd[1]: Started libpod-conmon-4f1c4d889b32557c79d5553efc4d3576ee3955caaf11532420ee8dce79405355.scope.
Oct  8 06:13:10 np0005475493 podman[272279]: 2025-10-08 10:13:10.60867998 +0000 UTC m=+0.028420444 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 06:13:10 np0005475493 systemd[1]: Started libcrun container.
Oct  8 06:13:10 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81afca82c7c86116e3b0321a91b5d4c1631dff51503a02b125fc685599b30bf7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  8 06:13:10 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81afca82c7c86116e3b0321a91b5d4c1631dff51503a02b125fc685599b30bf7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 06:13:10 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81afca82c7c86116e3b0321a91b5d4c1631dff51503a02b125fc685599b30bf7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 06:13:10 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81afca82c7c86116e3b0321a91b5d4c1631dff51503a02b125fc685599b30bf7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  8 06:13:10 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81afca82c7c86116e3b0321a91b5d4c1631dff51503a02b125fc685599b30bf7/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  8 06:13:10 np0005475493 podman[272279]: 2025-10-08 10:13:10.736364885 +0000 UTC m=+0.156105319 container init 4f1c4d889b32557c79d5553efc4d3576ee3955caaf11532420ee8dce79405355 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_babbage, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  8 06:13:10 np0005475493 podman[272279]: 2025-10-08 10:13:10.744360007 +0000 UTC m=+0.164100421 container start 4f1c4d889b32557c79d5553efc4d3576ee3955caaf11532420ee8dce79405355 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_babbage, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct  8 06:13:10 np0005475493 podman[272279]: 2025-10-08 10:13:10.747910754 +0000 UTC m=+0.167651188 container attach 4f1c4d889b32557c79d5553efc4d3576ee3955caaf11532420ee8dce79405355 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_babbage, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  8 06:13:11 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:13:11 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac94004f90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:13:11 np0005475493 interesting_babbage[272295]: --> passed data devices: 0 physical, 1 LVM
Oct  8 06:13:11 np0005475493 interesting_babbage[272295]: --> All data devices are unavailable
Oct  8 06:13:11 np0005475493 systemd[1]: libpod-4f1c4d889b32557c79d5553efc4d3576ee3955caaf11532420ee8dce79405355.scope: Deactivated successfully.
Oct  8 06:13:11 np0005475493 podman[272279]: 2025-10-08 10:13:11.158190042 +0000 UTC m=+0.577930476 container died 4f1c4d889b32557c79d5553efc4d3576ee3955caaf11532420ee8dce79405355 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_babbage, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct  8 06:13:11 np0005475493 systemd[1]: var-lib-containers-storage-overlay-81afca82c7c86116e3b0321a91b5d4c1631dff51503a02b125fc685599b30bf7-merged.mount: Deactivated successfully.
Oct  8 06:13:11 np0005475493 podman[272279]: 2025-10-08 10:13:11.21506112 +0000 UTC m=+0.634801534 container remove 4f1c4d889b32557c79d5553efc4d3576ee3955caaf11532420ee8dce79405355 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_babbage, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  8 06:13:11 np0005475493 systemd[1]: libpod-conmon-4f1c4d889b32557c79d5553efc4d3576ee3955caaf11532420ee8dce79405355.scope: Deactivated successfully.
Oct  8 06:13:11 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:13:11 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:13:11 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:13:11.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:13:11 np0005475493 podman[272414]: 2025-10-08 10:13:11.869577661 +0000 UTC m=+0.048237806 container create 1f410068d85182ebfbee133c46f0e7592336b25b326fec08ce361fbc37e31b70 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_albattani, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  8 06:13:11 np0005475493 systemd[1]: Started libpod-conmon-1f410068d85182ebfbee133c46f0e7592336b25b326fec08ce361fbc37e31b70.scope.
Oct  8 06:13:11 np0005475493 podman[272414]: 2025-10-08 10:13:11.846367298 +0000 UTC m=+0.025027433 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 06:13:11 np0005475493 systemd[1]: Started libcrun container.
Oct  8 06:13:11 np0005475493 podman[272414]: 2025-10-08 10:13:11.966002859 +0000 UTC m=+0.144663004 container init 1f410068d85182ebfbee133c46f0e7592336b25b326fec08ce361fbc37e31b70 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_albattani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct  8 06:13:11 np0005475493 podman[272414]: 2025-10-08 10:13:11.97578377 +0000 UTC m=+0.154443925 container start 1f410068d85182ebfbee133c46f0e7592336b25b326fec08ce361fbc37e31b70 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_albattani, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Oct  8 06:13:11 np0005475493 nervous_albattani[272431]: 167 167
Oct  8 06:13:11 np0005475493 systemd[1]: libpod-1f410068d85182ebfbee133c46f0e7592336b25b326fec08ce361fbc37e31b70.scope: Deactivated successfully.
Oct  8 06:13:11 np0005475493 podman[272414]: 2025-10-08 10:13:11.985616814 +0000 UTC m=+0.164276979 container attach 1f410068d85182ebfbee133c46f0e7592336b25b326fec08ce361fbc37e31b70 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_albattani, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  8 06:13:11 np0005475493 podman[272414]: 2025-10-08 10:13:11.986942807 +0000 UTC m=+0.165602922 container died 1f410068d85182ebfbee133c46f0e7592336b25b326fec08ce361fbc37e31b70 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_albattani, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  8 06:13:12 np0005475493 systemd[1]: var-lib-containers-storage-overlay-f85fa1d4c027b5c982f624f8c7cb2c6b8c8cf749606fb31886978e54a5f29bf0-merged.mount: Deactivated successfully.
Oct  8 06:13:12 np0005475493 podman[272414]: 2025-10-08 10:13:12.025427291 +0000 UTC m=+0.204087406 container remove 1f410068d85182ebfbee133c46f0e7592336b25b326fec08ce361fbc37e31b70 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_albattani, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Oct  8 06:13:12 np0005475493 systemd[1]: libpod-conmon-1f410068d85182ebfbee133c46f0e7592336b25b326fec08ce361fbc37e31b70.scope: Deactivated successfully.
Oct  8 06:13:12 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v858: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 331 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct  8 06:13:12 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:13:12 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c004970 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:13:12 np0005475493 podman[272458]: 2025-10-08 10:13:12.230892331 +0000 UTC m=+0.048804645 container create af1324ae037e9d8fa66e3ebee64bf8df85bb90e6ba5950d960f66b07b463bb58 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_tu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct  8 06:13:12 np0005475493 systemd[1]: Started libpod-conmon-af1324ae037e9d8fa66e3ebee64bf8df85bb90e6ba5950d960f66b07b463bb58.scope.
Oct  8 06:13:12 np0005475493 podman[272458]: 2025-10-08 10:13:12.211916367 +0000 UTC m=+0.029828681 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 06:13:12 np0005475493 systemd[1]: Started libcrun container.
Oct  8 06:13:12 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b24e694bc9e2a7079fd5a7e49b2948d2e867d657904ea38086b321d6c85b22d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  8 06:13:12 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b24e694bc9e2a7079fd5a7e49b2948d2e867d657904ea38086b321d6c85b22d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 06:13:12 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b24e694bc9e2a7079fd5a7e49b2948d2e867d657904ea38086b321d6c85b22d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 06:13:12 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b24e694bc9e2a7079fd5a7e49b2948d2e867d657904ea38086b321d6c85b22d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  8 06:13:12 np0005475493 podman[272458]: 2025-10-08 10:13:12.340639676 +0000 UTC m=+0.158552020 container init af1324ae037e9d8fa66e3ebee64bf8df85bb90e6ba5950d960f66b07b463bb58 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_tu, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  8 06:13:12 np0005475493 podman[272458]: 2025-10-08 10:13:12.352135694 +0000 UTC m=+0.170047988 container start af1324ae037e9d8fa66e3ebee64bf8df85bb90e6ba5950d960f66b07b463bb58 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_tu, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  8 06:13:12 np0005475493 podman[272458]: 2025-10-08 10:13:12.355679889 +0000 UTC m=+0.173592233 container attach af1324ae037e9d8fa66e3ebee64bf8df85bb90e6ba5950d960f66b07b463bb58 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_tu, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0)
Oct  8 06:13:12 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:13:12 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c004970 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:13:12 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:13:12 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000034s ======
Oct  8 06:13:12 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:13:12.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct  8 06:13:12 np0005475493 kind_tu[272474]: {
Oct  8 06:13:12 np0005475493 kind_tu[272474]:    "1": [
Oct  8 06:13:12 np0005475493 kind_tu[272474]:        {
Oct  8 06:13:12 np0005475493 kind_tu[272474]:            "devices": [
Oct  8 06:13:12 np0005475493 kind_tu[272474]:                "/dev/loop3"
Oct  8 06:13:12 np0005475493 kind_tu[272474]:            ],
Oct  8 06:13:12 np0005475493 kind_tu[272474]:            "lv_name": "ceph_lv0",
Oct  8 06:13:12 np0005475493 kind_tu[272474]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  8 06:13:12 np0005475493 kind_tu[272474]:            "lv_size": "21470642176",
Oct  8 06:13:12 np0005475493 kind_tu[272474]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=787292cc-8154-50c4-9e00-e9be3e817149,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=85fe3e7b-5e0f-4a19-934c-310215b2e933,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct  8 06:13:12 np0005475493 kind_tu[272474]:            "lv_uuid": "znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj",
Oct  8 06:13:12 np0005475493 kind_tu[272474]:            "name": "ceph_lv0",
Oct  8 06:13:12 np0005475493 kind_tu[272474]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  8 06:13:12 np0005475493 kind_tu[272474]:            "tags": {
Oct  8 06:13:12 np0005475493 kind_tu[272474]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  8 06:13:12 np0005475493 kind_tu[272474]:                "ceph.block_uuid": "znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj",
Oct  8 06:13:12 np0005475493 kind_tu[272474]:                "ceph.cephx_lockbox_secret": "",
Oct  8 06:13:12 np0005475493 kind_tu[272474]:                "ceph.cluster_fsid": "787292cc-8154-50c4-9e00-e9be3e817149",
Oct  8 06:13:12 np0005475493 kind_tu[272474]:                "ceph.cluster_name": "ceph",
Oct  8 06:13:12 np0005475493 kind_tu[272474]:                "ceph.crush_device_class": "",
Oct  8 06:13:12 np0005475493 kind_tu[272474]:                "ceph.encrypted": "0",
Oct  8 06:13:12 np0005475493 kind_tu[272474]:                "ceph.osd_fsid": "85fe3e7b-5e0f-4a19-934c-310215b2e933",
Oct  8 06:13:12 np0005475493 kind_tu[272474]:                "ceph.osd_id": "1",
Oct  8 06:13:12 np0005475493 kind_tu[272474]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  8 06:13:12 np0005475493 kind_tu[272474]:                "ceph.type": "block",
Oct  8 06:13:12 np0005475493 kind_tu[272474]:                "ceph.vdo": "0",
Oct  8 06:13:12 np0005475493 kind_tu[272474]:                "ceph.with_tpm": "0"
Oct  8 06:13:12 np0005475493 kind_tu[272474]:            },
Oct  8 06:13:12 np0005475493 kind_tu[272474]:            "type": "block",
Oct  8 06:13:12 np0005475493 kind_tu[272474]:            "vg_name": "ceph_vg0"
Oct  8 06:13:12 np0005475493 kind_tu[272474]:        }
Oct  8 06:13:12 np0005475493 kind_tu[272474]:    ]
Oct  8 06:13:12 np0005475493 kind_tu[272474]: }
Oct  8 06:13:12 np0005475493 systemd[1]: libpod-af1324ae037e9d8fa66e3ebee64bf8df85bb90e6ba5950d960f66b07b463bb58.scope: Deactivated successfully.
Oct  8 06:13:12 np0005475493 podman[272458]: 2025-10-08 10:13:12.710446334 +0000 UTC m=+0.528358658 container died af1324ae037e9d8fa66e3ebee64bf8df85bb90e6ba5950d960f66b07b463bb58 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_tu, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Oct  8 06:13:12 np0005475493 systemd[1]: var-lib-containers-storage-overlay-5b24e694bc9e2a7079fd5a7e49b2948d2e867d657904ea38086b321d6c85b22d-merged.mount: Deactivated successfully.
Oct  8 06:13:12 np0005475493 podman[272458]: 2025-10-08 10:13:12.765375798 +0000 UTC m=+0.583288092 container remove af1324ae037e9d8fa66e3ebee64bf8df85bb90e6ba5950d960f66b07b463bb58 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_tu, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  8 06:13:12 np0005475493 systemd[1]: libpod-conmon-af1324ae037e9d8fa66e3ebee64bf8df85bb90e6ba5950d960f66b07b463bb58.scope: Deactivated successfully.
Oct  8 06:13:12 np0005475493 nova_compute[262220]: 2025-10-08 10:13:12.834 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:13:13 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:13:13 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac6c004000 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:13:13 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:13:13 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000034s ======
Oct  8 06:13:13 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:13:13.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct  8 06:13:13 np0005475493 podman[272590]: 2025-10-08 10:13:13.477847084 +0000 UTC m=+0.043149539 container create db6498b457c8f8e30456ed3c36e090bf8db1c85fd32e7b0a30a4b930eebe96f3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_taussig, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct  8 06:13:13 np0005475493 systemd[1]: Started libpod-conmon-db6498b457c8f8e30456ed3c36e090bf8db1c85fd32e7b0a30a4b930eebe96f3.scope.
Oct  8 06:13:13 np0005475493 podman[272590]: 2025-10-08 10:13:13.45946712 +0000 UTC m=+0.024769575 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 06:13:13 np0005475493 systemd[1]: Started libcrun container.
Oct  8 06:13:13 np0005475493 podman[272590]: 2025-10-08 10:13:13.578307484 +0000 UTC m=+0.143609939 container init db6498b457c8f8e30456ed3c36e090bf8db1c85fd32e7b0a30a4b930eebe96f3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_taussig, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Oct  8 06:13:13 np0005475493 podman[272590]: 2025-10-08 10:13:13.588777828 +0000 UTC m=+0.154080273 container start db6498b457c8f8e30456ed3c36e090bf8db1c85fd32e7b0a30a4b930eebe96f3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_taussig, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  8 06:13:13 np0005475493 podman[272590]: 2025-10-08 10:13:13.593869695 +0000 UTC m=+0.159172180 container attach db6498b457c8f8e30456ed3c36e090bf8db1c85fd32e7b0a30a4b930eebe96f3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_taussig, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Oct  8 06:13:13 np0005475493 confident_taussig[272607]: 167 167
Oct  8 06:13:13 np0005475493 systemd[1]: libpod-db6498b457c8f8e30456ed3c36e090bf8db1c85fd32e7b0a30a4b930eebe96f3.scope: Deactivated successfully.
Oct  8 06:13:13 np0005475493 podman[272590]: 2025-10-08 10:13:13.596568533 +0000 UTC m=+0.161871018 container died db6498b457c8f8e30456ed3c36e090bf8db1c85fd32e7b0a30a4b930eebe96f3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_taussig, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True)
Oct  8 06:13:13 np0005475493 systemd[1]: var-lib-containers-storage-overlay-8d96874b7e38e735b008dc94d49396f25a7709b540e0e835f2919c063c9663f0-merged.mount: Deactivated successfully.
Oct  8 06:13:13 np0005475493 podman[272590]: 2025-10-08 10:13:13.647957212 +0000 UTC m=+0.213259697 container remove db6498b457c8f8e30456ed3c36e090bf8db1c85fd32e7b0a30a4b930eebe96f3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_taussig, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  8 06:13:13 np0005475493 systemd[1]: libpod-conmon-db6498b457c8f8e30456ed3c36e090bf8db1c85fd32e7b0a30a4b930eebe96f3.scope: Deactivated successfully.
Oct  8 06:13:13 np0005475493 podman[272630]: 2025-10-08 10:13:13.862676955 +0000 UTC m=+0.042971773 container create 83ab6173cc5181d744a6617084fb0d1b80c465bb1ebbabf035ab425ce0686390 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_edison, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct  8 06:13:13 np0005475493 systemd[1]: Started libpod-conmon-83ab6173cc5181d744a6617084fb0d1b80c465bb1ebbabf035ab425ce0686390.scope.
Oct  8 06:13:13 np0005475493 systemd[1]: Started libcrun container.
Oct  8 06:13:13 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53c9ae953af4c84206ac83106fe5548b474b7b24627c4bbbb2200a64aae1456c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  8 06:13:13 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53c9ae953af4c84206ac83106fe5548b474b7b24627c4bbbb2200a64aae1456c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 06:13:13 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53c9ae953af4c84206ac83106fe5548b474b7b24627c4bbbb2200a64aae1456c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 06:13:13 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53c9ae953af4c84206ac83106fe5548b474b7b24627c4bbbb2200a64aae1456c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  8 06:13:13 np0005475493 podman[272630]: 2025-10-08 10:13:13.846122301 +0000 UTC m=+0.026417119 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 06:13:13 np0005475493 podman[272630]: 2025-10-08 10:13:13.954787711 +0000 UTC m=+0.135082529 container init 83ab6173cc5181d744a6617084fb0d1b80c465bb1ebbabf035ab425ce0686390 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_edison, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  8 06:13:13 np0005475493 podman[272630]: 2025-10-08 10:13:13.96753186 +0000 UTC m=+0.147826658 container start 83ab6173cc5181d744a6617084fb0d1b80c465bb1ebbabf035ab425ce0686390 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_edison, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1)
Oct  8 06:13:13 np0005475493 podman[272630]: 2025-10-08 10:13:13.971788569 +0000 UTC m=+0.152083387 container attach 83ab6173cc5181d744a6617084fb0d1b80c465bb1ebbabf035ab425ce0686390 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_edison, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Oct  8 06:13:14 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:13:14 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v859: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 331 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct  8 06:13:14 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:13:14 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac94004f90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:13:14 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:13:14 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac94004f90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:13:14 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:13:14 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:13:14 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:13:14.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:13:14 np0005475493 lvm[272722]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct  8 06:13:14 np0005475493 lvm[272722]: VG ceph_vg0 finished
Oct  8 06:13:14 np0005475493 quirky_edison[272646]: {}
Oct  8 06:13:14 np0005475493 systemd[1]: libpod-83ab6173cc5181d744a6617084fb0d1b80c465bb1ebbabf035ab425ce0686390.scope: Deactivated successfully.
Oct  8 06:13:14 np0005475493 podman[272630]: 2025-10-08 10:13:14.763651432 +0000 UTC m=+0.943946260 container died 83ab6173cc5181d744a6617084fb0d1b80c465bb1ebbabf035ab425ce0686390 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_edison, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Oct  8 06:13:14 np0005475493 systemd[1]: libpod-83ab6173cc5181d744a6617084fb0d1b80c465bb1ebbabf035ab425ce0686390.scope: Consumed 1.316s CPU time.
Oct  8 06:13:14 np0005475493 systemd[1]: var-lib-containers-storage-overlay-53c9ae953af4c84206ac83106fe5548b474b7b24627c4bbbb2200a64aae1456c-merged.mount: Deactivated successfully.
Oct  8 06:13:14 np0005475493 podman[272630]: 2025-10-08 10:13:14.804796804 +0000 UTC m=+0.985091602 container remove 83ab6173cc5181d744a6617084fb0d1b80c465bb1ebbabf035ab425ce0686390 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_edison, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  8 06:13:14 np0005475493 systemd[1]: libpod-conmon-83ab6173cc5181d744a6617084fb0d1b80c465bb1ebbabf035ab425ce0686390.scope: Deactivated successfully.
Oct  8 06:13:14 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  8 06:13:14 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:13:14 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  8 06:13:14 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:13:15 np0005475493 nova_compute[262220]: 2025-10-08 10:13:15.008 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:13:15 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:13:15 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca000a3f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:13:15 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:13:15 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:13:15 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:13:15 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:13:15 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:13:15.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:13:15 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:13:15] "GET /metrics HTTP/1.1" 200 48471 "" "Prometheus/2.51.0"
Oct  8 06:13:15 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:13:15] "GET /metrics HTTP/1.1" 200 48471 "" "Prometheus/2.51.0"
Oct  8 06:13:16 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v860: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 5.8 KiB/s rd, 13 KiB/s wr, 0 op/s
Oct  8 06:13:16 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:13:16 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c004970 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:13:16 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:13:16 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac6c004000 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:13:16 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:13:16 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:13:16 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:13:16.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:13:16 np0005475493 podman[272765]: 2025-10-08 10:13:16.922668576 +0000 UTC m=+0.074524498 container health_status 1ece418ccdf19a8019e8be8e665c938dc3c333817a211973536e2416f095c311 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, tcib_managed=true, org.label-schema.build-date=20251001)
Oct  8 06:13:16 np0005475493 podman[272766]: 2025-10-08 10:13:16.94621472 +0000 UTC m=+0.098616971 container health_status 96c17e36c3e57b043a764fc4d64e20610b8683e4881cc1857643eca7aeb37784 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, 
config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  8 06:13:17 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:13:17 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac94004f90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:13:17 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:13:17.143Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:13:17 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:13:17 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:13:17 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:13:17.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:13:17 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  8 06:13:17 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  8 06:13:17 np0005475493 nova_compute[262220]: 2025-10-08 10:13:17.877 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:13:17 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 06:13:17 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 06:13:18 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 06:13:18 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 06:13:18 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 06:13:18 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 06:13:18 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v861: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 5.8 KiB/s rd, 13 KiB/s wr, 0 op/s
Oct  8 06:13:18 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:13:18 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca000a3f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:13:18 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:13:18 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c004970 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:13:18 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:13:18 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:13:18 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:13:18.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:13:19 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:13:19 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c004970 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:13:19 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:13:19 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:13:19 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:13:19 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:13:19.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:13:20 np0005475493 nova_compute[262220]: 2025-10-08 10:13:20.010 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:13:20 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v862: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 6.0 KiB/s rd, 15 KiB/s wr, 1 op/s
Oct  8 06:13:20 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:13:20 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac94004f90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:13:20 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:13:20 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca000a3f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:13:20 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:13:20 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:13:20 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:13:20.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:13:21 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:13:21 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac900029c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:13:21 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:13:21 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:13:21 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:13:21.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:13:22 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v863: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 2.3 KiB/s wr, 0 op/s
Oct  8 06:13:22 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:13:22 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c004970 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:13:22 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:13:22 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac94004f90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:13:22 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:13:22 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:13:22 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:13:22.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:13:22 np0005475493 nova_compute[262220]: 2025-10-08 10:13:22.879 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:13:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:13:23 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac94004f90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:13:23 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:13:23 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:13:23 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:13:23.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:13:24 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:13:24 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v864: 353 pgs: 353 active+clean; 167 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct  8 06:13:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:13:24 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac900029c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:13:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:13:24 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c004970 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:13:24 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:13:24 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:13:24 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:13:24.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:13:25 np0005475493 nova_compute[262220]: 2025-10-08 10:13:25.052 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:13:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:13:25 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca000a3f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:13:25 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:13:25 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:13:25 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:13:25.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:13:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:13:25] "GET /metrics HTTP/1.1" 200 48470 "" "Prometheus/2.51.0"
Oct  8 06:13:25 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:13:25] "GET /metrics HTTP/1.1" 200 48470 "" "Prometheus/2.51.0"
Oct  8 06:13:26 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v865: 353 pgs: 353 active+clean; 167 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct  8 06:13:26 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:13:26 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac94004f90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:13:26 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:13:26.352 163175 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=7, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '2a:d6:ca', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'b2:8b:1e:40:84:a3'}, ipsec=False) old=SB_Global(nb_cfg=6) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  8 06:13:26 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:13:26.353 163175 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  8 06:13:26 np0005475493 nova_compute[262220]: 2025-10-08 10:13:26.354 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:13:26 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:13:26 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac900029c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:13:26 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:13:26 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:13:26 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:13:26.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:13:27 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:13:27 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c004970 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:13:27 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:13:27.144Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:13:27 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:13:27 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:13:27 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:13:27.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:13:27 np0005475493 nova_compute[262220]: 2025-10-08 10:13:27.881 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:13:28 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v866: 353 pgs: 353 active+clean; 167 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct  8 06:13:28 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:13:28 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca000a3f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:13:28 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:13:28 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac94004f90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:13:28 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:13:28 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:13:28 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:13:28.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:13:28 np0005475493 podman[272843]: 2025-10-08 10:13:28.904232924 +0000 UTC m=+0.068357666 container health_status 2ac58bf9d2c7421f24478941b0f23af12cc30298ac84467db16bf52ecb1157c3 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  8 06:13:29 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:13:29 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac900029c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:13:29 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:13:29 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:13:29 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:13:29 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:13:29.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:13:30 np0005475493 nova_compute[262220]: 2025-10-08 10:13:30.055 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:13:30 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v867: 353 pgs: 353 active+clean; 167 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Oct  8 06:13:30 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:13:30 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c004970 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:13:30 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:13:30 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca000a3f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:13:30 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:13:30 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:13:30 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:13:30.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:13:31 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:13:31 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac94004f90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:13:31 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:13:31 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:13:31 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:13:31.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:13:32 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v868: 353 pgs: 353 active+clean; 167 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Oct  8 06:13:32 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:13:32 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90000cc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:13:32 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:13:32 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c004b10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:13:32 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:13:32 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:13:32 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:13:32.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:13:32 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  8 06:13:32 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  8 06:13:32 np0005475493 nova_compute[262220]: 2025-10-08 10:13:32.882 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:13:33 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:13:33 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca000a3f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:13:33 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:13:33 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:13:33 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:13:33.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:13:34 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:13:34 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v869: 353 pgs: 353 active+clean; 167 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Oct  8 06:13:34 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:13:34 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac94004f90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:13:34 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:13:34.354 163175 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=26869918-b723-425c-a2e1-0d697f3d0fec, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '7'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  8 06:13:34 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:13:34 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90000cc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:13:34 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:13:34 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:13:34 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:13:34.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:13:35 np0005475493 nova_compute[262220]: 2025-10-08 10:13:35.056 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:13:35 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:13:35 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c004b30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:13:35 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:13:35 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:13:35 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:13:35.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:13:35 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:13:35] "GET /metrics HTTP/1.1" 200 48481 "" "Prometheus/2.51.0"
Oct  8 06:13:35 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:13:35] "GET /metrics HTTP/1.1" 200 48481 "" "Prometheus/2.51.0"
Oct  8 06:13:36 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v870: 353 pgs: 353 active+clean; 167 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 74 op/s
Oct  8 06:13:36 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:13:36 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca000a3f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:13:36 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:13:36 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac94004f90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:13:36 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:13:36 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:13:36 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:13:36.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:13:37 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:13:37 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90000cc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:13:37 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:13:37.144Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct  8 06:13:37 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:13:37.145Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:13:37 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:13:37 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:13:37 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:13:37.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:13:37 np0005475493 nova_compute[262220]: 2025-10-08 10:13:37.935 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:13:38 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v871: 353 pgs: 353 active+clean; 167 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 74 op/s
Oct  8 06:13:38 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:13:38 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c004b50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:13:38 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:13:38 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca000a3f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:13:38 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:13:38 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:13:38 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:13:38.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:13:39 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:13:39 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac94004f90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:13:39 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:13:39 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:13:39 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:13:39 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:13:39.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:13:39 np0005475493 podman[272875]: 2025-10-08 10:13:39.971730395 +0000 UTC m=+0.125011888 container health_status 750e81e9592502f2fa35d047c8ae9bd75eb38ed415148db558454f5281397e7f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct  8 06:13:40 np0005475493 nova_compute[262220]: 2025-10-08 10:13:40.058 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:13:40 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v872: 353 pgs: 353 active+clean; 188 MiB data, 347 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.1 MiB/s wr, 113 op/s
Oct  8 06:13:40 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:13:40 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90000cc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:13:40 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:13:40 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c004b70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:13:40 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:13:40 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:13:40 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:13:40.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:13:40 np0005475493 nova_compute[262220]: 2025-10-08 10:13:40.901 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:13:41 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:13:41 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca000a3f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:13:41 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:13:41 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:13:41 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:13:41.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:13:41 np0005475493 nova_compute[262220]: 2025-10-08 10:13:41.886 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:13:42 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v873: 353 pgs: 353 active+clean; 188 MiB data, 347 MiB used, 60 GiB / 60 GiB avail; 129 KiB/s rd, 2.0 MiB/s wr, 39 op/s
Oct  8 06:13:42 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:13:42 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac94004f90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:13:42 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:13:42 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90000cc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:13:42 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:13:42 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000034s ======
Oct  8 06:13:42 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:13:42.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct  8 06:13:42 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-crash-compute-0[78863]: ERROR:ceph-crash:Error scraping /var/lib/ceph/crash: [Errno 13] Permission denied: '/var/lib/ceph/crash'
Oct  8 06:13:42 np0005475493 nova_compute[262220]: 2025-10-08 10:13:42.886 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:13:42 np0005475493 nova_compute[262220]: 2025-10-08 10:13:42.887 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:13:42 np0005475493 nova_compute[262220]: 2025-10-08 10:13:42.917 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  8 06:13:42 np0005475493 nova_compute[262220]: 2025-10-08 10:13:42.918 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  8 06:13:42 np0005475493 nova_compute[262220]: 2025-10-08 10:13:42.918 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  8 06:13:42 np0005475493 nova_compute[262220]: 2025-10-08 10:13:42.918 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  8 06:13:42 np0005475493 nova_compute[262220]: 2025-10-08 10:13:42.918 2 DEBUG oslo_concurrency.processutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  8 06:13:42 np0005475493 nova_compute[262220]: 2025-10-08 10:13:42.942 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:13:43 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:13:43 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c004b90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:13:43 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:13:43 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:13:43 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:13:43.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:13:43 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct  8 06:13:43 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/629483059' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  8 06:13:43 np0005475493 nova_compute[262220]: 2025-10-08 10:13:43.450 2 DEBUG oslo_concurrency.processutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.532s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  8 06:13:43 np0005475493 nova_compute[262220]: 2025-10-08 10:13:43.606 2 WARNING nova.virt.libvirt.driver [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  8 06:13:43 np0005475493 nova_compute[262220]: 2025-10-08 10:13:43.607 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4579MB free_disk=59.8980827331543GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  8 06:13:43 np0005475493 nova_compute[262220]: 2025-10-08 10:13:43.607 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  8 06:13:43 np0005475493 nova_compute[262220]: 2025-10-08 10:13:43.607 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  8 06:13:43 np0005475493 nova_compute[262220]: 2025-10-08 10:13:43.660 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  8 06:13:43 np0005475493 nova_compute[262220]: 2025-10-08 10:13:43.661 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  8 06:13:43 np0005475493 nova_compute[262220]: 2025-10-08 10:13:43.677 2 DEBUG oslo_concurrency.processutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  8 06:13:44 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct  8 06:13:44 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1760045672' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  8 06:13:44 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:13:44 np0005475493 nova_compute[262220]: 2025-10-08 10:13:44.137 2 DEBUG oslo_concurrency.processutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  8 06:13:44 np0005475493 nova_compute[262220]: 2025-10-08 10:13:44.141 2 DEBUG nova.compute.provider_tree [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Inventory has not changed in ProviderTree for provider: 62e4b021-d3ae-43f9-883d-805e2c7d21a2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  8 06:13:44 np0005475493 nova_compute[262220]: 2025-10-08 10:13:44.156 2 DEBUG nova.scheduler.client.report [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Inventory has not changed for provider 62e4b021-d3ae-43f9-883d-805e2c7d21a2 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  8 06:13:44 np0005475493 nova_compute[262220]: 2025-10-08 10:13:44.157 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  8 06:13:44 np0005475493 nova_compute[262220]: 2025-10-08 10:13:44.158 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.550s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  8 06:13:44 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v874: 353 pgs: 353 active+clean; 200 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 206 KiB/s rd, 2.1 MiB/s wr, 58 op/s
Oct  8 06:13:44 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:13:44 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca000a3f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:13:44 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:13:44 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac94004f90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:13:44 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:13:44 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:13:44 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:13:44.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:13:45 np0005475493 nova_compute[262220]: 2025-10-08 10:13:45.060 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:13:45 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:13:45 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90000cc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:13:45 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:13:45 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:13:45 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:13:45.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:13:45 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:13:45] "GET /metrics HTTP/1.1" 200 48481 "" "Prometheus/2.51.0"
Oct  8 06:13:45 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:13:45] "GET /metrics HTTP/1.1" 200 48481 "" "Prometheus/2.51.0"
Oct  8 06:13:46 np0005475493 nova_compute[262220]: 2025-10-08 10:13:46.158 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:13:46 np0005475493 nova_compute[262220]: 2025-10-08 10:13:46.159 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  8 06:13:46 np0005475493 nova_compute[262220]: 2025-10-08 10:13:46.159 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  8 06:13:46 np0005475493 nova_compute[262220]: 2025-10-08 10:13:46.172 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  8 06:13:46 np0005475493 nova_compute[262220]: 2025-10-08 10:13:46.172 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:13:46 np0005475493 nova_compute[262220]: 2025-10-08 10:13:46.173 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:13:46 np0005475493 nova_compute[262220]: 2025-10-08 10:13:46.174 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:13:46 np0005475493 nova_compute[262220]: 2025-10-08 10:13:46.174 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:13:46 np0005475493 nova_compute[262220]: 2025-10-08 10:13:46.174 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  8 06:13:46 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v875: 353 pgs: 353 active+clean; 200 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 206 KiB/s rd, 2.1 MiB/s wr, 58 op/s
Oct  8 06:13:46 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:13:46 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c004bb0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:13:46 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:13:46 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca000a3f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:13:46 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:13:46 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:13:46 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:13:46.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:13:47 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:13:47 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac78002c90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:13:47 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:13:47.146Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct  8 06:13:47 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:13:47.147Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:13:47 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:13:47 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:13:47 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:13:47.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:13:47 np0005475493 podman[272978]: 2025-10-08 10:13:47.595057385 +0000 UTC m=+0.055715621 container health_status 1ece418ccdf19a8019e8be8e665c938dc3c333817a211973536e2416f095c311 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  8 06:13:47 np0005475493 podman[272979]: 2025-10-08 10:13:47.595617294 +0000 UTC m=+0.053336543 container health_status 96c17e36c3e57b043a764fc4d64e20610b8683e4881cc1857643eca7aeb37784 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Oct  8 06:13:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] Optimize plan auto_2025-10-08_10:13:47
Oct  8 06:13:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  8 06:13:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] do_upmap
Oct  8 06:13:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] pools ['images', '.mgr', 'volumes', 'default.rgw.control', 'default.rgw.meta', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'vms', '.nfs', '.rgw.root', 'default.rgw.log', 'backups']
Oct  8 06:13:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] prepared 0/10 upmap changes
Oct  8 06:13:47 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  8 06:13:47 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  8 06:13:47 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 06:13:47 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 06:13:47 np0005475493 nova_compute[262220]: 2025-10-08 10:13:47.939 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:13:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] _maybe_adjust
Oct  8 06:13:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:13:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  8 06:13:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:13:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0015163204279807253 of space, bias 1.0, pg target 0.4548961283942176 quantized to 32 (current 32)
Oct  8 06:13:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:13:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 06:13:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:13:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 06:13:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:13:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Oct  8 06:13:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:13:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  8 06:13:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:13:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 06:13:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:13:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct  8 06:13:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:13:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  8 06:13:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:13:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 06:13:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:13:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  8 06:13:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:13:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  8 06:13:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  8 06:13:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  8 06:13:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  8 06:13:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  8 06:13:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  8 06:13:48 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 06:13:48 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 06:13:48 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 06:13:48 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 06:13:48 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v876: 353 pgs: 353 active+clean; 200 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 206 KiB/s rd, 2.1 MiB/s wr, 58 op/s
Oct  8 06:13:48 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:13:48 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90000cc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:13:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  8 06:13:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  8 06:13:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  8 06:13:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  8 06:13:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  8 06:13:48 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:13:48 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c004bd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:13:48 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:13:48 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:13:48 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:13:48.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:13:49 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:13:49 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca000a3f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:13:49 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:13:49 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:13:49 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000034s ======
Oct  8 06:13:49 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:13:49.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct  8 06:13:50 np0005475493 nova_compute[262220]: 2025-10-08 10:13:50.106 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:13:50 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v877: 353 pgs: 353 active+clean; 144 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 217 KiB/s rd, 2.1 MiB/s wr, 74 op/s
Oct  8 06:13:50 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:13:50 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac78002c90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:13:50 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:13:50 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90000cc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:13:50 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:13:50 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:13:50 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:13:50.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:13:51 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:13:51 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac6c002600 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:13:51 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:13:51 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:13:51 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:13:51.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:13:52 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v878: 353 pgs: 353 active+clean; 144 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 87 KiB/s rd, 107 KiB/s wr, 35 op/s
Oct  8 06:13:52 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:13:52 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca000a3f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:13:52 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:13:52 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac78002c90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:13:52 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:13:52 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:13:52 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:13:52.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:13:52 np0005475493 nova_compute[262220]: 2025-10-08 10:13:52.941 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:13:53 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:13:53 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90000cc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:13:53 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:13:53 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:13:53 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:13:53.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:13:54 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:13:54 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v879: 353 pgs: 353 active+clean; 41 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 105 KiB/s rd, 108 KiB/s wr, 63 op/s
Oct  8 06:13:54 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:13:54 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac6c002600 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:13:54 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:13:54 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca000a3f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:13:54 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:13:54 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 06:13:54 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:13:54.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 06:13:55 np0005475493 nova_compute[262220]: 2025-10-08 10:13:55.107 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:13:55 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:13:55 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac78002c90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:13:55 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:13:55 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:13:55 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:13:55.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:13:55 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:13:55] "GET /metrics HTTP/1.1" 200 48475 "" "Prometheus/2.51.0"
Oct  8 06:13:55 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:13:55] "GET /metrics HTTP/1.1" 200 48475 "" "Prometheus/2.51.0"
Oct  8 06:13:56 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v880: 353 pgs: 353 active+clean; 41 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 13 KiB/s wr, 44 op/s
Oct  8 06:13:56 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:13:56 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90000cc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:13:56 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-haproxy-nfs-cephfs-compute-0-cwhopp[96588]: [WARNING] 280/101356 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 1ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct  8 06:13:56 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:13:56 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac6c002600 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:13:56 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:13:56 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 06:13:56 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:13:56.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 06:13:57 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:13:57 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca000a3f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:13:57 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:13:57.147Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:13:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:13:57.411 163175 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  8 06:13:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:13:57.412 163175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  8 06:13:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:13:57.413 163175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  8 06:13:57 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:13:57 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:13:57 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:13:57.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:13:57 np0005475493 nova_compute[262220]: 2025-10-08 10:13:57.944 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:13:58 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v881: 353 pgs: 353 active+clean; 41 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 13 KiB/s wr, 44 op/s
Oct  8 06:13:58 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:13:58 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac78002c90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:13:58 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:13:58 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90000cc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:13:58 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:13:58 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:13:58 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:13:58.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:13:59 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:13:59 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac6c002600 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:13:59 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:13:59 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:13:59 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:13:59 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:13:59.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:13:59 np0005475493 podman[273029]: 2025-10-08 10:13:59.900275672 +0000 UTC m=+0.059751096 container health_status 2ac58bf9d2c7421f24478941b0f23af12cc30298ac84467db16bf52ecb1157c3 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct  8 06:14:00 np0005475493 nova_compute[262220]: 2025-10-08 10:14:00.110 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:14:00 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v882: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 14 KiB/s wr, 56 op/s
Oct  8 06:14:00 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:14:00 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca000a3f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:14:00 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:14:00 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac78002c90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:14:00 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:14:00 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:14:00 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:14:00.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:14:01 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:14:01 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90000cc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:14:01 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:14:01 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:14:01 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:14:01.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:14:02 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v883: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 1.7 KiB/s wr, 39 op/s
Oct  8 06:14:02 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:14:02 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac6c002600 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:14:02 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:14:02 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca000a3f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:14:02 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:14:02 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:14:02 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:14:02.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:14:02 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  8 06:14:02 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  8 06:14:02 np0005475493 nova_compute[262220]: 2025-10-08 10:14:02.946 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:14:03 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:14:03 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac78002c90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:14:03 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:14:03 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:14:03 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:14:03.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:14:04 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:14:04 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v884: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 1.8 KiB/s wr, 40 op/s
Oct  8 06:14:04 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:14:04 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90000cc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:14:04 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:14:04 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac6c002600 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:14:04 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:14:04 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:14:04 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:14:04.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:14:04 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:14:04 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  8 06:14:05 np0005475493 nova_compute[262220]: 2025-10-08 10:14:05.113 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:14:05 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:14:05 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca000a3f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:14:05 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:14:05 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 06:14:05 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:14:05.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 06:14:05 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:14:05] "GET /metrics HTTP/1.1" 200 48460 "" "Prometheus/2.51.0"
Oct  8 06:14:05 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:14:05] "GET /metrics HTTP/1.1" 200 48460 "" "Prometheus/2.51.0"
Oct  8 06:14:06 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v885: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 9.2 KiB/s rd, 682 B/s wr, 12 op/s
Oct  8 06:14:06 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:14:06 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac78002c90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:14:06 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:14:06 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90000cc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:14:06 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:14:06 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:14:06 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:14:06.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:14:07 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:14:07.054 163175 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=8, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '2a:d6:ca', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'b2:8b:1e:40:84:a3'}, ipsec=False) old=SB_Global(nb_cfg=7) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  8 06:14:07 np0005475493 nova_compute[262220]: 2025-10-08 10:14:07.055 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:14:07 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:14:07.055 163175 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  8 06:14:07 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:14:07.056 163175 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=26869918-b723-425c-a2e1-0d697f3d0fec, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '8'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  8 06:14:07 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:14:07 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac6c002600 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:14:07 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:14:07.147Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct  8 06:14:07 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:14:07.148Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:14:07 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:14:07 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:14:07 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:14:07.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:14:07 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:14:07 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  8 06:14:07 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:14:07 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 06:14:07 np0005475493 nova_compute[262220]: 2025-10-08 10:14:07.948 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:14:08 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v886: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 9.2 KiB/s rd, 682 B/s wr, 12 op/s
Oct  8 06:14:08 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:14:08 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca000a3f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:14:08 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:14:08 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac78002c90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:14:08 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:14:08 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:14:08 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:14:08.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:14:09 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:14:09 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90000cc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:14:09 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:14:09 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:14:09 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 06:14:09 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:14:09.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 06:14:10 np0005475493 nova_compute[262220]: 2025-10-08 10:14:10.116 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:14:10 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v887: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 1023 B/s wr, 13 op/s
Oct  8 06:14:10 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:14:10 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac6c002600 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:14:10 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:14:10 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca000a3f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:14:10 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:14:10 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:14:10 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:14:10.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:14:10 np0005475493 podman[273085]: 2025-10-08 10:14:10.979161017 +0000 UTC m=+0.138364057 container health_status 750e81e9592502f2fa35d047c8ae9bd75eb38ed415148db558454f5281397e7f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true)
Oct  8 06:14:11 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:14:11 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac78002c90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:14:11 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:14:11 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 06:14:11 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:14:11.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 06:14:12 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v888: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Oct  8 06:14:12 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:14:12 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90000cc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:14:12 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:14:12 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac6c002600 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:14:12 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:14:12 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:14:12 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:14:12.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:14:12 np0005475493 nova_compute[262220]: 2025-10-08 10:14:12.951 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:14:13 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:14:13 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca000a3f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:14:13 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:14:13 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:14:13 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:14:13.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:14:14 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:14:14 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v889: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 1 op/s
Oct  8 06:14:14 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:14:14 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac78002c90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:14:14 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:14:14 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90000cc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:14:14 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:14:14 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:14:14 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:14:14.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:14:15 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:14:15 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac6c002600 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:14:15 np0005475493 nova_compute[262220]: 2025-10-08 10:14:15.165 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:14:15 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:14:15 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:14:15 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:14:15.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:14:15 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:14:15] "GET /metrics HTTP/1.1" 200 48460 "" "Prometheus/2.51.0"
Oct  8 06:14:15 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:14:15] "GET /metrics HTTP/1.1" 200 48460 "" "Prometheus/2.51.0"
Oct  8 06:14:15 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct  8 06:14:15 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:14:15 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct  8 06:14:15 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:14:15 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  8 06:14:15 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:14:15 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  8 06:14:15 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:14:16 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v890: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 341 B/s wr, 1 op/s
Oct  8 06:14:16 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:14:16 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca000a3f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:14:16 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  8 06:14:16 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  8 06:14:16 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct  8 06:14:16 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  8 06:14:16 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct  8 06:14:16 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:14:16 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct  8 06:14:16 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:14:16 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct  8 06:14:16 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  8 06:14:16 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct  8 06:14:16 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  8 06:14:16 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  8 06:14:16 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  8 06:14:16 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:14:16 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac78002c90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:14:16 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:14:16 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:14:16 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:14:16.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:14:16 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:14:16 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:14:16 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:14:16 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:14:16 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  8 06:14:16 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:14:16 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:14:16 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  8 06:14:17 np0005475493 podman[273359]: 2025-10-08 10:14:17.135476817 +0000 UTC m=+0.048639387 container create 010834d370fbda2d6f44a973093f9b5abd950550e12fd412ff9918597f281720 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_engelbart, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Oct  8 06:14:17 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:14:17 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90000cc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:14:17 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:14:17.149Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct  8 06:14:17 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:14:17.149Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct  8 06:14:17 np0005475493 systemd[1]: Started libpod-conmon-010834d370fbda2d6f44a973093f9b5abd950550e12fd412ff9918597f281720.scope.
Oct  8 06:14:17 np0005475493 podman[273359]: 2025-10-08 10:14:17.111706282 +0000 UTC m=+0.024868872 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 06:14:17 np0005475493 systemd[1]: Started libcrun container.
Oct  8 06:14:17 np0005475493 podman[273359]: 2025-10-08 10:14:17.229788796 +0000 UTC m=+0.142951386 container init 010834d370fbda2d6f44a973093f9b5abd950550e12fd412ff9918597f281720 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_engelbart, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  8 06:14:17 np0005475493 podman[273359]: 2025-10-08 10:14:17.241992449 +0000 UTC m=+0.155155019 container start 010834d370fbda2d6f44a973093f9b5abd950550e12fd412ff9918597f281720 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_engelbart, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Oct  8 06:14:17 np0005475493 podman[273359]: 2025-10-08 10:14:17.247154625 +0000 UTC m=+0.160317195 container attach 010834d370fbda2d6f44a973093f9b5abd950550e12fd412ff9918597f281720 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_engelbart, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  8 06:14:17 np0005475493 cranky_engelbart[273377]: 167 167
Oct  8 06:14:17 np0005475493 systemd[1]: libpod-010834d370fbda2d6f44a973093f9b5abd950550e12fd412ff9918597f281720.scope: Deactivated successfully.
Oct  8 06:14:17 np0005475493 podman[273359]: 2025-10-08 10:14:17.249391088 +0000 UTC m=+0.162553658 container died 010834d370fbda2d6f44a973093f9b5abd950550e12fd412ff9918597f281720 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_engelbart, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  8 06:14:17 np0005475493 systemd[1]: var-lib-containers-storage-overlay-f21f1b8a8cdf6c1fff0dc6f5deca7c4031f2458fe4583634579e424ed965c287-merged.mount: Deactivated successfully.
Oct  8 06:14:17 np0005475493 podman[273359]: 2025-10-08 10:14:17.294531521 +0000 UTC m=+0.207694111 container remove 010834d370fbda2d6f44a973093f9b5abd950550e12fd412ff9918597f281720 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_engelbart, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True)
Oct  8 06:14:17 np0005475493 systemd[1]: libpod-conmon-010834d370fbda2d6f44a973093f9b5abd950550e12fd412ff9918597f281720.scope: Deactivated successfully.
Oct  8 06:14:17 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:14:17 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:14:17 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:14:17.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:14:17 np0005475493 podman[273403]: 2025-10-08 10:14:17.47642666 +0000 UTC m=+0.040986941 container create 6588d4daa366a7181f96ca9c673250e73182d4c9336cdad07b52be455ebe7aa4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_dijkstra, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2)
Oct  8 06:14:17 np0005475493 systemd[1]: Started libpod-conmon-6588d4daa366a7181f96ca9c673250e73182d4c9336cdad07b52be455ebe7aa4.scope.
Oct  8 06:14:17 np0005475493 systemd[1]: Started libcrun container.
Oct  8 06:14:17 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83b3759d586eddd89a9bb90905f5b57ba083701a22f070c801f50a0c851fea11/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  8 06:14:17 np0005475493 podman[273403]: 2025-10-08 10:14:17.45808752 +0000 UTC m=+0.022647821 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 06:14:17 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83b3759d586eddd89a9bb90905f5b57ba083701a22f070c801f50a0c851fea11/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 06:14:17 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83b3759d586eddd89a9bb90905f5b57ba083701a22f070c801f50a0c851fea11/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 06:14:17 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83b3759d586eddd89a9bb90905f5b57ba083701a22f070c801f50a0c851fea11/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  8 06:14:17 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83b3759d586eddd89a9bb90905f5b57ba083701a22f070c801f50a0c851fea11/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  8 06:14:17 np0005475493 podman[273403]: 2025-10-08 10:14:17.567276958 +0000 UTC m=+0.131837239 container init 6588d4daa366a7181f96ca9c673250e73182d4c9336cdad07b52be455ebe7aa4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_dijkstra, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Oct  8 06:14:17 np0005475493 podman[273403]: 2025-10-08 10:14:17.577893689 +0000 UTC m=+0.142453970 container start 6588d4daa366a7181f96ca9c673250e73182d4c9336cdad07b52be455ebe7aa4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_dijkstra, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct  8 06:14:17 np0005475493 podman[273403]: 2025-10-08 10:14:17.581602769 +0000 UTC m=+0.146163070 container attach 6588d4daa366a7181f96ca9c673250e73182d4c9336cdad07b52be455ebe7aa4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_dijkstra, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Oct  8 06:14:17 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  8 06:14:17 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  8 06:14:17 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:14:17 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 06:14:17 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 06:14:17 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 06:14:17 np0005475493 podman[273432]: 2025-10-08 10:14:17.926633505 +0000 UTC m=+0.069675926 container health_status 96c17e36c3e57b043a764fc4d64e20610b8683e4881cc1857643eca7aeb37784 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct  8 06:14:17 np0005475493 charming_dijkstra[273420]: --> passed data devices: 0 physical, 1 LVM
Oct  8 06:14:17 np0005475493 charming_dijkstra[273420]: --> All data devices are unavailable
Oct  8 06:14:17 np0005475493 podman[273431]: 2025-10-08 10:14:17.932427932 +0000 UTC m=+0.075287868 container health_status 1ece418ccdf19a8019e8be8e665c938dc3c333817a211973536e2416f095c311 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd)
Oct  8 06:14:17 np0005475493 nova_compute[262220]: 2025-10-08 10:14:17.952 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:14:17 np0005475493 systemd[1]: libpod-6588d4daa366a7181f96ca9c673250e73182d4c9336cdad07b52be455ebe7aa4.scope: Deactivated successfully.
Oct  8 06:14:17 np0005475493 podman[273403]: 2025-10-08 10:14:17.970420775 +0000 UTC m=+0.534981066 container died 6588d4daa366a7181f96ca9c673250e73182d4c9336cdad07b52be455ebe7aa4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_dijkstra, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Oct  8 06:14:18 np0005475493 systemd[1]: var-lib-containers-storage-overlay-83b3759d586eddd89a9bb90905f5b57ba083701a22f070c801f50a0c851fea11-merged.mount: Deactivated successfully.
Oct  8 06:14:18 np0005475493 podman[273403]: 2025-10-08 10:14:18.030261913 +0000 UTC m=+0.594822194 container remove 6588d4daa366a7181f96ca9c673250e73182d4c9336cdad07b52be455ebe7aa4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_dijkstra, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  8 06:14:18 np0005475493 systemd[1]: libpod-conmon-6588d4daa366a7181f96ca9c673250e73182d4c9336cdad07b52be455ebe7aa4.scope: Deactivated successfully.
Oct  8 06:14:18 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 06:14:18 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 06:14:18 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 06:14:18 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 06:14:18 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v891: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 341 B/s wr, 1 op/s
Oct  8 06:14:18 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:14:18 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90000cc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:14:18 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:14:18 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca000a3f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:14:18 np0005475493 podman[273577]: 2025-10-08 10:14:18.681678778 +0000 UTC m=+0.055051664 container create 98aab81ff4d0f33449e09b33dfbb2f87989d972d6cbad579e232cfef34e22f7f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_kapitsa, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct  8 06:14:18 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:14:18 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:14:18 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:14:18.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:14:18 np0005475493 systemd[1]: Started libpod-conmon-98aab81ff4d0f33449e09b33dfbb2f87989d972d6cbad579e232cfef34e22f7f.scope.
Oct  8 06:14:18 np0005475493 systemd[1]: Started libcrun container.
Oct  8 06:14:18 np0005475493 podman[273577]: 2025-10-08 10:14:18.652763426 +0000 UTC m=+0.026136332 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 06:14:18 np0005475493 podman[273577]: 2025-10-08 10:14:18.766234102 +0000 UTC m=+0.139607008 container init 98aab81ff4d0f33449e09b33dfbb2f87989d972d6cbad579e232cfef34e22f7f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_kapitsa, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct  8 06:14:18 np0005475493 podman[273577]: 2025-10-08 10:14:18.776367008 +0000 UTC m=+0.149739914 container start 98aab81ff4d0f33449e09b33dfbb2f87989d972d6cbad579e232cfef34e22f7f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_kapitsa, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Oct  8 06:14:18 np0005475493 podman[273577]: 2025-10-08 10:14:18.780069538 +0000 UTC m=+0.153442444 container attach 98aab81ff4d0f33449e09b33dfbb2f87989d972d6cbad579e232cfef34e22f7f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_kapitsa, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Oct  8 06:14:18 np0005475493 flamboyant_kapitsa[273594]: 167 167
Oct  8 06:14:18 np0005475493 systemd[1]: libpod-98aab81ff4d0f33449e09b33dfbb2f87989d972d6cbad579e232cfef34e22f7f.scope: Deactivated successfully.
Oct  8 06:14:18 np0005475493 podman[273577]: 2025-10-08 10:14:18.784649205 +0000 UTC m=+0.158022091 container died 98aab81ff4d0f33449e09b33dfbb2f87989d972d6cbad579e232cfef34e22f7f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_kapitsa, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325)
Oct  8 06:14:18 np0005475493 systemd[1]: var-lib-containers-storage-overlay-bf572df37a33e01d090573b7fc0a768b59f1819511c38405fd5fed7f5c6f71d3-merged.mount: Deactivated successfully.
Oct  8 06:14:18 np0005475493 podman[273577]: 2025-10-08 10:14:18.823430494 +0000 UTC m=+0.196803380 container remove 98aab81ff4d0f33449e09b33dfbb2f87989d972d6cbad579e232cfef34e22f7f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_kapitsa, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  8 06:14:18 np0005475493 systemd[1]: libpod-conmon-98aab81ff4d0f33449e09b33dfbb2f87989d972d6cbad579e232cfef34e22f7f.scope: Deactivated successfully.
Oct  8 06:14:19 np0005475493 podman[273618]: 2025-10-08 10:14:19.002584875 +0000 UTC m=+0.043934995 container create 9aab5b6e364513c8f6779937e7d564ad530a1f1829480b7aa866b087110094ec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_kirch, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  8 06:14:19 np0005475493 systemd[1]: Started libpod-conmon-9aab5b6e364513c8f6779937e7d564ad530a1f1829480b7aa866b087110094ec.scope.
Oct  8 06:14:19 np0005475493 podman[273618]: 2025-10-08 10:14:18.9844074 +0000 UTC m=+0.025757540 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 06:14:19 np0005475493 systemd[1]: Started libcrun container.
Oct  8 06:14:19 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe7cdf5c83d3944c1fed05d4cf38a90175bf6578d0c1ee7129817e4982dcc133/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  8 06:14:19 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe7cdf5c83d3944c1fed05d4cf38a90175bf6578d0c1ee7129817e4982dcc133/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 06:14:19 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe7cdf5c83d3944c1fed05d4cf38a90175bf6578d0c1ee7129817e4982dcc133/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 06:14:19 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe7cdf5c83d3944c1fed05d4cf38a90175bf6578d0c1ee7129817e4982dcc133/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  8 06:14:19 np0005475493 podman[273618]: 2025-10-08 10:14:19.116186566 +0000 UTC m=+0.157536706 container init 9aab5b6e364513c8f6779937e7d564ad530a1f1829480b7aa866b087110094ec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_kirch, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Oct  8 06:14:19 np0005475493 podman[273618]: 2025-10-08 10:14:19.123888084 +0000 UTC m=+0.165238214 container start 9aab5b6e364513c8f6779937e7d564ad530a1f1829480b7aa866b087110094ec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_kirch, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Oct  8 06:14:19 np0005475493 podman[273618]: 2025-10-08 10:14:19.127652915 +0000 UTC m=+0.169003145 container attach 9aab5b6e364513c8f6779937e7d564ad530a1f1829480b7aa866b087110094ec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_kirch, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  8 06:14:19 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:14:19 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90000cc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:14:19 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:14:19 np0005475493 dazzling_kirch[273635]: {
Oct  8 06:14:19 np0005475493 dazzling_kirch[273635]:    "1": [
Oct  8 06:14:19 np0005475493 dazzling_kirch[273635]:        {
Oct  8 06:14:19 np0005475493 dazzling_kirch[273635]:            "devices": [
Oct  8 06:14:19 np0005475493 dazzling_kirch[273635]:                "/dev/loop3"
Oct  8 06:14:19 np0005475493 dazzling_kirch[273635]:            ],
Oct  8 06:14:19 np0005475493 dazzling_kirch[273635]:            "lv_name": "ceph_lv0",
Oct  8 06:14:19 np0005475493 dazzling_kirch[273635]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  8 06:14:19 np0005475493 dazzling_kirch[273635]:            "lv_size": "21470642176",
Oct  8 06:14:19 np0005475493 dazzling_kirch[273635]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=787292cc-8154-50c4-9e00-e9be3e817149,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=85fe3e7b-5e0f-4a19-934c-310215b2e933,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct  8 06:14:19 np0005475493 dazzling_kirch[273635]:            "lv_uuid": "znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj",
Oct  8 06:14:19 np0005475493 dazzling_kirch[273635]:            "name": "ceph_lv0",
Oct  8 06:14:19 np0005475493 dazzling_kirch[273635]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  8 06:14:19 np0005475493 dazzling_kirch[273635]:            "tags": {
Oct  8 06:14:19 np0005475493 dazzling_kirch[273635]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  8 06:14:19 np0005475493 dazzling_kirch[273635]:                "ceph.block_uuid": "znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj",
Oct  8 06:14:19 np0005475493 dazzling_kirch[273635]:                "ceph.cephx_lockbox_secret": "",
Oct  8 06:14:19 np0005475493 dazzling_kirch[273635]:                "ceph.cluster_fsid": "787292cc-8154-50c4-9e00-e9be3e817149",
Oct  8 06:14:19 np0005475493 dazzling_kirch[273635]:                "ceph.cluster_name": "ceph",
Oct  8 06:14:19 np0005475493 dazzling_kirch[273635]:                "ceph.crush_device_class": "",
Oct  8 06:14:19 np0005475493 dazzling_kirch[273635]:                "ceph.encrypted": "0",
Oct  8 06:14:19 np0005475493 dazzling_kirch[273635]:                "ceph.osd_fsid": "85fe3e7b-5e0f-4a19-934c-310215b2e933",
Oct  8 06:14:19 np0005475493 dazzling_kirch[273635]:                "ceph.osd_id": "1",
Oct  8 06:14:19 np0005475493 dazzling_kirch[273635]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  8 06:14:19 np0005475493 dazzling_kirch[273635]:                "ceph.type": "block",
Oct  8 06:14:19 np0005475493 dazzling_kirch[273635]:                "ceph.vdo": "0",
Oct  8 06:14:19 np0005475493 dazzling_kirch[273635]:                "ceph.with_tpm": "0"
Oct  8 06:14:19 np0005475493 dazzling_kirch[273635]:            },
Oct  8 06:14:19 np0005475493 dazzling_kirch[273635]:            "type": "block",
Oct  8 06:14:19 np0005475493 dazzling_kirch[273635]:            "vg_name": "ceph_vg0"
Oct  8 06:14:19 np0005475493 dazzling_kirch[273635]:        }
Oct  8 06:14:19 np0005475493 dazzling_kirch[273635]:    ]
Oct  8 06:14:19 np0005475493 dazzling_kirch[273635]: }
Oct  8 06:14:19 np0005475493 systemd[1]: libpod-9aab5b6e364513c8f6779937e7d564ad530a1f1829480b7aa866b087110094ec.scope: Deactivated successfully.
Oct  8 06:14:19 np0005475493 podman[273618]: 2025-10-08 10:14:19.422514343 +0000 UTC m=+0.463864473 container died 9aab5b6e364513c8f6779937e7d564ad530a1f1829480b7aa866b087110094ec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_kirch, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  8 06:14:19 np0005475493 systemd[1]: var-lib-containers-storage-overlay-fe7cdf5c83d3944c1fed05d4cf38a90175bf6578d0c1ee7129817e4982dcc133-merged.mount: Deactivated successfully.
Oct  8 06:14:19 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:14:19 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:14:19 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:14:19.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:14:19 np0005475493 podman[273618]: 2025-10-08 10:14:19.476017257 +0000 UTC m=+0.517367377 container remove 9aab5b6e364513c8f6779937e7d564ad530a1f1829480b7aa866b087110094ec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_kirch, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  8 06:14:19 np0005475493 systemd[1]: libpod-conmon-9aab5b6e364513c8f6779937e7d564ad530a1f1829480b7aa866b087110094ec.scope: Deactivated successfully.
Oct  8 06:14:20 np0005475493 podman[273749]: 2025-10-08 10:14:20.129268901 +0000 UTC m=+0.104551499 container create 271d06a340e5a6150bb005e670e855f98f4eebfbcdfdc45a5f45daa582d1c249 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_heyrovsky, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  8 06:14:20 np0005475493 podman[273749]: 2025-10-08 10:14:20.049277114 +0000 UTC m=+0.024559732 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 06:14:20 np0005475493 nova_compute[262220]: 2025-10-08 10:14:20.167 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:14:20 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v892: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 341 B/s wr, 1 op/s
Oct  8 06:14:20 np0005475493 systemd[1]: Started libpod-conmon-271d06a340e5a6150bb005e670e855f98f4eebfbcdfdc45a5f45daa582d1c249.scope.
Oct  8 06:14:20 np0005475493 systemd[1]: Started libcrun container.
Oct  8 06:14:20 np0005475493 podman[273749]: 2025-10-08 10:14:20.241883779 +0000 UTC m=+0.217166397 container init 271d06a340e5a6150bb005e670e855f98f4eebfbcdfdc45a5f45daa582d1c249 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_heyrovsky, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  8 06:14:20 np0005475493 podman[273749]: 2025-10-08 10:14:20.251217919 +0000 UTC m=+0.226500517 container start 271d06a340e5a6150bb005e670e855f98f4eebfbcdfdc45a5f45daa582d1c249 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_heyrovsky, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  8 06:14:20 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:14:20 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac94003540 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:14:20 np0005475493 podman[273749]: 2025-10-08 10:14:20.257425319 +0000 UTC m=+0.232707947 container attach 271d06a340e5a6150bb005e670e855f98f4eebfbcdfdc45a5f45daa582d1c249 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_heyrovsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  8 06:14:20 np0005475493 eager_heyrovsky[273766]: 167 167
Oct  8 06:14:20 np0005475493 systemd[1]: libpod-271d06a340e5a6150bb005e670e855f98f4eebfbcdfdc45a5f45daa582d1c249.scope: Deactivated successfully.
Oct  8 06:14:20 np0005475493 podman[273749]: 2025-10-08 10:14:20.261208751 +0000 UTC m=+0.236491369 container died 271d06a340e5a6150bb005e670e855f98f4eebfbcdfdc45a5f45daa582d1c249 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_heyrovsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Oct  8 06:14:20 np0005475493 systemd[1]: var-lib-containers-storage-overlay-2c11a71ab188b9541b790feb6be18ef39b94facbd5af11d76014eb9b375fab6d-merged.mount: Deactivated successfully.
Oct  8 06:14:20 np0005475493 podman[273749]: 2025-10-08 10:14:20.322399703 +0000 UTC m=+0.297682321 container remove 271d06a340e5a6150bb005e670e855f98f4eebfbcdfdc45a5f45daa582d1c249 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_heyrovsky, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  8 06:14:20 np0005475493 systemd[1]: libpod-conmon-271d06a340e5a6150bb005e670e855f98f4eebfbcdfdc45a5f45daa582d1c249.scope: Deactivated successfully.
Oct  8 06:14:20 np0005475493 podman[273789]: 2025-10-08 10:14:20.494182866 +0000 UTC m=+0.047543973 container create de3a609625f86acdfcd3c47bf98aa075604b5bd2959fe603c260cb252c6aaf6a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_borg, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  8 06:14:20 np0005475493 systemd[1]: Started libpod-conmon-de3a609625f86acdfcd3c47bf98aa075604b5bd2959fe603c260cb252c6aaf6a.scope.
Oct  8 06:14:20 np0005475493 podman[273789]: 2025-10-08 10:14:20.474796091 +0000 UTC m=+0.028157198 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 06:14:20 np0005475493 systemd[1]: Started libcrun container.
Oct  8 06:14:20 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dde8412c32bc848acfbc649bcf2888971730c18caaae85c1968416e1a372d6c4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  8 06:14:20 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dde8412c32bc848acfbc649bcf2888971730c18caaae85c1968416e1a372d6c4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 06:14:20 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dde8412c32bc848acfbc649bcf2888971730c18caaae85c1968416e1a372d6c4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 06:14:20 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dde8412c32bc848acfbc649bcf2888971730c18caaae85c1968416e1a372d6c4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  8 06:14:20 np0005475493 podman[273789]: 2025-10-08 10:14:20.598222668 +0000 UTC m=+0.151584155 container init de3a609625f86acdfcd3c47bf98aa075604b5bd2959fe603c260cb252c6aaf6a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_borg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  8 06:14:20 np0005475493 podman[273789]: 2025-10-08 10:14:20.613574233 +0000 UTC m=+0.166935350 container start de3a609625f86acdfcd3c47bf98aa075604b5bd2959fe603c260cb252c6aaf6a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_borg, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct  8 06:14:20 np0005475493 podman[273789]: 2025-10-08 10:14:20.61785584 +0000 UTC m=+0.171216957 container attach de3a609625f86acdfcd3c47bf98aa075604b5bd2959fe603c260cb252c6aaf6a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_borg, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  8 06:14:20 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:14:20 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac6c002600 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:14:20 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:14:20 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:14:20 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:14:20.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:14:21 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:14:21 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca000a3f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:14:21 np0005475493 lvm[273880]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct  8 06:14:21 np0005475493 lvm[273880]: VG ceph_vg0 finished
Oct  8 06:14:21 np0005475493 intelligent_borg[273805]: {}
Oct  8 06:14:21 np0005475493 systemd[1]: libpod-de3a609625f86acdfcd3c47bf98aa075604b5bd2959fe603c260cb252c6aaf6a.scope: Deactivated successfully.
Oct  8 06:14:21 np0005475493 systemd[1]: libpod-de3a609625f86acdfcd3c47bf98aa075604b5bd2959fe603c260cb252c6aaf6a.scope: Consumed 1.305s CPU time.
Oct  8 06:14:21 np0005475493 podman[273789]: 2025-10-08 10:14:21.382214213 +0000 UTC m=+0.935575320 container died de3a609625f86acdfcd3c47bf98aa075604b5bd2959fe603c260cb252c6aaf6a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_borg, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Oct  8 06:14:21 np0005475493 systemd[1]: var-lib-containers-storage-overlay-dde8412c32bc848acfbc649bcf2888971730c18caaae85c1968416e1a372d6c4-merged.mount: Deactivated successfully.
Oct  8 06:14:21 np0005475493 podman[273789]: 2025-10-08 10:14:21.434688844 +0000 UTC m=+0.988049931 container remove de3a609625f86acdfcd3c47bf98aa075604b5bd2959fe603c260cb252c6aaf6a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_borg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  8 06:14:21 np0005475493 systemd[1]: libpod-conmon-de3a609625f86acdfcd3c47bf98aa075604b5bd2959fe603c260cb252c6aaf6a.scope: Deactivated successfully.
Oct  8 06:14:21 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:14:21 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:14:21 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:14:21.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:14:21 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  8 06:14:21 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:14:21 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  8 06:14:21 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:14:22 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Oct  8 06:14:22 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1654234200' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  8 06:14:22 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Oct  8 06:14:22 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1654234200' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  8 06:14:22 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v893: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 0 op/s
Oct  8 06:14:22 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:14:22 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90005240 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:14:22 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:14:22 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:14:22 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:14:22 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac94003540 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:14:22 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:14:22 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:14:22 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:14:22.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:14:22 np0005475493 nova_compute[262220]: 2025-10-08 10:14:22.954 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:14:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:14:23 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac6c002600 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:14:23 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:14:23 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:14:23 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:14:23.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:14:23 np0005475493 nova_compute[262220]: 2025-10-08 10:14:23.591 2 DEBUG oslo_concurrency.lockutils [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Acquiring lock "ea469a2e-bf09-495c-9b5e-02ad38d32d40" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  8 06:14:23 np0005475493 nova_compute[262220]: 2025-10-08 10:14:23.591 2 DEBUG oslo_concurrency.lockutils [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Lock "ea469a2e-bf09-495c-9b5e-02ad38d32d40" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  8 06:14:23 np0005475493 nova_compute[262220]: 2025-10-08 10:14:23.621 2 DEBUG nova.compute.manager [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  8 06:14:23 np0005475493 nova_compute[262220]: 2025-10-08 10:14:23.733 2 DEBUG oslo_concurrency.lockutils [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  8 06:14:23 np0005475493 nova_compute[262220]: 2025-10-08 10:14:23.733 2 DEBUG oslo_concurrency.lockutils [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  8 06:14:23 np0005475493 nova_compute[262220]: 2025-10-08 10:14:23.741 2 DEBUG nova.virt.hardware [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  8 06:14:23 np0005475493 nova_compute[262220]: 2025-10-08 10:14:23.741 2 INFO nova.compute.claims [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  8 06:14:23 np0005475493 nova_compute[262220]: 2025-10-08 10:14:23.853 2 DEBUG oslo_concurrency.processutils [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  8 06:14:24 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:14:24 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v894: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 0 op/s
Oct  8 06:14:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:14:24 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca000a3f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:14:24 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct  8 06:14:24 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3158757203' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  8 06:14:24 np0005475493 nova_compute[262220]: 2025-10-08 10:14:24.314 2 DEBUG oslo_concurrency.processutils [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  8 06:14:24 np0005475493 nova_compute[262220]: 2025-10-08 10:14:24.320 2 DEBUG nova.compute.provider_tree [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Inventory has not changed in ProviderTree for provider: 62e4b021-d3ae-43f9-883d-805e2c7d21a2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  8 06:14:24 np0005475493 nova_compute[262220]: 2025-10-08 10:14:24.335 2 DEBUG nova.scheduler.client.report [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Inventory has not changed for provider 62e4b021-d3ae-43f9-883d-805e2c7d21a2 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  8 06:14:24 np0005475493 nova_compute[262220]: 2025-10-08 10:14:24.364 2 DEBUG oslo_concurrency.lockutils [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.631s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  8 06:14:24 np0005475493 nova_compute[262220]: 2025-10-08 10:14:24.365 2 DEBUG nova.compute.manager [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  8 06:14:24 np0005475493 nova_compute[262220]: 2025-10-08 10:14:24.426 2 DEBUG nova.compute.manager [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  8 06:14:24 np0005475493 nova_compute[262220]: 2025-10-08 10:14:24.427 2 DEBUG nova.network.neutron [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  8 06:14:24 np0005475493 nova_compute[262220]: 2025-10-08 10:14:24.453 2 INFO nova.virt.libvirt.driver [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  8 06:14:24 np0005475493 nova_compute[262220]: 2025-10-08 10:14:24.477 2 DEBUG nova.compute.manager [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  8 06:14:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:14:24 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90005240 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:14:24 np0005475493 nova_compute[262220]: 2025-10-08 10:14:24.708 2 DEBUG nova.compute.manager [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  8 06:14:24 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:14:24 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:14:24 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:14:24.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:14:24 np0005475493 nova_compute[262220]: 2025-10-08 10:14:24.710 2 DEBUG nova.virt.libvirt.driver [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  8 06:14:24 np0005475493 nova_compute[262220]: 2025-10-08 10:14:24.711 2 INFO nova.virt.libvirt.driver [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Creating image(s)#033[00m
Oct  8 06:14:24 np0005475493 nova_compute[262220]: 2025-10-08 10:14:24.748 2 DEBUG nova.storage.rbd_utils [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] rbd image ea469a2e-bf09-495c-9b5e-02ad38d32d40_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  8 06:14:24 np0005475493 nova_compute[262220]: 2025-10-08 10:14:24.783 2 DEBUG nova.storage.rbd_utils [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] rbd image ea469a2e-bf09-495c-9b5e-02ad38d32d40_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  8 06:14:24 np0005475493 nova_compute[262220]: 2025-10-08 10:14:24.814 2 DEBUG nova.storage.rbd_utils [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] rbd image ea469a2e-bf09-495c-9b5e-02ad38d32d40_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  8 06:14:24 np0005475493 nova_compute[262220]: 2025-10-08 10:14:24.820 2 DEBUG oslo_concurrency.processutils [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/3cde70359534d4758cf71011630bd1fb14a90c92 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  8 06:14:24 np0005475493 nova_compute[262220]: 2025-10-08 10:14:24.900 2 DEBUG oslo_concurrency.processutils [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/3cde70359534d4758cf71011630bd1fb14a90c92 --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  8 06:14:24 np0005475493 nova_compute[262220]: 2025-10-08 10:14:24.902 2 DEBUG oslo_concurrency.lockutils [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Acquiring lock "3cde70359534d4758cf71011630bd1fb14a90c92" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  8 06:14:24 np0005475493 nova_compute[262220]: 2025-10-08 10:14:24.903 2 DEBUG oslo_concurrency.lockutils [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Lock "3cde70359534d4758cf71011630bd1fb14a90c92" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  8 06:14:24 np0005475493 nova_compute[262220]: 2025-10-08 10:14:24.903 2 DEBUG oslo_concurrency.lockutils [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Lock "3cde70359534d4758cf71011630bd1fb14a90c92" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  8 06:14:24 np0005475493 nova_compute[262220]: 2025-10-08 10:14:24.943 2 DEBUG nova.storage.rbd_utils [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] rbd image ea469a2e-bf09-495c-9b5e-02ad38d32d40_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  8 06:14:24 np0005475493 nova_compute[262220]: 2025-10-08 10:14:24.948 2 DEBUG oslo_concurrency.processutils [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/3cde70359534d4758cf71011630bd1fb14a90c92 ea469a2e-bf09-495c-9b5e-02ad38d32d40_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  8 06:14:25 np0005475493 nova_compute[262220]: 2025-10-08 10:14:25.022 2 DEBUG nova.policy [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'd50b19166a7245e390a6e29682191263', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '0edb2c85b88b4b168a4b2e8c5ed4a05c', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  8 06:14:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:14:25 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac94003540 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:14:25 np0005475493 nova_compute[262220]: 2025-10-08 10:14:25.168 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:14:25 np0005475493 nova_compute[262220]: 2025-10-08 10:14:25.251 2 DEBUG oslo_concurrency.processutils [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/3cde70359534d4758cf71011630bd1fb14a90c92 ea469a2e-bf09-495c-9b5e-02ad38d32d40_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.303s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  8 06:14:25 np0005475493 nova_compute[262220]: 2025-10-08 10:14:25.338 2 DEBUG nova.storage.rbd_utils [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] resizing rbd image ea469a2e-bf09-495c-9b5e-02ad38d32d40_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  8 06:14:25 np0005475493 nova_compute[262220]: 2025-10-08 10:14:25.443 2 DEBUG nova.objects.instance [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Lazy-loading 'migration_context' on Instance uuid ea469a2e-bf09-495c-9b5e-02ad38d32d40 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  8 06:14:25 np0005475493 nova_compute[262220]: 2025-10-08 10:14:25.458 2 DEBUG nova.virt.libvirt.driver [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  8 06:14:25 np0005475493 nova_compute[262220]: 2025-10-08 10:14:25.458 2 DEBUG nova.virt.libvirt.driver [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Ensure instance console log exists: /var/lib/nova/instances/ea469a2e-bf09-495c-9b5e-02ad38d32d40/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  8 06:14:25 np0005475493 nova_compute[262220]: 2025-10-08 10:14:25.458 2 DEBUG oslo_concurrency.lockutils [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  8 06:14:25 np0005475493 nova_compute[262220]: 2025-10-08 10:14:25.459 2 DEBUG oslo_concurrency.lockutils [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  8 06:14:25 np0005475493 nova_compute[262220]: 2025-10-08 10:14:25.459 2 DEBUG oslo_concurrency.lockutils [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  8 06:14:25 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:14:25 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 06:14:25 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:14:25.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 06:14:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:14:25] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
Oct  8 06:14:25 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:14:25] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
Oct  8 06:14:26 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v895: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 0 op/s
Oct  8 06:14:26 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:14:26 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac6c002600 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:14:26 np0005475493 nova_compute[262220]: 2025-10-08 10:14:26.547 2 DEBUG nova.network.neutron [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Successfully created port: be4ec274-2a90-48e8-bd51-fd01f3c659da _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  8 06:14:26 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:14:26 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca000a3f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:14:26 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:14:26 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:14:26 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:14:26.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:14:27 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:14:27.150Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct  8 06:14:27 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:14:27.150Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct  8 06:14:27 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:14:27.150Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct  8 06:14:27 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:14:27 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90005240 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:14:27 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:14:27 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 06:14:27 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:14:27.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 06:14:27 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:14:27 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 06:14:27 np0005475493 nova_compute[262220]: 2025-10-08 10:14:27.956 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:14:28 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v896: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 0 op/s
Oct  8 06:14:28 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:14:28 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac94003540 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:14:28 np0005475493 nova_compute[262220]: 2025-10-08 10:14:28.324 2 DEBUG nova.network.neutron [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Successfully updated port: be4ec274-2a90-48e8-bd51-fd01f3c659da _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  8 06:14:28 np0005475493 nova_compute[262220]: 2025-10-08 10:14:28.348 2 DEBUG oslo_concurrency.lockutils [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Acquiring lock "refresh_cache-ea469a2e-bf09-495c-9b5e-02ad38d32d40" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  8 06:14:28 np0005475493 nova_compute[262220]: 2025-10-08 10:14:28.348 2 DEBUG oslo_concurrency.lockutils [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Acquired lock "refresh_cache-ea469a2e-bf09-495c-9b5e-02ad38d32d40" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  8 06:14:28 np0005475493 nova_compute[262220]: 2025-10-08 10:14:28.348 2 DEBUG nova.network.neutron [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  8 06:14:28 np0005475493 nova_compute[262220]: 2025-10-08 10:14:28.403 2 DEBUG nova.compute.manager [req-6719b070-2f01-4279-a5fe-5a610edfd379 req-0fbee54e-a6e0-4744-a94a-3c84cef15802 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Received event network-changed-be4ec274-2a90-48e8-bd51-fd01f3c659da external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  8 06:14:28 np0005475493 nova_compute[262220]: 2025-10-08 10:14:28.404 2 DEBUG nova.compute.manager [req-6719b070-2f01-4279-a5fe-5a610edfd379 req-0fbee54e-a6e0-4744-a94a-3c84cef15802 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Refreshing instance network info cache due to event network-changed-be4ec274-2a90-48e8-bd51-fd01f3c659da. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  8 06:14:28 np0005475493 nova_compute[262220]: 2025-10-08 10:14:28.404 2 DEBUG oslo_concurrency.lockutils [req-6719b070-2f01-4279-a5fe-5a610edfd379 req-0fbee54e-a6e0-4744-a94a-3c84cef15802 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Acquiring lock "refresh_cache-ea469a2e-bf09-495c-9b5e-02ad38d32d40" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  8 06:14:28 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:14:28 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac6c002600 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:14:28 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:14:28 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:14:28 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:14:28.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:14:29 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:14:29 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:14:29 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca000a3f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:14:29 np0005475493 nova_compute[262220]: 2025-10-08 10:14:29.337 2 DEBUG nova.network.neutron [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  8 06:14:29 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:14:29 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:14:29 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:14:29.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:14:30 np0005475493 nova_compute[262220]: 2025-10-08 10:14:30.170 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:14:30 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v897: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Oct  8 06:14:30 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:14:30 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90005240 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:14:30 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:14:30 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac94003540 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:14:30 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:14:30 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:14:30 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:14:30.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:14:30 np0005475493 podman[274142]: 2025-10-08 10:14:30.919985725 +0000 UTC m=+0.076296448 container health_status 2ac58bf9d2c7421f24478941b0f23af12cc30298ac84467db16bf52ecb1157c3 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=iscsid, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct  8 06:14:31 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:14:31 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac6c002600 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:14:31 np0005475493 nova_compute[262220]: 2025-10-08 10:14:31.286 2 DEBUG nova.network.neutron [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Updating instance_info_cache with network_info: [{"id": "be4ec274-2a90-48e8-bd51-fd01f3c659da", "address": "fa:16:3e:e6:b0:e0", "network": {"id": "834a886f-bb33-49ed-b47e-ef0308a38e89", "bridge": "br-int", "label": "tempest-network-smoke--1879322246", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0edb2c85b88b4b168a4b2e8c5ed4a05c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe4ec274-2a", "ovs_interfaceid": "be4ec274-2a90-48e8-bd51-fd01f3c659da", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  8 06:14:31 np0005475493 nova_compute[262220]: 2025-10-08 10:14:31.303 2 DEBUG oslo_concurrency.lockutils [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Releasing lock "refresh_cache-ea469a2e-bf09-495c-9b5e-02ad38d32d40" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  8 06:14:31 np0005475493 nova_compute[262220]: 2025-10-08 10:14:31.303 2 DEBUG nova.compute.manager [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Instance network_info: |[{"id": "be4ec274-2a90-48e8-bd51-fd01f3c659da", "address": "fa:16:3e:e6:b0:e0", "network": {"id": "834a886f-bb33-49ed-b47e-ef0308a38e89", "bridge": "br-int", "label": "tempest-network-smoke--1879322246", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0edb2c85b88b4b168a4b2e8c5ed4a05c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe4ec274-2a", "ovs_interfaceid": "be4ec274-2a90-48e8-bd51-fd01f3c659da", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  8 06:14:31 np0005475493 nova_compute[262220]: 2025-10-08 10:14:31.304 2 DEBUG oslo_concurrency.lockutils [req-6719b070-2f01-4279-a5fe-5a610edfd379 req-0fbee54e-a6e0-4744-a94a-3c84cef15802 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Acquired lock "refresh_cache-ea469a2e-bf09-495c-9b5e-02ad38d32d40" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  8 06:14:31 np0005475493 nova_compute[262220]: 2025-10-08 10:14:31.304 2 DEBUG nova.network.neutron [req-6719b070-2f01-4279-a5fe-5a610edfd379 req-0fbee54e-a6e0-4744-a94a-3c84cef15802 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Refreshing network info cache for port be4ec274-2a90-48e8-bd51-fd01f3c659da _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  8 06:14:31 np0005475493 nova_compute[262220]: 2025-10-08 10:14:31.307 2 DEBUG nova.virt.libvirt.driver [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Start _get_guest_xml network_info=[{"id": "be4ec274-2a90-48e8-bd51-fd01f3c659da", "address": "fa:16:3e:e6:b0:e0", "network": {"id": "834a886f-bb33-49ed-b47e-ef0308a38e89", "bridge": "br-int", "label": "tempest-network-smoke--1879322246", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0edb2c85b88b4b168a4b2e8c5ed4a05c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe4ec274-2a", "ovs_interfaceid": "be4ec274-2a90-48e8-bd51-fd01f3c659da", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-08T10:08:49Z,direct_url=<?>,disk_format='qcow2',id=e5994bac-385d-4cfe-962e-386aa0559983,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='9bebada0871a4efa9df99c6beff34c13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-08T10:08:53Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'encryption_secret_uuid': None, 'encryption_format': None, 'device_name': '/dev/vda', 'disk_bus': 'virtio', 'boot_index': 0, 'encrypted': False, 'encryption_options': None, 'device_type': 'disk', 'size': 0, 'image_id': 'e5994bac-385d-4cfe-962e-386aa0559983'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  8 06:14:31 np0005475493 nova_compute[262220]: 2025-10-08 10:14:31.314 2 WARNING nova.virt.libvirt.driver [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  8 06:14:31 np0005475493 nova_compute[262220]: 2025-10-08 10:14:31.319 2 DEBUG nova.virt.libvirt.host [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  8 06:14:31 np0005475493 nova_compute[262220]: 2025-10-08 10:14:31.320 2 DEBUG nova.virt.libvirt.host [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  8 06:14:31 np0005475493 nova_compute[262220]: 2025-10-08 10:14:31.323 2 DEBUG nova.virt.libvirt.host [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  8 06:14:31 np0005475493 nova_compute[262220]: 2025-10-08 10:14:31.323 2 DEBUG nova.virt.libvirt.host [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  8 06:14:31 np0005475493 nova_compute[262220]: 2025-10-08 10:14:31.324 2 DEBUG nova.virt.libvirt.driver [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  8 06:14:31 np0005475493 nova_compute[262220]: 2025-10-08 10:14:31.324 2 DEBUG nova.virt.hardware [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-08T10:08:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='461f98d6-ae65-4f86-8ae2-cc3cfaea2a46',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-08T10:08:49Z,direct_url=<?>,disk_format='qcow2',id=e5994bac-385d-4cfe-962e-386aa0559983,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='9bebada0871a4efa9df99c6beff34c13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-08T10:08:53Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  8 06:14:31 np0005475493 nova_compute[262220]: 2025-10-08 10:14:31.324 2 DEBUG nova.virt.hardware [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  8 06:14:31 np0005475493 nova_compute[262220]: 2025-10-08 10:14:31.325 2 DEBUG nova.virt.hardware [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  8 06:14:31 np0005475493 nova_compute[262220]: 2025-10-08 10:14:31.325 2 DEBUG nova.virt.hardware [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  8 06:14:31 np0005475493 nova_compute[262220]: 2025-10-08 10:14:31.325 2 DEBUG nova.virt.hardware [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  8 06:14:31 np0005475493 nova_compute[262220]: 2025-10-08 10:14:31.325 2 DEBUG nova.virt.hardware [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  8 06:14:31 np0005475493 nova_compute[262220]: 2025-10-08 10:14:31.325 2 DEBUG nova.virt.hardware [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  8 06:14:31 np0005475493 nova_compute[262220]: 2025-10-08 10:14:31.326 2 DEBUG nova.virt.hardware [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  8 06:14:31 np0005475493 nova_compute[262220]: 2025-10-08 10:14:31.326 2 DEBUG nova.virt.hardware [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  8 06:14:31 np0005475493 nova_compute[262220]: 2025-10-08 10:14:31.326 2 DEBUG nova.virt.hardware [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  8 06:14:31 np0005475493 nova_compute[262220]: 2025-10-08 10:14:31.326 2 DEBUG nova.virt.hardware [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  8 06:14:31 np0005475493 nova_compute[262220]: 2025-10-08 10:14:31.330 2 DEBUG oslo_concurrency.processutils [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  8 06:14:31 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:14:31 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:14:31 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:14:31.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:14:31 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Oct  8 06:14:31 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3828849259' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  8 06:14:31 np0005475493 nova_compute[262220]: 2025-10-08 10:14:31.863 2 DEBUG oslo_concurrency.processutils [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.533s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  8 06:14:31 np0005475493 nova_compute[262220]: 2025-10-08 10:14:31.909 2 DEBUG nova.storage.rbd_utils [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] rbd image ea469a2e-bf09-495c-9b5e-02ad38d32d40_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  8 06:14:31 np0005475493 nova_compute[262220]: 2025-10-08 10:14:31.915 2 DEBUG oslo_concurrency.processutils [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  8 06:14:31 np0005475493 ceph-osd[81751]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  8 06:14:31 np0005475493 ceph-osd[81751]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1800.1 total, 600.0 interval#012Cumulative writes: 10K writes, 40K keys, 10K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.02 MB/s#012Cumulative WAL: 10K writes, 2666 syncs, 4.09 writes per sync, written: 0.03 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1892 writes, 5856 keys, 1892 commit groups, 1.0 writes per commit group, ingest: 6.53 MB, 0.01 MB/s#012Interval WAL: 1892 writes, 779 syncs, 2.43 writes per sync, written: 0.01 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct  8 06:14:32 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v898: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct  8 06:14:32 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:14:32 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca000a3f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:14:32 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Oct  8 06:14:32 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/652061004' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  8 06:14:32 np0005475493 nova_compute[262220]: 2025-10-08 10:14:32.380 2 DEBUG oslo_concurrency.processutils [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  8 06:14:32 np0005475493 nova_compute[262220]: 2025-10-08 10:14:32.383 2 DEBUG nova.virt.libvirt.vif [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-08T10:14:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1473882269',display_name='tempest-TestNetworkBasicOps-server-1473882269',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1473882269',id=6,image_ref='e5994bac-385d-4cfe-962e-386aa0559983',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLR6MyJBOMYp2wYGaL6G2mEbJcrZztqNeLLTecSXpMa+10UdVkrFMZxX+qiOC0ccFZIJleCiHIYc6JXNFg7vRmgJ0JtTU6W+KAYc8u1JxRO51IwGJ30ByO68fx1sTbOqEg==',key_name='tempest-TestNetworkBasicOps-1715641436',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0edb2c85b88b4b168a4b2e8c5ed4a05c',ramdisk_id='',reservation_id='r-b2owsx2f',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e5994bac-385d-4cfe-962e-386aa0559983',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-139500885',owner_user_name='tempest-TestNetworkBasicOps-139500885-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-08T10:14:24Z,user_data=None,user_id='d50b19166a7245e390a6e29682191263',uuid=ea469a2e-bf09-495c-9b5e-02ad38d32d40,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "be4ec274-2a90-48e8-bd51-fd01f3c659da", "address": "fa:16:3e:e6:b0:e0", "network": {"id": "834a886f-bb33-49ed-b47e-ef0308a38e89", "bridge": "br-int", "label": "tempest-network-smoke--1879322246", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "0edb2c85b88b4b168a4b2e8c5ed4a05c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe4ec274-2a", "ovs_interfaceid": "be4ec274-2a90-48e8-bd51-fd01f3c659da", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  8 06:14:32 np0005475493 nova_compute[262220]: 2025-10-08 10:14:32.384 2 DEBUG nova.network.os_vif_util [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Converting VIF {"id": "be4ec274-2a90-48e8-bd51-fd01f3c659da", "address": "fa:16:3e:e6:b0:e0", "network": {"id": "834a886f-bb33-49ed-b47e-ef0308a38e89", "bridge": "br-int", "label": "tempest-network-smoke--1879322246", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0edb2c85b88b4b168a4b2e8c5ed4a05c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe4ec274-2a", "ovs_interfaceid": "be4ec274-2a90-48e8-bd51-fd01f3c659da", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  8 06:14:32 np0005475493 nova_compute[262220]: 2025-10-08 10:14:32.385 2 DEBUG nova.network.os_vif_util [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e6:b0:e0,bridge_name='br-int',has_traffic_filtering=True,id=be4ec274-2a90-48e8-bd51-fd01f3c659da,network=Network(834a886f-bb33-49ed-b47e-ef0308a38e89),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbe4ec274-2a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  8 06:14:32 np0005475493 nova_compute[262220]: 2025-10-08 10:14:32.387 2 DEBUG nova.objects.instance [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Lazy-loading 'pci_devices' on Instance uuid ea469a2e-bf09-495c-9b5e-02ad38d32d40 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  8 06:14:32 np0005475493 nova_compute[262220]: 2025-10-08 10:14:32.405 2 DEBUG nova.virt.libvirt.driver [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] End _get_guest_xml xml=<domain type="kvm">
Oct  8 06:14:32 np0005475493 nova_compute[262220]:  <uuid>ea469a2e-bf09-495c-9b5e-02ad38d32d40</uuid>
Oct  8 06:14:32 np0005475493 nova_compute[262220]:  <name>instance-00000006</name>
Oct  8 06:14:32 np0005475493 nova_compute[262220]:  <memory>131072</memory>
Oct  8 06:14:32 np0005475493 nova_compute[262220]:  <vcpu>1</vcpu>
Oct  8 06:14:32 np0005475493 nova_compute[262220]:  <metadata>
Oct  8 06:14:32 np0005475493 nova_compute[262220]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  8 06:14:32 np0005475493 nova_compute[262220]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  8 06:14:32 np0005475493 nova_compute[262220]:      <nova:name>tempest-TestNetworkBasicOps-server-1473882269</nova:name>
Oct  8 06:14:32 np0005475493 nova_compute[262220]:      <nova:creationTime>2025-10-08 10:14:31</nova:creationTime>
Oct  8 06:14:32 np0005475493 nova_compute[262220]:      <nova:flavor name="m1.nano">
Oct  8 06:14:32 np0005475493 nova_compute[262220]:        <nova:memory>128</nova:memory>
Oct  8 06:14:32 np0005475493 nova_compute[262220]:        <nova:disk>1</nova:disk>
Oct  8 06:14:32 np0005475493 nova_compute[262220]:        <nova:swap>0</nova:swap>
Oct  8 06:14:32 np0005475493 nova_compute[262220]:        <nova:ephemeral>0</nova:ephemeral>
Oct  8 06:14:32 np0005475493 nova_compute[262220]:        <nova:vcpus>1</nova:vcpus>
Oct  8 06:14:32 np0005475493 nova_compute[262220]:      </nova:flavor>
Oct  8 06:14:32 np0005475493 nova_compute[262220]:      <nova:owner>
Oct  8 06:14:32 np0005475493 nova_compute[262220]:        <nova:user uuid="d50b19166a7245e390a6e29682191263">tempest-TestNetworkBasicOps-139500885-project-member</nova:user>
Oct  8 06:14:32 np0005475493 nova_compute[262220]:        <nova:project uuid="0edb2c85b88b4b168a4b2e8c5ed4a05c">tempest-TestNetworkBasicOps-139500885</nova:project>
Oct  8 06:14:32 np0005475493 nova_compute[262220]:      </nova:owner>
Oct  8 06:14:32 np0005475493 nova_compute[262220]:      <nova:root type="image" uuid="e5994bac-385d-4cfe-962e-386aa0559983"/>
Oct  8 06:14:32 np0005475493 nova_compute[262220]:      <nova:ports>
Oct  8 06:14:32 np0005475493 nova_compute[262220]:        <nova:port uuid="be4ec274-2a90-48e8-bd51-fd01f3c659da">
Oct  8 06:14:32 np0005475493 nova_compute[262220]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct  8 06:14:32 np0005475493 nova_compute[262220]:        </nova:port>
Oct  8 06:14:32 np0005475493 nova_compute[262220]:      </nova:ports>
Oct  8 06:14:32 np0005475493 nova_compute[262220]:    </nova:instance>
Oct  8 06:14:32 np0005475493 nova_compute[262220]:  </metadata>
Oct  8 06:14:32 np0005475493 nova_compute[262220]:  <sysinfo type="smbios">
Oct  8 06:14:32 np0005475493 nova_compute[262220]:    <system>
Oct  8 06:14:32 np0005475493 nova_compute[262220]:      <entry name="manufacturer">RDO</entry>
Oct  8 06:14:32 np0005475493 nova_compute[262220]:      <entry name="product">OpenStack Compute</entry>
Oct  8 06:14:32 np0005475493 nova_compute[262220]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  8 06:14:32 np0005475493 nova_compute[262220]:      <entry name="serial">ea469a2e-bf09-495c-9b5e-02ad38d32d40</entry>
Oct  8 06:14:32 np0005475493 nova_compute[262220]:      <entry name="uuid">ea469a2e-bf09-495c-9b5e-02ad38d32d40</entry>
Oct  8 06:14:32 np0005475493 nova_compute[262220]:      <entry name="family">Virtual Machine</entry>
Oct  8 06:14:32 np0005475493 nova_compute[262220]:    </system>
Oct  8 06:14:32 np0005475493 nova_compute[262220]:  </sysinfo>
Oct  8 06:14:32 np0005475493 nova_compute[262220]:  <os>
Oct  8 06:14:32 np0005475493 nova_compute[262220]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  8 06:14:32 np0005475493 nova_compute[262220]:    <boot dev="hd"/>
Oct  8 06:14:32 np0005475493 nova_compute[262220]:    <smbios mode="sysinfo"/>
Oct  8 06:14:32 np0005475493 nova_compute[262220]:  </os>
Oct  8 06:14:32 np0005475493 nova_compute[262220]:  <features>
Oct  8 06:14:32 np0005475493 nova_compute[262220]:    <acpi/>
Oct  8 06:14:32 np0005475493 nova_compute[262220]:    <apic/>
Oct  8 06:14:32 np0005475493 nova_compute[262220]:    <vmcoreinfo/>
Oct  8 06:14:32 np0005475493 nova_compute[262220]:  </features>
Oct  8 06:14:32 np0005475493 nova_compute[262220]:  <clock offset="utc">
Oct  8 06:14:32 np0005475493 nova_compute[262220]:    <timer name="pit" tickpolicy="delay"/>
Oct  8 06:14:32 np0005475493 nova_compute[262220]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  8 06:14:32 np0005475493 nova_compute[262220]:    <timer name="hpet" present="no"/>
Oct  8 06:14:32 np0005475493 nova_compute[262220]:  </clock>
Oct  8 06:14:32 np0005475493 nova_compute[262220]:  <cpu mode="host-model" match="exact">
Oct  8 06:14:32 np0005475493 nova_compute[262220]:    <topology sockets="1" cores="1" threads="1"/>
Oct  8 06:14:32 np0005475493 nova_compute[262220]:  </cpu>
Oct  8 06:14:32 np0005475493 nova_compute[262220]:  <devices>
Oct  8 06:14:32 np0005475493 nova_compute[262220]:    <disk type="network" device="disk">
Oct  8 06:14:32 np0005475493 nova_compute[262220]:      <driver type="raw" cache="none"/>
Oct  8 06:14:32 np0005475493 nova_compute[262220]:      <source protocol="rbd" name="vms/ea469a2e-bf09-495c-9b5e-02ad38d32d40_disk">
Oct  8 06:14:32 np0005475493 nova_compute[262220]:        <host name="192.168.122.100" port="6789"/>
Oct  8 06:14:32 np0005475493 nova_compute[262220]:        <host name="192.168.122.102" port="6789"/>
Oct  8 06:14:32 np0005475493 nova_compute[262220]:        <host name="192.168.122.101" port="6789"/>
Oct  8 06:14:32 np0005475493 nova_compute[262220]:      </source>
Oct  8 06:14:32 np0005475493 nova_compute[262220]:      <auth username="openstack">
Oct  8 06:14:32 np0005475493 nova_compute[262220]:        <secret type="ceph" uuid="787292cc-8154-50c4-9e00-e9be3e817149"/>
Oct  8 06:14:32 np0005475493 nova_compute[262220]:      </auth>
Oct  8 06:14:32 np0005475493 nova_compute[262220]:      <target dev="vda" bus="virtio"/>
Oct  8 06:14:32 np0005475493 nova_compute[262220]:    </disk>
Oct  8 06:14:32 np0005475493 nova_compute[262220]:    <disk type="network" device="cdrom">
Oct  8 06:14:32 np0005475493 nova_compute[262220]:      <driver type="raw" cache="none"/>
Oct  8 06:14:32 np0005475493 nova_compute[262220]:      <source protocol="rbd" name="vms/ea469a2e-bf09-495c-9b5e-02ad38d32d40_disk.config">
Oct  8 06:14:32 np0005475493 nova_compute[262220]:        <host name="192.168.122.100" port="6789"/>
Oct  8 06:14:32 np0005475493 nova_compute[262220]:        <host name="192.168.122.102" port="6789"/>
Oct  8 06:14:32 np0005475493 nova_compute[262220]:        <host name="192.168.122.101" port="6789"/>
Oct  8 06:14:32 np0005475493 nova_compute[262220]:      </source>
Oct  8 06:14:32 np0005475493 nova_compute[262220]:      <auth username="openstack">
Oct  8 06:14:32 np0005475493 nova_compute[262220]:        <secret type="ceph" uuid="787292cc-8154-50c4-9e00-e9be3e817149"/>
Oct  8 06:14:32 np0005475493 nova_compute[262220]:      </auth>
Oct  8 06:14:32 np0005475493 nova_compute[262220]:      <target dev="sda" bus="sata"/>
Oct  8 06:14:32 np0005475493 nova_compute[262220]:    </disk>
Oct  8 06:14:32 np0005475493 nova_compute[262220]:    <interface type="ethernet">
Oct  8 06:14:32 np0005475493 nova_compute[262220]:      <mac address="fa:16:3e:e6:b0:e0"/>
Oct  8 06:14:32 np0005475493 nova_compute[262220]:      <model type="virtio"/>
Oct  8 06:14:32 np0005475493 nova_compute[262220]:      <driver name="vhost" rx_queue_size="512"/>
Oct  8 06:14:32 np0005475493 nova_compute[262220]:      <mtu size="1442"/>
Oct  8 06:14:32 np0005475493 nova_compute[262220]:      <target dev="tapbe4ec274-2a"/>
Oct  8 06:14:32 np0005475493 nova_compute[262220]:    </interface>
Oct  8 06:14:32 np0005475493 nova_compute[262220]:    <serial type="pty">
Oct  8 06:14:32 np0005475493 nova_compute[262220]:      <log file="/var/lib/nova/instances/ea469a2e-bf09-495c-9b5e-02ad38d32d40/console.log" append="off"/>
Oct  8 06:14:32 np0005475493 nova_compute[262220]:    </serial>
Oct  8 06:14:32 np0005475493 nova_compute[262220]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  8 06:14:32 np0005475493 nova_compute[262220]:    <video>
Oct  8 06:14:32 np0005475493 nova_compute[262220]:      <model type="virtio"/>
Oct  8 06:14:32 np0005475493 nova_compute[262220]:    </video>
Oct  8 06:14:32 np0005475493 nova_compute[262220]:    <input type="tablet" bus="usb"/>
Oct  8 06:14:32 np0005475493 nova_compute[262220]:    <rng model="virtio">
Oct  8 06:14:32 np0005475493 nova_compute[262220]:      <backend model="random">/dev/urandom</backend>
Oct  8 06:14:32 np0005475493 nova_compute[262220]:    </rng>
Oct  8 06:14:32 np0005475493 nova_compute[262220]:    <controller type="pci" model="pcie-root"/>
Oct  8 06:14:32 np0005475493 nova_compute[262220]:    <controller type="pci" model="pcie-root-port"/>
Oct  8 06:14:32 np0005475493 nova_compute[262220]:    <controller type="pci" model="pcie-root-port"/>
Oct  8 06:14:32 np0005475493 nova_compute[262220]:    <controller type="pci" model="pcie-root-port"/>
Oct  8 06:14:32 np0005475493 nova_compute[262220]:    <controller type="pci" model="pcie-root-port"/>
Oct  8 06:14:32 np0005475493 nova_compute[262220]:    <controller type="pci" model="pcie-root-port"/>
Oct  8 06:14:32 np0005475493 nova_compute[262220]:    <controller type="pci" model="pcie-root-port"/>
Oct  8 06:14:32 np0005475493 nova_compute[262220]:    <controller type="pci" model="pcie-root-port"/>
Oct  8 06:14:32 np0005475493 nova_compute[262220]:    <controller type="pci" model="pcie-root-port"/>
Oct  8 06:14:32 np0005475493 nova_compute[262220]:    <controller type="pci" model="pcie-root-port"/>
Oct  8 06:14:32 np0005475493 nova_compute[262220]:    <controller type="pci" model="pcie-root-port"/>
Oct  8 06:14:32 np0005475493 nova_compute[262220]:    <controller type="pci" model="pcie-root-port"/>
Oct  8 06:14:32 np0005475493 nova_compute[262220]:    <controller type="pci" model="pcie-root-port"/>
Oct  8 06:14:32 np0005475493 nova_compute[262220]:    <controller type="pci" model="pcie-root-port"/>
Oct  8 06:14:32 np0005475493 nova_compute[262220]:    <controller type="pci" model="pcie-root-port"/>
Oct  8 06:14:32 np0005475493 nova_compute[262220]:    <controller type="pci" model="pcie-root-port"/>
Oct  8 06:14:32 np0005475493 nova_compute[262220]:    <controller type="pci" model="pcie-root-port"/>
Oct  8 06:14:32 np0005475493 nova_compute[262220]:    <controller type="pci" model="pcie-root-port"/>
Oct  8 06:14:32 np0005475493 nova_compute[262220]:    <controller type="pci" model="pcie-root-port"/>
Oct  8 06:14:32 np0005475493 nova_compute[262220]:    <controller type="pci" model="pcie-root-port"/>
Oct  8 06:14:32 np0005475493 nova_compute[262220]:    <controller type="pci" model="pcie-root-port"/>
Oct  8 06:14:32 np0005475493 nova_compute[262220]:    <controller type="pci" model="pcie-root-port"/>
Oct  8 06:14:32 np0005475493 nova_compute[262220]:    <controller type="pci" model="pcie-root-port"/>
Oct  8 06:14:32 np0005475493 nova_compute[262220]:    <controller type="pci" model="pcie-root-port"/>
Oct  8 06:14:32 np0005475493 nova_compute[262220]:    <controller type="pci" model="pcie-root-port"/>
Oct  8 06:14:32 np0005475493 nova_compute[262220]:    <controller type="usb" index="0"/>
Oct  8 06:14:32 np0005475493 nova_compute[262220]:    <memballoon model="virtio">
Oct  8 06:14:32 np0005475493 nova_compute[262220]:      <stats period="10"/>
Oct  8 06:14:32 np0005475493 nova_compute[262220]:    </memballoon>
Oct  8 06:14:32 np0005475493 nova_compute[262220]:  </devices>
Oct  8 06:14:32 np0005475493 nova_compute[262220]: </domain>
Oct  8 06:14:32 np0005475493 nova_compute[262220]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  8 06:14:32 np0005475493 nova_compute[262220]: 2025-10-08 10:14:32.407 2 DEBUG nova.compute.manager [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Preparing to wait for external event network-vif-plugged-be4ec274-2a90-48e8-bd51-fd01f3c659da prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  8 06:14:32 np0005475493 nova_compute[262220]: 2025-10-08 10:14:32.407 2 DEBUG oslo_concurrency.lockutils [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Acquiring lock "ea469a2e-bf09-495c-9b5e-02ad38d32d40-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  8 06:14:32 np0005475493 nova_compute[262220]: 2025-10-08 10:14:32.407 2 DEBUG oslo_concurrency.lockutils [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Lock "ea469a2e-bf09-495c-9b5e-02ad38d32d40-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  8 06:14:32 np0005475493 nova_compute[262220]: 2025-10-08 10:14:32.407 2 DEBUG oslo_concurrency.lockutils [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Lock "ea469a2e-bf09-495c-9b5e-02ad38d32d40-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  8 06:14:32 np0005475493 nova_compute[262220]: 2025-10-08 10:14:32.408 2 DEBUG nova.virt.libvirt.vif [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-08T10:14:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1473882269',display_name='tempest-TestNetworkBasicOps-server-1473882269',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1473882269',id=6,image_ref='e5994bac-385d-4cfe-962e-386aa0559983',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLR6MyJBOMYp2wYGaL6G2mEbJcrZztqNeLLTecSXpMa+10UdVkrFMZxX+qiOC0ccFZIJleCiHIYc6JXNFg7vRmgJ0JtTU6W+KAYc8u1JxRO51IwGJ30ByO68fx1sTbOqEg==',key_name='tempest-TestNetworkBasicOps-1715641436',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0edb2c85b88b4b168a4b2e8c5ed4a05c',ramdisk_id='',reservation_id='r-b2owsx2f',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e5994bac-385d-4cfe-962e-386aa0559983',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-139500885',owner_user_name='tempest-TestNetworkBasicOps-139500885-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-08T10:14:24Z,user_data=None,user_id='d50b19166a7245e390a6e29682191263',uuid=ea469a2e-bf09-495c-9b5e-02ad38d32d40,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "be4ec274-2a90-48e8-bd51-fd01f3c659da", "address": "fa:16:3e:e6:b0:e0", "network": {"id": "834a886f-bb33-49ed-b47e-ef0308a38e89", "bridge": "br-int", "label": "tempest-network-smoke--1879322246", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "0edb2c85b88b4b168a4b2e8c5ed4a05c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe4ec274-2a", "ovs_interfaceid": "be4ec274-2a90-48e8-bd51-fd01f3c659da", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  8 06:14:32 np0005475493 nova_compute[262220]: 2025-10-08 10:14:32.408 2 DEBUG nova.network.os_vif_util [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Converting VIF {"id": "be4ec274-2a90-48e8-bd51-fd01f3c659da", "address": "fa:16:3e:e6:b0:e0", "network": {"id": "834a886f-bb33-49ed-b47e-ef0308a38e89", "bridge": "br-int", "label": "tempest-network-smoke--1879322246", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0edb2c85b88b4b168a4b2e8c5ed4a05c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe4ec274-2a", "ovs_interfaceid": "be4ec274-2a90-48e8-bd51-fd01f3c659da", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  8 06:14:32 np0005475493 nova_compute[262220]: 2025-10-08 10:14:32.409 2 DEBUG nova.network.os_vif_util [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e6:b0:e0,bridge_name='br-int',has_traffic_filtering=True,id=be4ec274-2a90-48e8-bd51-fd01f3c659da,network=Network(834a886f-bb33-49ed-b47e-ef0308a38e89),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbe4ec274-2a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  8 06:14:32 np0005475493 nova_compute[262220]: 2025-10-08 10:14:32.409 2 DEBUG os_vif [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e6:b0:e0,bridge_name='br-int',has_traffic_filtering=True,id=be4ec274-2a90-48e8-bd51-fd01f3c659da,network=Network(834a886f-bb33-49ed-b47e-ef0308a38e89),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbe4ec274-2a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  8 06:14:32 np0005475493 nova_compute[262220]: 2025-10-08 10:14:32.410 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:14:32 np0005475493 nova_compute[262220]: 2025-10-08 10:14:32.411 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  8 06:14:32 np0005475493 nova_compute[262220]: 2025-10-08 10:14:32.412 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  8 06:14:32 np0005475493 nova_compute[262220]: 2025-10-08 10:14:32.416 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:14:32 np0005475493 nova_compute[262220]: 2025-10-08 10:14:32.417 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbe4ec274-2a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  8 06:14:32 np0005475493 nova_compute[262220]: 2025-10-08 10:14:32.417 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapbe4ec274-2a, col_values=(('external_ids', {'iface-id': 'be4ec274-2a90-48e8-bd51-fd01f3c659da', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e6:b0:e0', 'vm-uuid': 'ea469a2e-bf09-495c-9b5e-02ad38d32d40'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  8 06:14:32 np0005475493 nova_compute[262220]: 2025-10-08 10:14:32.419 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:14:32 np0005475493 nova_compute[262220]: 2025-10-08 10:14:32.421 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  8 06:14:32 np0005475493 NetworkManager[44872]: <info>  [1759918472.4220] manager: (tapbe4ec274-2a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/33)
Oct  8 06:14:32 np0005475493 nova_compute[262220]: 2025-10-08 10:14:32.425 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:14:32 np0005475493 nova_compute[262220]: 2025-10-08 10:14:32.427 2 INFO os_vif [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e6:b0:e0,bridge_name='br-int',has_traffic_filtering=True,id=be4ec274-2a90-48e8-bd51-fd01f3c659da,network=Network(834a886f-bb33-49ed-b47e-ef0308a38e89),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbe4ec274-2a')#033[00m
Oct  8 06:14:32 np0005475493 nova_compute[262220]: 2025-10-08 10:14:32.485 2 DEBUG nova.virt.libvirt.driver [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  8 06:14:32 np0005475493 nova_compute[262220]: 2025-10-08 10:14:32.487 2 DEBUG nova.virt.libvirt.driver [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  8 06:14:32 np0005475493 nova_compute[262220]: 2025-10-08 10:14:32.488 2 DEBUG nova.virt.libvirt.driver [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] No VIF found with MAC fa:16:3e:e6:b0:e0, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  8 06:14:32 np0005475493 nova_compute[262220]: 2025-10-08 10:14:32.489 2 INFO nova.virt.libvirt.driver [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Using config drive#033[00m
Oct  8 06:14:32 np0005475493 nova_compute[262220]: 2025-10-08 10:14:32.530 2 DEBUG nova.storage.rbd_utils [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] rbd image ea469a2e-bf09-495c-9b5e-02ad38d32d40_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  8 06:14:32 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:14:32 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90005240 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:14:32 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:14:32 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:14:32 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:14:32.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:14:32 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  8 06:14:32 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  8 06:14:32 np0005475493 nova_compute[262220]: 2025-10-08 10:14:32.959 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:14:33 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:14:33 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac94004a30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:14:33 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:14:33 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:14:33 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:14:33.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:14:33 np0005475493 nova_compute[262220]: 2025-10-08 10:14:33.489 2 INFO nova.virt.libvirt.driver [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Creating config drive at /var/lib/nova/instances/ea469a2e-bf09-495c-9b5e-02ad38d32d40/disk.config#033[00m
Oct  8 06:14:33 np0005475493 nova_compute[262220]: 2025-10-08 10:14:33.493 2 DEBUG oslo_concurrency.processutils [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ea469a2e-bf09-495c-9b5e-02ad38d32d40/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpg3mw4hpq execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  8 06:14:33 np0005475493 nova_compute[262220]: 2025-10-08 10:14:33.631 2 DEBUG oslo_concurrency.processutils [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ea469a2e-bf09-495c-9b5e-02ad38d32d40/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpg3mw4hpq" returned: 0 in 0.137s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  8 06:14:33 np0005475493 nova_compute[262220]: 2025-10-08 10:14:33.667 2 DEBUG nova.storage.rbd_utils [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] rbd image ea469a2e-bf09-495c-9b5e-02ad38d32d40_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  8 06:14:33 np0005475493 nova_compute[262220]: 2025-10-08 10:14:33.671 2 DEBUG oslo_concurrency.processutils [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ea469a2e-bf09-495c-9b5e-02ad38d32d40/disk.config ea469a2e-bf09-495c-9b5e-02ad38d32d40_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  8 06:14:33 np0005475493 nova_compute[262220]: 2025-10-08 10:14:33.881 2 DEBUG oslo_concurrency.processutils [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ea469a2e-bf09-495c-9b5e-02ad38d32d40/disk.config ea469a2e-bf09-495c-9b5e-02ad38d32d40_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.210s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  8 06:14:33 np0005475493 nova_compute[262220]: 2025-10-08 10:14:33.883 2 INFO nova.virt.libvirt.driver [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Deleting local config drive /var/lib/nova/instances/ea469a2e-bf09-495c-9b5e-02ad38d32d40/disk.config because it was imported into RBD.#033[00m
Oct  8 06:14:33 np0005475493 systemd[1]: Starting libvirt secret daemon...
Oct  8 06:14:33 np0005475493 systemd[1]: Started libvirt secret daemon.
Oct  8 06:14:34 np0005475493 kernel: tapbe4ec274-2a: entered promiscuous mode
Oct  8 06:14:34 np0005475493 NetworkManager[44872]: <info>  [1759918474.0074] manager: (tapbe4ec274-2a): new Tun device (/org/freedesktop/NetworkManager/Devices/34)
Oct  8 06:14:34 np0005475493 nova_compute[262220]: 2025-10-08 10:14:34.048 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:14:34 np0005475493 ovn_controller[153187]: 2025-10-08T10:14:34Z|00037|binding|INFO|Claiming lport be4ec274-2a90-48e8-bd51-fd01f3c659da for this chassis.
Oct  8 06:14:34 np0005475493 ovn_controller[153187]: 2025-10-08T10:14:34Z|00038|binding|INFO|be4ec274-2a90-48e8-bd51-fd01f3c659da: Claiming fa:16:3e:e6:b0:e0 10.100.0.3
Oct  8 06:14:34 np0005475493 nova_compute[262220]: 2025-10-08 10:14:34.053 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:14:34 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:14:34.066 163175 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e6:b0:e0 10.100.0.3'], port_security=['fa:16:3e:e6:b0:e0 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'ea469a2e-bf09-495c-9b5e-02ad38d32d40', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-834a886f-bb33-49ed-b47e-ef0308a38e89', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0edb2c85b88b4b168a4b2e8c5ed4a05c', 'neutron:revision_number': '2', 'neutron:security_group_ids': '13817d67-6af8-4060-9f0c-16a7fd8532c0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2eaf1a8f-1880-48d7-9974-4c1e9169efe5, chassis=[<ovs.db.idl.Row object at 0x7f191f105a60>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f191f105a60>], logical_port=be4ec274-2a90-48e8-bd51-fd01f3c659da) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  8 06:14:34 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:14:34.067 163175 INFO neutron.agent.ovn.metadata.agent [-] Port be4ec274-2a90-48e8-bd51-fd01f3c659da in datapath 834a886f-bb33-49ed-b47e-ef0308a38e89 bound to our chassis#033[00m
Oct  8 06:14:34 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:14:34.069 163175 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 834a886f-bb33-49ed-b47e-ef0308a38e89#033[00m
Oct  8 06:14:34 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:14:34.086 267781 DEBUG oslo.privsep.daemon [-] privsep: reply[5de2ab1a-ef6d-4f1c-8c1c-20ff9e68c1ea]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  8 06:14:34 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:14:34.087 163175 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap834a886f-b1 in ovnmeta-834a886f-bb33-49ed-b47e-ef0308a38e89 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  8 06:14:34 np0005475493 systemd-machined[216030]: New machine qemu-2-instance-00000006.
Oct  8 06:14:34 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:14:34.090 267781 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap834a886f-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  8 06:14:34 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:14:34.091 267781 DEBUG oslo.privsep.daemon [-] privsep: reply[47337ed7-4b78-439d-9c6f-6ed88c6cde3d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  8 06:14:34 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:14:34.092 267781 DEBUG oslo.privsep.daemon [-] privsep: reply[683e143f-311f-4444-8d20-484b90e2758a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  8 06:14:34 np0005475493 systemd[1]: Started Virtual Machine qemu-2-instance-00000006.
Oct  8 06:14:34 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:14:34.114 163290 DEBUG oslo.privsep.daemon [-] privsep: reply[c1ae3106-f580-4530-99f0-1d6cd00856c5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  8 06:14:34 np0005475493 nova_compute[262220]: 2025-10-08 10:14:34.127 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:14:34 np0005475493 ovn_controller[153187]: 2025-10-08T10:14:34Z|00039|binding|INFO|Setting lport be4ec274-2a90-48e8-bd51-fd01f3c659da ovn-installed in OVS
Oct  8 06:14:34 np0005475493 ovn_controller[153187]: 2025-10-08T10:14:34Z|00040|binding|INFO|Setting lport be4ec274-2a90-48e8-bd51-fd01f3c659da up in Southbound
Oct  8 06:14:34 np0005475493 nova_compute[262220]: 2025-10-08 10:14:34.133 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:14:34 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:14:34.138 267781 DEBUG oslo.privsep.daemon [-] privsep: reply[da7ada3e-18a7-41e1-b1c3-b88e6dc893be]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  8 06:14:34 np0005475493 systemd-udevd[274322]: Network interface NamePolicy= disabled on kernel command line.
Oct  8 06:14:34 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:14:34 np0005475493 NetworkManager[44872]: <info>  [1759918474.1693] device (tapbe4ec274-2a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  8 06:14:34 np0005475493 NetworkManager[44872]: <info>  [1759918474.1709] device (tapbe4ec274-2a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  8 06:14:34 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:14:34.177 267799 DEBUG oslo.privsep.daemon [-] privsep: reply[1ddc5b70-bde4-49ed-ac3a-95b45637b4d1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  8 06:14:34 np0005475493 NetworkManager[44872]: <info>  [1759918474.1876] manager: (tap834a886f-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/35)
Oct  8 06:14:34 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:14:34.186 267781 DEBUG oslo.privsep.daemon [-] privsep: reply[71f1919f-0064-4bda-936d-470061e1201c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  8 06:14:34 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v899: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Oct  8 06:14:34 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:14:34.229 267799 DEBUG oslo.privsep.daemon [-] privsep: reply[0e281fc7-9b8f-4bc4-b3b7-638933d7d01b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  8 06:14:34 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:14:34.233 267799 DEBUG oslo.privsep.daemon [-] privsep: reply[a8a9ba21-ffa5-41a4-b10e-a0c4413c8d26]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  8 06:14:34 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:14:34 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac6c002600 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:14:34 np0005475493 NetworkManager[44872]: <info>  [1759918474.2713] device (tap834a886f-b0): carrier: link connected
Oct  8 06:14:34 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:14:34.278 267799 DEBUG oslo.privsep.daemon [-] privsep: reply[3663e998-3c9c-4088-9a66-d7aea36c704d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  8 06:14:34 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:14:34.304 267781 DEBUG oslo.privsep.daemon [-] privsep: reply[45be398f-3275-4907-8361-f6bef3c9512e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap834a886f-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:16:82:b6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 18], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 443290, 'reachable_time': 36315, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 274352, 'error': None, 'target': 'ovnmeta-834a886f-bb33-49ed-b47e-ef0308a38e89', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  8 06:14:34 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:14:34.324 267781 DEBUG oslo.privsep.daemon [-] privsep: reply[6adc5d0c-de0f-4db4-a29c-c06a20d2592d]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe16:82b6'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 443290, 'tstamp': 443290}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 274353, 'error': None, 'target': 'ovnmeta-834a886f-bb33-49ed-b47e-ef0308a38e89', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  8 06:14:34 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:14:34.346 267781 DEBUG oslo.privsep.daemon [-] privsep: reply[aefe0c1b-1536-4087-bef6-ef40930dcdc3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap834a886f-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:16:82:b6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 18], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 443290, 'reachable_time': 36315, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 274354, 'error': None, 'target': 'ovnmeta-834a886f-bb33-49ed-b47e-ef0308a38e89', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  8 06:14:34 np0005475493 ceph-mon[73572]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #57. Immutable memtables: 0.
Oct  8 06:14:34 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:14:34.368623) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  8 06:14:34 np0005475493 ceph-mon[73572]: rocksdb: [db/flush_job.cc:856] [default] [JOB 29] Flushing memtable with next log file: 57
Oct  8 06:14:34 np0005475493 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759918474368721, "job": 29, "event": "flush_started", "num_memtables": 1, "num_entries": 2122, "num_deletes": 251, "total_data_size": 4184829, "memory_usage": 4252152, "flush_reason": "Manual Compaction"}
Oct  8 06:14:34 np0005475493 ceph-mon[73572]: rocksdb: [db/flush_job.cc:885] [default] [JOB 29] Level-0 flush table #58: started
Oct  8 06:14:34 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:14:34.392 267781 DEBUG oslo.privsep.daemon [-] privsep: reply[f92ebba2-02db-4ded-8f2d-9ab8900ba16d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  8 06:14:34 np0005475493 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759918474400020, "cf_name": "default", "job": 29, "event": "table_file_creation", "file_number": 58, "file_size": 4063382, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 24787, "largest_seqno": 26908, "table_properties": {"data_size": 4054034, "index_size": 5842, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2437, "raw_key_size": 19542, "raw_average_key_size": 20, "raw_value_size": 4035327, "raw_average_value_size": 4181, "num_data_blocks": 257, "num_entries": 965, "num_filter_entries": 965, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759918264, "oldest_key_time": 1759918264, "file_creation_time": 1759918474, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fe81d9b-468a-4413-adf1-4e4bd83383d4", "db_session_id": "KN4HYS7VUCE6V85JIQOU", "orig_file_number": 58, "seqno_to_time_mapping": "N/A"}}
Oct  8 06:14:34 np0005475493 ceph-mon[73572]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 29] Flush lasted 31445 microseconds, and 12460 cpu microseconds.
Oct  8 06:14:34 np0005475493 ceph-mon[73572]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  8 06:14:34 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:14:34.400084) [db/flush_job.cc:967] [default] [JOB 29] Level-0 flush table #58: 4063382 bytes OK
Oct  8 06:14:34 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:14:34.400112) [db/memtable_list.cc:519] [default] Level-0 commit table #58 started
Oct  8 06:14:34 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:14:34.401564) [db/memtable_list.cc:722] [default] Level-0 commit table #58: memtable #1 done
Oct  8 06:14:34 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:14:34.401579) EVENT_LOG_v1 {"time_micros": 1759918474401574, "job": 29, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  8 06:14:34 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:14:34.401606) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  8 06:14:34 np0005475493 ceph-mon[73572]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 29] Try to delete WAL files size 4176252, prev total WAL file size 4176252, number of live WAL files 2.
Oct  8 06:14:34 np0005475493 ceph-mon[73572]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000054.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  8 06:14:34 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:14:34.402680) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032303038' seq:72057594037927935, type:22 .. '7061786F730032323630' seq:0, type:0; will stop at (end)
Oct  8 06:14:34 np0005475493 ceph-mon[73572]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 30] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  8 06:14:34 np0005475493 ceph-mon[73572]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 29 Base level 0, inputs: [58(3968KB)], [56(11MB)]
Oct  8 06:14:34 np0005475493 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759918474403274, "job": 30, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [58], "files_L6": [56], "score": -1, "input_data_size": 16442724, "oldest_snapshot_seqno": -1}
Oct  8 06:14:34 np0005475493 nova_compute[262220]: 2025-10-08 10:14:34.470 2 DEBUG nova.compute.manager [req-3afe6bfa-060a-478f-832f-6cff0bcdfea9 req-2a69d28c-bf31-493c-918b-b92eb2157dfc 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Received event network-vif-plugged-be4ec274-2a90-48e8-bd51-fd01f3c659da external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  8 06:14:34 np0005475493 nova_compute[262220]: 2025-10-08 10:14:34.471 2 DEBUG oslo_concurrency.lockutils [req-3afe6bfa-060a-478f-832f-6cff0bcdfea9 req-2a69d28c-bf31-493c-918b-b92eb2157dfc 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Acquiring lock "ea469a2e-bf09-495c-9b5e-02ad38d32d40-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  8 06:14:34 np0005475493 nova_compute[262220]: 2025-10-08 10:14:34.471 2 DEBUG oslo_concurrency.lockutils [req-3afe6bfa-060a-478f-832f-6cff0bcdfea9 req-2a69d28c-bf31-493c-918b-b92eb2157dfc 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Lock "ea469a2e-bf09-495c-9b5e-02ad38d32d40-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  8 06:14:34 np0005475493 nova_compute[262220]: 2025-10-08 10:14:34.472 2 DEBUG oslo_concurrency.lockutils [req-3afe6bfa-060a-478f-832f-6cff0bcdfea9 req-2a69d28c-bf31-493c-918b-b92eb2157dfc 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Lock "ea469a2e-bf09-495c-9b5e-02ad38d32d40-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  8 06:14:34 np0005475493 nova_compute[262220]: 2025-10-08 10:14:34.472 2 DEBUG nova.compute.manager [req-3afe6bfa-060a-478f-832f-6cff0bcdfea9 req-2a69d28c-bf31-493c-918b-b92eb2157dfc 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Processing event network-vif-plugged-be4ec274-2a90-48e8-bd51-fd01f3c659da _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  8 06:14:34 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:14:34.473 267781 DEBUG oslo.privsep.daemon [-] privsep: reply[e4c3cf43-9e9c-4362-9406-ec72888cd737]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  8 06:14:34 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:14:34.475 163175 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap834a886f-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  8 06:14:34 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:14:34.475 163175 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  8 06:14:34 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:14:34.476 163175 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap834a886f-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  8 06:14:34 np0005475493 nova_compute[262220]: 2025-10-08 10:14:34.478 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:14:34 np0005475493 NetworkManager[44872]: <info>  [1759918474.4796] manager: (tap834a886f-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/36)
Oct  8 06:14:34 np0005475493 kernel: tap834a886f-b0: entered promiscuous mode
Oct  8 06:14:34 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:14:34.490 163175 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap834a886f-b0, col_values=(('external_ids', {'iface-id': 'f613d263-6ad2-4e23-84bc-b066c6b6b34a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  8 06:14:34 np0005475493 ovn_controller[153187]: 2025-10-08T10:14:34Z|00041|binding|INFO|Releasing lport f613d263-6ad2-4e23-84bc-b066c6b6b34a from this chassis (sb_readonly=0)
Oct  8 06:14:34 np0005475493 nova_compute[262220]: 2025-10-08 10:14:34.491 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:14:34 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:14:34.498 163175 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/834a886f-bb33-49ed-b47e-ef0308a38e89.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/834a886f-bb33-49ed-b47e-ef0308a38e89.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  8 06:14:34 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:14:34.500 267781 DEBUG oslo.privsep.daemon [-] privsep: reply[788e2237-d1d7-41f7-9bf9-b0888795e7e3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  8 06:14:34 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:14:34.501 163175 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  8 06:14:34 np0005475493 ovn_metadata_agent[163169]: global
Oct  8 06:14:34 np0005475493 ovn_metadata_agent[163169]:    log         /dev/log local0 debug
Oct  8 06:14:34 np0005475493 ovn_metadata_agent[163169]:    log-tag     haproxy-metadata-proxy-834a886f-bb33-49ed-b47e-ef0308a38e89
Oct  8 06:14:34 np0005475493 ovn_metadata_agent[163169]:    user        root
Oct  8 06:14:34 np0005475493 ovn_metadata_agent[163169]:    group       root
Oct  8 06:14:34 np0005475493 ovn_metadata_agent[163169]:    maxconn     1024
Oct  8 06:14:34 np0005475493 ovn_metadata_agent[163169]:    pidfile     /var/lib/neutron/external/pids/834a886f-bb33-49ed-b47e-ef0308a38e89.pid.haproxy
Oct  8 06:14:34 np0005475493 ovn_metadata_agent[163169]:    daemon
Oct  8 06:14:34 np0005475493 ovn_metadata_agent[163169]: 
Oct  8 06:14:34 np0005475493 ovn_metadata_agent[163169]: defaults
Oct  8 06:14:34 np0005475493 ovn_metadata_agent[163169]:    log global
Oct  8 06:14:34 np0005475493 ovn_metadata_agent[163169]:    mode http
Oct  8 06:14:34 np0005475493 ovn_metadata_agent[163169]:    option httplog
Oct  8 06:14:34 np0005475493 ovn_metadata_agent[163169]:    option dontlognull
Oct  8 06:14:34 np0005475493 ovn_metadata_agent[163169]:    option http-server-close
Oct  8 06:14:34 np0005475493 ovn_metadata_agent[163169]:    option forwardfor
Oct  8 06:14:34 np0005475493 ovn_metadata_agent[163169]:    retries                 3
Oct  8 06:14:34 np0005475493 ovn_metadata_agent[163169]:    timeout http-request    30s
Oct  8 06:14:34 np0005475493 ovn_metadata_agent[163169]:    timeout connect         30s
Oct  8 06:14:34 np0005475493 ovn_metadata_agent[163169]:    timeout client          32s
Oct  8 06:14:34 np0005475493 ovn_metadata_agent[163169]:    timeout server          32s
Oct  8 06:14:34 np0005475493 ovn_metadata_agent[163169]:    timeout http-keep-alive 30s
Oct  8 06:14:34 np0005475493 ovn_metadata_agent[163169]: 
Oct  8 06:14:34 np0005475493 ovn_metadata_agent[163169]: 
Oct  8 06:14:34 np0005475493 ovn_metadata_agent[163169]: listen listener
Oct  8 06:14:34 np0005475493 ovn_metadata_agent[163169]:    bind 169.254.169.254:80
Oct  8 06:14:34 np0005475493 ovn_metadata_agent[163169]:    server metadata /var/lib/neutron/metadata_proxy
Oct  8 06:14:34 np0005475493 ovn_metadata_agent[163169]:    http-request add-header X-OVN-Network-ID 834a886f-bb33-49ed-b47e-ef0308a38e89
Oct  8 06:14:34 np0005475493 ovn_metadata_agent[163169]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  8 06:14:34 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:14:34.503 163175 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-834a886f-bb33-49ed-b47e-ef0308a38e89', 'env', 'PROCESS_TAG=haproxy-834a886f-bb33-49ed-b47e-ef0308a38e89', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/834a886f-bb33-49ed-b47e-ef0308a38e89.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  8 06:14:34 np0005475493 ceph-mon[73572]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 30] Generated table #59: 5816 keys, 14317326 bytes, temperature: kUnknown
Oct  8 06:14:34 np0005475493 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759918474505845, "cf_name": "default", "job": 30, "event": "table_file_creation", "file_number": 59, "file_size": 14317326, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14277846, "index_size": 23818, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14597, "raw_key_size": 147901, "raw_average_key_size": 25, "raw_value_size": 14172310, "raw_average_value_size": 2436, "num_data_blocks": 970, "num_entries": 5816, "num_filter_entries": 5816, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759916581, "oldest_key_time": 0, "file_creation_time": 1759918474, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fe81d9b-468a-4413-adf1-4e4bd83383d4", "db_session_id": "KN4HYS7VUCE6V85JIQOU", "orig_file_number": 59, "seqno_to_time_mapping": "N/A"}}
Oct  8 06:14:34 np0005475493 ceph-mon[73572]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  8 06:14:34 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:14:34.506105) [db/compaction/compaction_job.cc:1663] [default] [JOB 30] Compacted 1@0 + 1@6 files to L6 => 14317326 bytes
Oct  8 06:14:34 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:14:34.507287) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 160.2 rd, 139.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.9, 11.8 +0.0 blob) out(13.7 +0.0 blob), read-write-amplify(7.6) write-amplify(3.5) OK, records in: 6334, records dropped: 518 output_compression: NoCompression
Oct  8 06:14:34 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:14:34.507304) EVENT_LOG_v1 {"time_micros": 1759918474507297, "job": 30, "event": "compaction_finished", "compaction_time_micros": 102640, "compaction_time_cpu_micros": 32688, "output_level": 6, "num_output_files": 1, "total_output_size": 14317326, "num_input_records": 6334, "num_output_records": 5816, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  8 06:14:34 np0005475493 ceph-mon[73572]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000058.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  8 06:14:34 np0005475493 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759918474507988, "job": 30, "event": "table_file_deletion", "file_number": 58}
Oct  8 06:14:34 np0005475493 ceph-mon[73572]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000056.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  8 06:14:34 np0005475493 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759918474510366, "job": 30, "event": "table_file_deletion", "file_number": 56}
Oct  8 06:14:34 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:14:34.402577) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  8 06:14:34 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:14:34.510421) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  8 06:14:34 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:14:34.510457) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  8 06:14:34 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:14:34.510459) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  8 06:14:34 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:14:34.510461) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  8 06:14:34 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:14:34.510462) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  8 06:14:34 np0005475493 nova_compute[262220]: 2025-10-08 10:14:34.512 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:14:34 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:14:34 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca000a3f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:14:34 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:14:34 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:14:34 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:14:34.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:14:34 np0005475493 podman[274428]: 2025-10-08 10:14:34.891130302 +0000 UTC m=+0.058209766 container create 2c85508ebe5bbfbd488053b850f882989cf060afde04acce5a32fa0a4320dce9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-834a886f-bb33-49ed-b47e-ef0308a38e89, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  8 06:14:34 np0005475493 systemd[1]: Started libpod-conmon-2c85508ebe5bbfbd488053b850f882989cf060afde04acce5a32fa0a4320dce9.scope.
Oct  8 06:14:34 np0005475493 podman[274428]: 2025-10-08 10:14:34.857256681 +0000 UTC m=+0.024336165 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct  8 06:14:34 np0005475493 systemd[1]: Started libcrun container.
Oct  8 06:14:34 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/841b76c2441b0eb7f658de0d9799efa6ab00baf820e9b70f7311256c5c904ae8/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  8 06:14:34 np0005475493 podman[274428]: 2025-10-08 10:14:34.985323326 +0000 UTC m=+0.152402820 container init 2c85508ebe5bbfbd488053b850f882989cf060afde04acce5a32fa0a4320dce9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-834a886f-bb33-49ed-b47e-ef0308a38e89, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct  8 06:14:34 np0005475493 podman[274428]: 2025-10-08 10:14:34.993985865 +0000 UTC m=+0.161065329 container start 2c85508ebe5bbfbd488053b850f882989cf060afde04acce5a32fa0a4320dce9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-834a886f-bb33-49ed-b47e-ef0308a38e89, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct  8 06:14:35 np0005475493 neutron-haproxy-ovnmeta-834a886f-bb33-49ed-b47e-ef0308a38e89[274443]: [NOTICE]   (274447) : New worker (274449) forked
Oct  8 06:14:35 np0005475493 neutron-haproxy-ovnmeta-834a886f-bb33-49ed-b47e-ef0308a38e89[274443]: [NOTICE]   (274447) : Loading success.
Oct  8 06:14:35 np0005475493 nova_compute[262220]: 2025-10-08 10:14:35.041 2 DEBUG nova.network.neutron [req-6719b070-2f01-4279-a5fe-5a610edfd379 req-0fbee54e-a6e0-4744-a94a-3c84cef15802 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Updated VIF entry in instance network info cache for port be4ec274-2a90-48e8-bd51-fd01f3c659da. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  8 06:14:35 np0005475493 nova_compute[262220]: 2025-10-08 10:14:35.041 2 DEBUG nova.network.neutron [req-6719b070-2f01-4279-a5fe-5a610edfd379 req-0fbee54e-a6e0-4744-a94a-3c84cef15802 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Updating instance_info_cache with network_info: [{"id": "be4ec274-2a90-48e8-bd51-fd01f3c659da", "address": "fa:16:3e:e6:b0:e0", "network": {"id": "834a886f-bb33-49ed-b47e-ef0308a38e89", "bridge": "br-int", "label": "tempest-network-smoke--1879322246", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0edb2c85b88b4b168a4b2e8c5ed4a05c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe4ec274-2a", "ovs_interfaceid": "be4ec274-2a90-48e8-bd51-fd01f3c659da", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  8 06:14:35 np0005475493 nova_compute[262220]: 2025-10-08 10:14:35.059 2 DEBUG oslo_concurrency.lockutils [req-6719b070-2f01-4279-a5fe-5a610edfd379 req-0fbee54e-a6e0-4744-a94a-3c84cef15802 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Releasing lock "refresh_cache-ea469a2e-bf09-495c-9b5e-02ad38d32d40" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  8 06:14:35 np0005475493 nova_compute[262220]: 2025-10-08 10:14:35.116 2 DEBUG nova.compute.manager [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  8 06:14:35 np0005475493 nova_compute[262220]: 2025-10-08 10:14:35.118 2 DEBUG nova.virt.driver [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] Emitting event <LifecycleEvent: 1759918475.1170862, ea469a2e-bf09-495c-9b5e-02ad38d32d40 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  8 06:14:35 np0005475493 nova_compute[262220]: 2025-10-08 10:14:35.118 2 INFO nova.compute.manager [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] VM Started (Lifecycle Event)#033[00m
Oct  8 06:14:35 np0005475493 nova_compute[262220]: 2025-10-08 10:14:35.122 2 DEBUG nova.virt.libvirt.driver [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  8 06:14:35 np0005475493 nova_compute[262220]: 2025-10-08 10:14:35.126 2 INFO nova.virt.libvirt.driver [-] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Instance spawned successfully.#033[00m
Oct  8 06:14:35 np0005475493 nova_compute[262220]: 2025-10-08 10:14:35.127 2 DEBUG nova.virt.libvirt.driver [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  8 06:14:35 np0005475493 nova_compute[262220]: 2025-10-08 10:14:35.139 2 DEBUG nova.compute.manager [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  8 06:14:35 np0005475493 nova_compute[262220]: 2025-10-08 10:14:35.143 2 DEBUG nova.compute.manager [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  8 06:14:35 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:14:35 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90005240 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:14:35 np0005475493 nova_compute[262220]: 2025-10-08 10:14:35.271 2 DEBUG nova.virt.libvirt.driver [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  8 06:14:35 np0005475493 nova_compute[262220]: 2025-10-08 10:14:35.271 2 DEBUG nova.virt.libvirt.driver [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  8 06:14:35 np0005475493 nova_compute[262220]: 2025-10-08 10:14:35.272 2 DEBUG nova.virt.libvirt.driver [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  8 06:14:35 np0005475493 nova_compute[262220]: 2025-10-08 10:14:35.272 2 DEBUG nova.virt.libvirt.driver [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  8 06:14:35 np0005475493 nova_compute[262220]: 2025-10-08 10:14:35.273 2 DEBUG nova.virt.libvirt.driver [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  8 06:14:35 np0005475493 nova_compute[262220]: 2025-10-08 10:14:35.273 2 DEBUG nova.virt.libvirt.driver [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  8 06:14:35 np0005475493 nova_compute[262220]: 2025-10-08 10:14:35.277 2 INFO nova.compute.manager [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  8 06:14:35 np0005475493 nova_compute[262220]: 2025-10-08 10:14:35.278 2 DEBUG nova.virt.driver [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] Emitting event <LifecycleEvent: 1759918475.1184063, ea469a2e-bf09-495c-9b5e-02ad38d32d40 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  8 06:14:35 np0005475493 nova_compute[262220]: 2025-10-08 10:14:35.278 2 INFO nova.compute.manager [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] VM Paused (Lifecycle Event)#033[00m
Oct  8 06:14:35 np0005475493 nova_compute[262220]: 2025-10-08 10:14:35.302 2 DEBUG nova.compute.manager [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  8 06:14:35 np0005475493 nova_compute[262220]: 2025-10-08 10:14:35.306 2 DEBUG nova.virt.driver [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] Emitting event <LifecycleEvent: 1759918475.1213126, ea469a2e-bf09-495c-9b5e-02ad38d32d40 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  8 06:14:35 np0005475493 nova_compute[262220]: 2025-10-08 10:14:35.306 2 INFO nova.compute.manager [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] VM Resumed (Lifecycle Event)#033[00m
Oct  8 06:14:35 np0005475493 nova_compute[262220]: 2025-10-08 10:14:35.334 2 DEBUG nova.compute.manager [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  8 06:14:35 np0005475493 nova_compute[262220]: 2025-10-08 10:14:35.337 2 DEBUG nova.compute.manager [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  8 06:14:35 np0005475493 nova_compute[262220]: 2025-10-08 10:14:35.352 2 INFO nova.compute.manager [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Took 10.64 seconds to spawn the instance on the hypervisor.#033[00m
Oct  8 06:14:35 np0005475493 nova_compute[262220]: 2025-10-08 10:14:35.353 2 DEBUG nova.compute.manager [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  8 06:14:35 np0005475493 nova_compute[262220]: 2025-10-08 10:14:35.361 2 INFO nova.compute.manager [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  8 06:14:35 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:14:35 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 06:14:35 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:14:35.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 06:14:35 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:14:35] "GET /metrics HTTP/1.1" 200 48481 "" "Prometheus/2.51.0"
Oct  8 06:14:35 np0005475493 nova_compute[262220]: 2025-10-08 10:14:35.736 2 INFO nova.compute.manager [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Took 12.04 seconds to build instance.#033[00m
Oct  8 06:14:35 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:14:35] "GET /metrics HTTP/1.1" 200 48481 "" "Prometheus/2.51.0"
Oct  8 06:14:35 np0005475493 nova_compute[262220]: 2025-10-08 10:14:35.886 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:14:35 np0005475493 nova_compute[262220]: 2025-10-08 10:14:35.886 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Oct  8 06:14:35 np0005475493 nova_compute[262220]: 2025-10-08 10:14:35.985 2 DEBUG oslo_concurrency.lockutils [None req-13958c94-4541-45a7-a57b-41b116fd19e1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Lock "ea469a2e-bf09-495c-9b5e-02ad38d32d40" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.393s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  8 06:14:36 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v900: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Oct  8 06:14:36 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:14:36 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac94004a30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:14:36 np0005475493 nova_compute[262220]: 2025-10-08 10:14:36.554 2 DEBUG nova.compute.manager [req-74f13856-d5f3-4402-a60e-e31749374d03 req-5f7b899e-1f3c-40c8-8a24-298e0935669b 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Received event network-vif-plugged-be4ec274-2a90-48e8-bd51-fd01f3c659da external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  8 06:14:36 np0005475493 nova_compute[262220]: 2025-10-08 10:14:36.554 2 DEBUG oslo_concurrency.lockutils [req-74f13856-d5f3-4402-a60e-e31749374d03 req-5f7b899e-1f3c-40c8-8a24-298e0935669b 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Acquiring lock "ea469a2e-bf09-495c-9b5e-02ad38d32d40-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  8 06:14:36 np0005475493 nova_compute[262220]: 2025-10-08 10:14:36.554 2 DEBUG oslo_concurrency.lockutils [req-74f13856-d5f3-4402-a60e-e31749374d03 req-5f7b899e-1f3c-40c8-8a24-298e0935669b 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Lock "ea469a2e-bf09-495c-9b5e-02ad38d32d40-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  8 06:14:36 np0005475493 nova_compute[262220]: 2025-10-08 10:14:36.555 2 DEBUG oslo_concurrency.lockutils [req-74f13856-d5f3-4402-a60e-e31749374d03 req-5f7b899e-1f3c-40c8-8a24-298e0935669b 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Lock "ea469a2e-bf09-495c-9b5e-02ad38d32d40-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  8 06:14:36 np0005475493 nova_compute[262220]: 2025-10-08 10:14:36.555 2 DEBUG nova.compute.manager [req-74f13856-d5f3-4402-a60e-e31749374d03 req-5f7b899e-1f3c-40c8-8a24-298e0935669b 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] No waiting events found dispatching network-vif-plugged-be4ec274-2a90-48e8-bd51-fd01f3c659da pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  8 06:14:36 np0005475493 nova_compute[262220]: 2025-10-08 10:14:36.555 2 WARNING nova.compute.manager [req-74f13856-d5f3-4402-a60e-e31749374d03 req-5f7b899e-1f3c-40c8-8a24-298e0935669b 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Received unexpected event network-vif-plugged-be4ec274-2a90-48e8-bd51-fd01f3c659da for instance with vm_state active and task_state None.#033[00m
Oct  8 06:14:36 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:14:36 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac6c002600 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:14:36 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:14:36 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:14:36 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:14:36.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:14:37 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:14:37.151Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:14:37 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:14:37 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca000b4f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:14:37 np0005475493 nova_compute[262220]: 2025-10-08 10:14:37.421 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:14:37 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:14:37 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:14:37 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:14:37.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:14:37 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:14:37 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 06:14:37 np0005475493 nova_compute[262220]: 2025-10-08 10:14:37.963 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:14:38 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v901: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Oct  8 06:14:38 np0005475493 ovn_controller[153187]: 2025-10-08T10:14:38Z|00042|binding|INFO|Releasing lport f613d263-6ad2-4e23-84bc-b066c6b6b34a from this chassis (sb_readonly=0)
Oct  8 06:14:38 np0005475493 NetworkManager[44872]: <info>  [1759918478.2198] manager: (patch-br-int-to-provnet-5eda3e1f-dd4f-4c7d-b5fb-d8c0d9996d6d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/37)
Oct  8 06:14:38 np0005475493 NetworkManager[44872]: <info>  [1759918478.2207] manager: (patch-provnet-5eda3e1f-dd4f-4c7d-b5fb-d8c0d9996d6d-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/38)
Oct  8 06:14:38 np0005475493 nova_compute[262220]: 2025-10-08 10:14:38.220 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:14:38 np0005475493 ovn_controller[153187]: 2025-10-08T10:14:38Z|00043|binding|INFO|Releasing lport f613d263-6ad2-4e23-84bc-b066c6b6b34a from this chassis (sb_readonly=0)
Oct  8 06:14:38 np0005475493 nova_compute[262220]: 2025-10-08 10:14:38.254 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:14:38 np0005475493 nova_compute[262220]: 2025-10-08 10:14:38.258 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:14:38 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:14:38 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90005240 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:14:38 np0005475493 nova_compute[262220]: 2025-10-08 10:14:38.575 2 DEBUG nova.compute.manager [req-8aa493a7-11bf-43a7-aec9-2ef6703fdbb0 req-b356d5b9-4e70-40c6-a136-5f3a7677b7f8 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Received event network-changed-be4ec274-2a90-48e8-bd51-fd01f3c659da external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  8 06:14:38 np0005475493 nova_compute[262220]: 2025-10-08 10:14:38.575 2 DEBUG nova.compute.manager [req-8aa493a7-11bf-43a7-aec9-2ef6703fdbb0 req-b356d5b9-4e70-40c6-a136-5f3a7677b7f8 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Refreshing instance network info cache due to event network-changed-be4ec274-2a90-48e8-bd51-fd01f3c659da. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  8 06:14:38 np0005475493 nova_compute[262220]: 2025-10-08 10:14:38.576 2 DEBUG oslo_concurrency.lockutils [req-8aa493a7-11bf-43a7-aec9-2ef6703fdbb0 req-b356d5b9-4e70-40c6-a136-5f3a7677b7f8 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Acquiring lock "refresh_cache-ea469a2e-bf09-495c-9b5e-02ad38d32d40" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  8 06:14:38 np0005475493 nova_compute[262220]: 2025-10-08 10:14:38.576 2 DEBUG oslo_concurrency.lockutils [req-8aa493a7-11bf-43a7-aec9-2ef6703fdbb0 req-b356d5b9-4e70-40c6-a136-5f3a7677b7f8 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Acquired lock "refresh_cache-ea469a2e-bf09-495c-9b5e-02ad38d32d40" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  8 06:14:38 np0005475493 nova_compute[262220]: 2025-10-08 10:14:38.576 2 DEBUG nova.network.neutron [req-8aa493a7-11bf-43a7-aec9-2ef6703fdbb0 req-b356d5b9-4e70-40c6-a136-5f3a7677b7f8 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Refreshing network info cache for port be4ec274-2a90-48e8-bd51-fd01f3c659da _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  8 06:14:38 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:14:38 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac94004a30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:14:38 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:14:38 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:14:38 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:14:38.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:14:39 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:14:39 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:14:39 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac6c002600 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:14:39 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:14:39 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:14:39 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:14:39.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:14:40 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v902: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Oct  8 06:14:40 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:14:40 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca000b4f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:14:40 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:14:40 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90005240 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:14:40 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:14:40 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:14:40 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:14:40.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:14:40 np0005475493 nova_compute[262220]: 2025-10-08 10:14:40.887 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:14:40 np0005475493 nova_compute[262220]: 2025-10-08 10:14:40.927 2 DEBUG nova.network.neutron [req-8aa493a7-11bf-43a7-aec9-2ef6703fdbb0 req-b356d5b9-4e70-40c6-a136-5f3a7677b7f8 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Updated VIF entry in instance network info cache for port be4ec274-2a90-48e8-bd51-fd01f3c659da. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  8 06:14:40 np0005475493 nova_compute[262220]: 2025-10-08 10:14:40.928 2 DEBUG nova.network.neutron [req-8aa493a7-11bf-43a7-aec9-2ef6703fdbb0 req-b356d5b9-4e70-40c6-a136-5f3a7677b7f8 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Updating instance_info_cache with network_info: [{"id": "be4ec274-2a90-48e8-bd51-fd01f3c659da", "address": "fa:16:3e:e6:b0:e0", "network": {"id": "834a886f-bb33-49ed-b47e-ef0308a38e89", "bridge": "br-int", "label": "tempest-network-smoke--1879322246", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.203", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0edb2c85b88b4b168a4b2e8c5ed4a05c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe4ec274-2a", "ovs_interfaceid": "be4ec274-2a90-48e8-bd51-fd01f3c659da", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  8 06:14:40 np0005475493 nova_compute[262220]: 2025-10-08 10:14:40.946 2 DEBUG oslo_concurrency.lockutils [req-8aa493a7-11bf-43a7-aec9-2ef6703fdbb0 req-b356d5b9-4e70-40c6-a136-5f3a7677b7f8 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Releasing lock "refresh_cache-ea469a2e-bf09-495c-9b5e-02ad38d32d40" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  8 06:14:41 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:14:41 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac94004a30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:14:41 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:14:41 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:14:41 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:14:41.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:14:41 np0005475493 nova_compute[262220]: 2025-10-08 10:14:41.897 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:14:41 np0005475493 podman[274466]: 2025-10-08 10:14:41.930015264 +0000 UTC m=+0.089444412 container health_status 750e81e9592502f2fa35d047c8ae9bd75eb38ed415148db558454f5281397e7f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.schema-version=1.0)
Oct  8 06:14:42 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v903: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Oct  8 06:14:42 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:14:42 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac6c002600 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:14:42 np0005475493 nova_compute[262220]: 2025-10-08 10:14:42.423 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:14:42 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:14:42 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca000b4f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:14:42 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:14:42 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 06:14:42 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:14:42.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 06:14:42 np0005475493 nova_compute[262220]: 2025-10-08 10:14:42.882 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:14:42 np0005475493 nova_compute[262220]: 2025-10-08 10:14:42.886 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:14:42 np0005475493 nova_compute[262220]: 2025-10-08 10:14:42.886 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:14:42 np0005475493 nova_compute[262220]: 2025-10-08 10:14:42.886 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Oct  8 06:14:42 np0005475493 nova_compute[262220]: 2025-10-08 10:14:42.906 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Oct  8 06:14:42 np0005475493 nova_compute[262220]: 2025-10-08 10:14:42.965 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:14:43 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:14:43 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90005240 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:14:43 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:14:43 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:14:43 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:14:43.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:14:43 np0005475493 nova_compute[262220]: 2025-10-08 10:14:43.906 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:14:43 np0005475493 nova_compute[262220]: 2025-10-08 10:14:43.932 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  8 06:14:43 np0005475493 nova_compute[262220]: 2025-10-08 10:14:43.932 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  8 06:14:43 np0005475493 nova_compute[262220]: 2025-10-08 10:14:43.933 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  8 06:14:43 np0005475493 nova_compute[262220]: 2025-10-08 10:14:43.933 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  8 06:14:43 np0005475493 nova_compute[262220]: 2025-10-08 10:14:43.933 2 DEBUG oslo_concurrency.processutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  8 06:14:44 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:14:44 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v904: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Oct  8 06:14:44 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:14:44 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac94004a30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:14:44 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct  8 06:14:44 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1149238324' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  8 06:14:44 np0005475493 nova_compute[262220]: 2025-10-08 10:14:44.420 2 DEBUG oslo_concurrency.processutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  8 06:14:44 np0005475493 nova_compute[262220]: 2025-10-08 10:14:44.485 2 DEBUG nova.virt.libvirt.driver [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] skipping disk for instance-00000006 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  8 06:14:44 np0005475493 nova_compute[262220]: 2025-10-08 10:14:44.486 2 DEBUG nova.virt.libvirt.driver [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] skipping disk for instance-00000006 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  8 06:14:44 np0005475493 nova_compute[262220]: 2025-10-08 10:14:44.644 2 WARNING nova.virt.libvirt.driver [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  8 06:14:44 np0005475493 nova_compute[262220]: 2025-10-08 10:14:44.645 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4438MB free_disk=59.96738052368164GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  8 06:14:44 np0005475493 nova_compute[262220]: 2025-10-08 10:14:44.646 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  8 06:14:44 np0005475493 nova_compute[262220]: 2025-10-08 10:14:44.646 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  8 06:14:44 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:14:44 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac6c002600 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:14:44 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:14:44 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:14:44 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:14:44.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:14:44 np0005475493 nova_compute[262220]: 2025-10-08 10:14:44.892 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Instance ea469a2e-bf09-495c-9b5e-02ad38d32d40 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  8 06:14:44 np0005475493 nova_compute[262220]: 2025-10-08 10:14:44.893 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  8 06:14:44 np0005475493 nova_compute[262220]: 2025-10-08 10:14:44.893 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  8 06:14:44 np0005475493 nova_compute[262220]: 2025-10-08 10:14:44.961 2 DEBUG nova.scheduler.client.report [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Refreshing inventories for resource provider 62e4b021-d3ae-43f9-883d-805e2c7d21a2 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Oct  8 06:14:45 np0005475493 nova_compute[262220]: 2025-10-08 10:14:45.020 2 DEBUG nova.scheduler.client.report [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Updating ProviderTree inventory for provider 62e4b021-d3ae-43f9-883d-805e2c7d21a2 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Oct  8 06:14:45 np0005475493 nova_compute[262220]: 2025-10-08 10:14:45.021 2 DEBUG nova.compute.provider_tree [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Updating inventory in ProviderTree for provider 62e4b021-d3ae-43f9-883d-805e2c7d21a2 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct  8 06:14:45 np0005475493 nova_compute[262220]: 2025-10-08 10:14:45.036 2 DEBUG nova.scheduler.client.report [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Refreshing aggregate associations for resource provider 62e4b021-d3ae-43f9-883d-805e2c7d21a2, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct  8 06:14:45 np0005475493 nova_compute[262220]: 2025-10-08 10:14:45.068 2 DEBUG nova.scheduler.client.report [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Refreshing trait associations for resource provider 62e4b021-d3ae-43f9-883d-805e2c7d21a2, traits: HW_CPU_X86_CLMUL,HW_CPU_X86_AVX,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_FMA3,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_FDC,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSE4A,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_ACCELERATORS,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_F16C,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SSE41,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_BMI2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_AMD_SVM,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_MMX,HW_CPU_X86_AESNI,COMPUTE_STORAGE_BUS_SATA,COMPUTE_RESCUE_BFV,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_AVX2,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_ABM,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_SHA,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_SSSE3,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SVM,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_BMI,HW_CPU_X86_SSE2 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct  8 06:14:45 np0005475493 nova_compute[262220]: 2025-10-08 10:14:45.098 2 DEBUG oslo_concurrency.processutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  8 06:14:45 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:14:45 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca000b4f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:14:45 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:14:45 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:14:45 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:14:45.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:14:45 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct  8 06:14:45 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3390333083' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  8 06:14:45 np0005475493 nova_compute[262220]: 2025-10-08 10:14:45.607 2 DEBUG oslo_concurrency.processutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.509s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  8 06:14:45 np0005475493 nova_compute[262220]: 2025-10-08 10:14:45.613 2 DEBUG nova.compute.provider_tree [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Inventory has not changed in ProviderTree for provider: 62e4b021-d3ae-43f9-883d-805e2c7d21a2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  8 06:14:45 np0005475493 nova_compute[262220]: 2025-10-08 10:14:45.628 2 DEBUG nova.scheduler.client.report [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Inventory has not changed for provider 62e4b021-d3ae-43f9-883d-805e2c7d21a2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  8 06:14:45 np0005475493 nova_compute[262220]: 2025-10-08 10:14:45.658 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct  8 06:14:45 np0005475493 nova_compute[262220]: 2025-10-08 10:14:45.659 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.013s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  8 06:14:45 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:14:45] "GET /metrics HTTP/1.1" 200 48481 "" "Prometheus/2.51.0"
Oct  8 06:14:45 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:14:45] "GET /metrics HTTP/1.1" 200 48481 "" "Prometheus/2.51.0"
Oct  8 06:14:46 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v905: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 85 B/s wr, 69 op/s
Oct  8 06:14:46 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:14:46 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90005240 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:14:46 np0005475493 nova_compute[262220]: 2025-10-08 10:14:46.640 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  8 06:14:46 np0005475493 nova_compute[262220]: 2025-10-08 10:14:46.659 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  8 06:14:46 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:14:46 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac94005350 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:14:46 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:14:46 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 06:14:46 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:14:46.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 06:14:46 np0005475493 nova_compute[262220]: 2025-10-08 10:14:46.886 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  8 06:14:46 np0005475493 nova_compute[262220]: 2025-10-08 10:14:46.886 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct  8 06:14:46 np0005475493 nova_compute[262220]: 2025-10-08 10:14:46.887 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct  8 06:14:47 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:14:47.152Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct  8 06:14:47 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:14:47.152Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:14:47 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:14:47 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac6c002600 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:14:47 np0005475493 nova_compute[262220]: 2025-10-08 10:14:47.324 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Acquiring lock "refresh_cache-ea469a2e-bf09-495c-9b5e-02ad38d32d40" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  8 06:14:47 np0005475493 nova_compute[262220]: 2025-10-08 10:14:47.325 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Acquired lock "refresh_cache-ea469a2e-bf09-495c-9b5e-02ad38d32d40" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  8 06:14:47 np0005475493 nova_compute[262220]: 2025-10-08 10:14:47.325 2 DEBUG nova.network.neutron [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct  8 06:14:47 np0005475493 nova_compute[262220]: 2025-10-08 10:14:47.325 2 DEBUG nova.objects.instance [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lazy-loading 'info_cache' on Instance uuid ea469a2e-bf09-495c-9b5e-02ad38d32d40 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  8 06:14:47 np0005475493 nova_compute[262220]: 2025-10-08 10:14:47.426 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  8 06:14:47 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:14:47 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:14:47 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:14:47.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:14:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] Optimize plan auto_2025-10-08_10:14:47
Oct  8 06:14:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  8 06:14:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] do_upmap
Oct  8 06:14:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] pools ['default.rgw.control', 'default.rgw.meta', 'volumes', '.nfs', '.rgw.root', 'backups', 'vms', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'default.rgw.log', 'images', '.mgr']
Oct  8 06:14:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] prepared 0/10 upmap changes
Oct  8 06:14:47 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  8 06:14:47 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  8 06:14:47 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:14:47 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 06:14:47 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 06:14:47 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 06:14:47 np0005475493 nova_compute[262220]: 2025-10-08 10:14:47.995 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  8 06:14:48 np0005475493 ovn_controller[153187]: 2025-10-08T10:14:48Z|00006|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:e6:b0:e0 10.100.0.3
Oct  8 06:14:48 np0005475493 ovn_controller[153187]: 2025-10-08T10:14:48Z|00007|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:e6:b0:e0 10.100.0.3
Oct  8 06:14:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] _maybe_adjust
Oct  8 06:14:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:14:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  8 06:14:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:14:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00034841348814872695 of space, bias 1.0, pg target 0.10452404644461809 quantized to 32 (current 32)
Oct  8 06:14:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:14:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 06:14:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:14:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 06:14:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:14:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Oct  8 06:14:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:14:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  8 06:14:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:14:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 06:14:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:14:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct  8 06:14:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:14:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  8 06:14:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:14:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 06:14:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:14:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  8 06:14:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:14:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  8 06:14:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  8 06:14:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  8 06:14:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  8 06:14:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  8 06:14:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  8 06:14:48 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 06:14:48 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 06:14:48 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 06:14:48 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 06:14:48 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v906: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 85 B/s wr, 69 op/s
Oct  8 06:14:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  8 06:14:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  8 06:14:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  8 06:14:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  8 06:14:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  8 06:14:48 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:14:48 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca000b4f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:14:48 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:14:48 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac90005240 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:14:48 np0005475493 nova_compute[262220]: 2025-10-08 10:14:48.705 2 DEBUG nova.network.neutron [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Updating instance_info_cache with network_info: [{"id": "be4ec274-2a90-48e8-bd51-fd01f3c659da", "address": "fa:16:3e:e6:b0:e0", "network": {"id": "834a886f-bb33-49ed-b47e-ef0308a38e89", "bridge": "br-int", "label": "tempest-network-smoke--1879322246", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.203", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0edb2c85b88b4b168a4b2e8c5ed4a05c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe4ec274-2a", "ovs_interfaceid": "be4ec274-2a90-48e8-bd51-fd01f3c659da", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  8 06:14:48 np0005475493 nova_compute[262220]: 2025-10-08 10:14:48.731 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Releasing lock "refresh_cache-ea469a2e-bf09-495c-9b5e-02ad38d32d40" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  8 06:14:48 np0005475493 nova_compute[262220]: 2025-10-08 10:14:48.732 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct  8 06:14:48 np0005475493 nova_compute[262220]: 2025-10-08 10:14:48.732 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  8 06:14:48 np0005475493 nova_compute[262220]: 2025-10-08 10:14:48.733 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  8 06:14:48 np0005475493 nova_compute[262220]: 2025-10-08 10:14:48.733 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  8 06:14:48 np0005475493 nova_compute[262220]: 2025-10-08 10:14:48.733 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct  8 06:14:48 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:14:48 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 06:14:48 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:14:48.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 06:14:48 np0005475493 podman[274571]: 2025-10-08 10:14:48.902051861 +0000 UTC m=+0.060436068 container health_status 96c17e36c3e57b043a764fc4d64e20610b8683e4881cc1857643eca7aeb37784 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  8 06:14:48 np0005475493 podman[274570]: 2025-10-08 10:14:48.917100776 +0000 UTC m=+0.076756474 container health_status 1ece418ccdf19a8019e8be8e665c938dc3c333817a211973536e2416f095c311 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct  8 06:14:49 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:14:49 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:14:49 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac94005350 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:14:49 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:14:49 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:14:49 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:14:49.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:14:50 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v907: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 132 op/s
Oct  8 06:14:50 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:14:50 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac6c002600 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:14:50 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:14:50 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca000b4f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:14:50 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:14:50 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 06:14:50 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:14:50.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 06:14:51 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:14:51 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c001ab0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:14:51 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:14:51 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:14:51 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:14:51.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:14:51 np0005475493 nova_compute[262220]: 2025-10-08 10:14:51.920 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  8 06:14:51 np0005475493 nova_compute[262220]: 2025-10-08 10:14:51.940 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Triggering sync for uuid ea469a2e-bf09-495c-9b5e-02ad38d32d40 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Oct  8 06:14:51 np0005475493 nova_compute[262220]: 2025-10-08 10:14:51.940 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Acquiring lock "ea469a2e-bf09-495c-9b5e-02ad38d32d40" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  8 06:14:51 np0005475493 nova_compute[262220]: 2025-10-08 10:14:51.941 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lock "ea469a2e-bf09-495c-9b5e-02ad38d32d40" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  8 06:14:51 np0005475493 nova_compute[262220]: 2025-10-08 10:14:51.979 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lock "ea469a2e-bf09-495c-9b5e-02ad38d32d40" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.039s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  8 06:14:52 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v908: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 306 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Oct  8 06:14:52 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:14:52 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_39] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac94005350 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:14:52 np0005475493 nova_compute[262220]: 2025-10-08 10:14:52.429 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  8 06:14:52 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:14:52 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac6c002600 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:14:52 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:14:52 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:14:52 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:14:52.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:14:52 np0005475493 nova_compute[262220]: 2025-10-08 10:14:52.987 2 INFO nova.compute.manager [None req-7f625ccc-5c89-4d62-996c-ca423229ac60 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Get console output
Oct  8 06:14:52 np0005475493 nova_compute[262220]: 2025-10-08 10:14:52.993 631 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Oct  8 06:14:53 np0005475493 nova_compute[262220]: 2025-10-08 10:14:52.998 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  8 06:14:53 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:14:53 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca000b4f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:14:53 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:14:53 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 06:14:53 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:14:53.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 06:14:54 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:14:54 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v909: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 306 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct  8 06:14:54 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:14:54 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c001ab0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:14:54 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:14:54 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c001ab0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:14:54 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:14:54 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:14:54 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:14:54.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:14:55 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:14:55 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac6c002600 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:14:55 np0005475493 nova_compute[262220]: 2025-10-08 10:14:55.199 2 DEBUG oslo_concurrency.lockutils [None req-5a883b22-8805-4173-b851-76a08557fdf1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Acquiring lock "interface-ea469a2e-bf09-495c-9b5e-02ad38d32d40-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  8 06:14:55 np0005475493 nova_compute[262220]: 2025-10-08 10:14:55.199 2 DEBUG oslo_concurrency.lockutils [None req-5a883b22-8805-4173-b851-76a08557fdf1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Lock "interface-ea469a2e-bf09-495c-9b5e-02ad38d32d40-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  8 06:14:55 np0005475493 nova_compute[262220]: 2025-10-08 10:14:55.200 2 DEBUG nova.objects.instance [None req-5a883b22-8805-4173-b851-76a08557fdf1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Lazy-loading 'flavor' on Instance uuid ea469a2e-bf09-495c-9b5e-02ad38d32d40 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  8 06:14:55 np0005475493 nova_compute[262220]: 2025-10-08 10:14:55.485 2 DEBUG nova.objects.instance [None req-5a883b22-8805-4173-b851-76a08557fdf1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Lazy-loading 'pci_requests' on Instance uuid ea469a2e-bf09-495c-9b5e-02ad38d32d40 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  8 06:14:55 np0005475493 nova_compute[262220]: 2025-10-08 10:14:55.497 2 DEBUG nova.network.neutron [None req-5a883b22-8805-4173-b851-76a08557fdf1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct  8 06:14:55 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:14:55 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:14:55 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:14:55.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:14:55 np0005475493 nova_compute[262220]: 2025-10-08 10:14:55.660 2 DEBUG nova.policy [None req-5a883b22-8805-4173-b851-76a08557fdf1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'd50b19166a7245e390a6e29682191263', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '0edb2c85b88b4b168a4b2e8c5ed4a05c', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct  8 06:14:55 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:14:55] "GET /metrics HTTP/1.1" 200 48477 "" "Prometheus/2.51.0"
Oct  8 06:14:55 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:14:55] "GET /metrics HTTP/1.1" 200 48477 "" "Prometheus/2.51.0"
Oct  8 06:14:56 np0005475493 nova_compute[262220]: 2025-10-08 10:14:56.129 2 DEBUG nova.network.neutron [None req-5a883b22-8805-4173-b851-76a08557fdf1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Successfully created port: 79d28498-fe9d-49dc-ad2c-bde432b239db _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct  8 06:14:56 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v910: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 306 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct  8 06:14:56 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:14:56 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac6c002600 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:14:56 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:14:56 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac70003e60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:14:56 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:14:56 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:14:56 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:14:56.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:14:57 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:14:57.154Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct  8 06:14:57 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:14:57.155Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:14:57 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:14:57 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_41] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca000b4f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:14:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:14:57.413 163175 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  8 06:14:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:14:57.414 163175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  8 06:14:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:14:57.415 163175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  8 06:14:57 np0005475493 nova_compute[262220]: 2025-10-08 10:14:57.431 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  8 06:14:57 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:14:57 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:14:57 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:14:57.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:14:57 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:14:57 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 06:14:58 np0005475493 nova_compute[262220]: 2025-10-08 10:14:58.000 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  8 06:14:58 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v911: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 306 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct  8 06:14:58 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:14:58 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac94005350 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:14:58 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:14:58 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac6c002600 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:14:58 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:14:58 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:14:58 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:14:58.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:14:59 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:14:59 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:14:59 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac70003e60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:14:59 np0005475493 nova_compute[262220]: 2025-10-08 10:14:59.337 2 DEBUG nova.network.neutron [None req-5a883b22-8805-4173-b851-76a08557fdf1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Successfully updated port: 79d28498-fe9d-49dc-ad2c-bde432b239db _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct  8 06:14:59 np0005475493 nova_compute[262220]: 2025-10-08 10:14:59.402 2 DEBUG oslo_concurrency.lockutils [None req-5a883b22-8805-4173-b851-76a08557fdf1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Acquiring lock "refresh_cache-ea469a2e-bf09-495c-9b5e-02ad38d32d40" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  8 06:14:59 np0005475493 nova_compute[262220]: 2025-10-08 10:14:59.402 2 DEBUG oslo_concurrency.lockutils [None req-5a883b22-8805-4173-b851-76a08557fdf1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Acquired lock "refresh_cache-ea469a2e-bf09-495c-9b5e-02ad38d32d40" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  8 06:14:59 np0005475493 nova_compute[262220]: 2025-10-08 10:14:59.403 2 DEBUG nova.network.neutron [None req-5a883b22-8805-4173-b851-76a08557fdf1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct  8 06:14:59 np0005475493 nova_compute[262220]: 2025-10-08 10:14:59.436 2 DEBUG nova.compute.manager [req-eb6a668c-e2d8-4e4e-bc69-70e5d18e887d req-e553ac48-3e9d-4c34-87f2-509382fa0272 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Received event network-changed-79d28498-fe9d-49dc-ad2c-bde432b239db external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  8 06:14:59 np0005475493 nova_compute[262220]: 2025-10-08 10:14:59.438 2 DEBUG nova.compute.manager [req-eb6a668c-e2d8-4e4e-bc69-70e5d18e887d req-e553ac48-3e9d-4c34-87f2-509382fa0272 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Refreshing instance network info cache due to event network-changed-79d28498-fe9d-49dc-ad2c-bde432b239db. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct  8 06:14:59 np0005475493 nova_compute[262220]: 2025-10-08 10:14:59.439 2 DEBUG oslo_concurrency.lockutils [req-eb6a668c-e2d8-4e4e-bc69-70e5d18e887d req-e553ac48-3e9d-4c34-87f2-509382fa0272 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Acquiring lock "refresh_cache-ea469a2e-bf09-495c-9b5e-02ad38d32d40" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  8 06:14:59 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:14:59 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:14:59 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:14:59.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:15:00 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v912: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 306 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct  8 06:15:00 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:15:00 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_41] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca000b4f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:15:00 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:15:00 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac94005350 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:15:00 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:15:00 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 06:15:00 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:15:00.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 06:15:01 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:15:01 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac6c002600 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:15:01 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:15:01 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:15:01 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:15:01.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:15:01 np0005475493 podman[274628]: 2025-10-08 10:15:01.889708011 +0000 UTC m=+0.054823607 container health_status 2ac58bf9d2c7421f24478941b0f23af12cc30298ac84467db16bf52ecb1157c3 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=iscsid, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct  8 06:15:02 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v913: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 12 KiB/s wr, 1 op/s
Oct  8 06:15:02 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:15:02 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac70003e60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:15:02 np0005475493 nova_compute[262220]: 2025-10-08 10:15:02.433 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  8 06:15:02 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:15:02 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_41] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca000b4f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:15:02 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:15:02 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:15:02 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:15:02.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:15:02 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  8 06:15:02 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  8 06:15:03 np0005475493 nova_compute[262220]: 2025-10-08 10:15:03.002 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  8 06:15:03 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:15:03 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac94005370 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:15:03 np0005475493 nova_compute[262220]: 2025-10-08 10:15:03.260 2 DEBUG nova.network.neutron [None req-5a883b22-8805-4173-b851-76a08557fdf1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Updating instance_info_cache with network_info: [{"id": "be4ec274-2a90-48e8-bd51-fd01f3c659da", "address": "fa:16:3e:e6:b0:e0", "network": {"id": "834a886f-bb33-49ed-b47e-ef0308a38e89", "bridge": "br-int", "label": "tempest-network-smoke--1879322246", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.203", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0edb2c85b88b4b168a4b2e8c5ed4a05c", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe4ec274-2a", "ovs_interfaceid": "be4ec274-2a90-48e8-bd51-fd01f3c659da", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "79d28498-fe9d-49dc-ad2c-bde432b239db", "address": "fa:16:3e:40:4d:66", "network": {"id": "0a28a475-c59d-4526-93af-b8af40052e5c", "bridge": "br-int", "label": "tempest-network-smoke--1745814783", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"0edb2c85b88b4b168a4b2e8c5ed4a05c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79d28498-fe", "ovs_interfaceid": "79d28498-fe9d-49dc-ad2c-bde432b239db", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  8 06:15:03 np0005475493 nova_compute[262220]: 2025-10-08 10:15:03.287 2 DEBUG oslo_concurrency.lockutils [None req-5a883b22-8805-4173-b851-76a08557fdf1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Releasing lock "refresh_cache-ea469a2e-bf09-495c-9b5e-02ad38d32d40" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  8 06:15:03 np0005475493 nova_compute[262220]: 2025-10-08 10:15:03.288 2 DEBUG oslo_concurrency.lockutils [req-eb6a668c-e2d8-4e4e-bc69-70e5d18e887d req-e553ac48-3e9d-4c34-87f2-509382fa0272 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Acquired lock "refresh_cache-ea469a2e-bf09-495c-9b5e-02ad38d32d40" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  8 06:15:03 np0005475493 nova_compute[262220]: 2025-10-08 10:15:03.288 2 DEBUG nova.network.neutron [req-eb6a668c-e2d8-4e4e-bc69-70e5d18e887d req-e553ac48-3e9d-4c34-87f2-509382fa0272 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Refreshing network info cache for port 79d28498-fe9d-49dc-ad2c-bde432b239db _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct  8 06:15:03 np0005475493 nova_compute[262220]: 2025-10-08 10:15:03.292 2 DEBUG nova.virt.libvirt.vif [None req-5a883b22-8805-4173-b851-76a08557fdf1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-08T10:14:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1473882269',display_name='tempest-TestNetworkBasicOps-server-1473882269',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1473882269',id=6,image_ref='e5994bac-385d-4cfe-962e-386aa0559983',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLR6MyJBOMYp2wYGaL6G2mEbJcrZztqNeLLTecSXpMa+10UdVkrFMZxX+qiOC0ccFZIJleCiHIYc6JXNFg7vRmgJ0JtTU6W+KAYc8u1JxRO51IwGJ30ByO68fx1sTbOqEg==',key_name='tempest-TestNetworkBasicOps-1715641436',keypairs=<?>,launch_index=0,launched_at=2025-10-08T10:14:35Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='0edb2c85b88b4b168a4b2e8c5ed4a05c',ramdisk_id='',reservation_id='r-b2owsx2f',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e5994bac-385d-4cfe-962e-386aa0559983',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-139500885',owner_user_name='tempest-TestNetworkBasicOps-139500885-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-08T10:14:35Z,user_data=None,user_id='d50b19166a7245e390a6e29682191263',uuid=ea469a2e-bf09-495c-9b5e-02ad38d32d40,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "79d28498-fe9d-49dc-ad2c-bde432b239db", "address": "fa:16:3e:40:4d:66", "network": {"id": "0a28a475-c59d-4526-93af-b8af40052e5c", "bridge": "br-int", "label": "tempest-network-smoke--1745814783", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.23", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0edb2c85b88b4b168a4b2e8c5ed4a05c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79d28498-fe", "ovs_interfaceid": "79d28498-fe9d-49dc-ad2c-bde432b239db", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct  8 06:15:03 np0005475493 nova_compute[262220]: 2025-10-08 10:15:03.292 2 DEBUG nova.network.os_vif_util [None req-5a883b22-8805-4173-b851-76a08557fdf1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Converting VIF {"id": "79d28498-fe9d-49dc-ad2c-bde432b239db", "address": "fa:16:3e:40:4d:66", "network": {"id": "0a28a475-c59d-4526-93af-b8af40052e5c", "bridge": "br-int", "label": "tempest-network-smoke--1745814783", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0edb2c85b88b4b168a4b2e8c5ed4a05c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79d28498-fe", "ovs_interfaceid": "79d28498-fe9d-49dc-ad2c-bde432b239db", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct  8 06:15:03 np0005475493 nova_compute[262220]: 2025-10-08 10:15:03.294 2 DEBUG nova.network.os_vif_util [None req-5a883b22-8805-4173-b851-76a08557fdf1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:40:4d:66,bridge_name='br-int',has_traffic_filtering=True,id=79d28498-fe9d-49dc-ad2c-bde432b239db,network=Network(0a28a475-c59d-4526-93af-b8af40052e5c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap79d28498-fe') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  8 06:15:03 np0005475493 nova_compute[262220]: 2025-10-08 10:15:03.294 2 DEBUG os_vif [None req-5a883b22-8805-4173-b851-76a08557fdf1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:40:4d:66,bridge_name='br-int',has_traffic_filtering=True,id=79d28498-fe9d-49dc-ad2c-bde432b239db,network=Network(0a28a475-c59d-4526-93af-b8af40052e5c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap79d28498-fe') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  8 06:15:03 np0005475493 nova_compute[262220]: 2025-10-08 10:15:03.295 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:15:03 np0005475493 nova_compute[262220]: 2025-10-08 10:15:03.296 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  8 06:15:03 np0005475493 nova_compute[262220]: 2025-10-08 10:15:03.296 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  8 06:15:03 np0005475493 nova_compute[262220]: 2025-10-08 10:15:03.303 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:15:03 np0005475493 nova_compute[262220]: 2025-10-08 10:15:03.304 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap79d28498-fe, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  8 06:15:03 np0005475493 nova_compute[262220]: 2025-10-08 10:15:03.304 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap79d28498-fe, col_values=(('external_ids', {'iface-id': '79d28498-fe9d-49dc-ad2c-bde432b239db', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:40:4d:66', 'vm-uuid': 'ea469a2e-bf09-495c-9b5e-02ad38d32d40'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  8 06:15:03 np0005475493 nova_compute[262220]: 2025-10-08 10:15:03.306 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:15:03 np0005475493 NetworkManager[44872]: <info>  [1759918503.3097] manager: (tap79d28498-fe): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/39)
Oct  8 06:15:03 np0005475493 nova_compute[262220]: 2025-10-08 10:15:03.313 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  8 06:15:03 np0005475493 nova_compute[262220]: 2025-10-08 10:15:03.316 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:15:03 np0005475493 nova_compute[262220]: 2025-10-08 10:15:03.318 2 INFO os_vif [None req-5a883b22-8805-4173-b851-76a08557fdf1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:40:4d:66,bridge_name='br-int',has_traffic_filtering=True,id=79d28498-fe9d-49dc-ad2c-bde432b239db,network=Network(0a28a475-c59d-4526-93af-b8af40052e5c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap79d28498-fe')#033[00m
Oct  8 06:15:03 np0005475493 nova_compute[262220]: 2025-10-08 10:15:03.319 2 DEBUG nova.virt.libvirt.vif [None req-5a883b22-8805-4173-b851-76a08557fdf1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-08T10:14:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1473882269',display_name='tempest-TestNetworkBasicOps-server-1473882269',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1473882269',id=6,image_ref='e5994bac-385d-4cfe-962e-386aa0559983',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLR6MyJBOMYp2wYGaL6G2mEbJcrZztqNeLLTecSXpMa+10UdVkrFMZxX+qiOC0ccFZIJleCiHIYc6JXNFg7vRmgJ0JtTU6W+KAYc8u1JxRO51IwGJ30ByO68fx1sTbOqEg==',key_name='tempest-TestNetworkBasicOps-1715641436',keypairs=<?>,launch_index=0,launched_at=2025-10-08T10:14:35Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='0edb2c85b88b4b168a4b2e8c5ed4a05c',ramdisk_id='',reservation_id='r-b2owsx2f',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e5994bac-385d-4cfe-962e-386aa0559983',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-139500885',owner_user_name='tempest-TestNetworkBasicOps-139500885-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-08T10:14:35Z,user_data=None,user_id='d50b19166a7245e390a6e29682191263',uuid=ea469a2e-bf09-495c-9b5e-02ad38d32d40,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "79d28498-fe9d-49dc-ad2c-bde432b239db", "address": "fa:16:3e:40:4d:66", "network": {"id": "0a28a475-c59d-4526-93af-b8af40052e5c", "bridge": "br-int", "label": "tempest-network-smoke--1745814783", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.23", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0edb2c85b88b4b168a4b2e8c5ed4a05c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79d28498-fe", "ovs_interfaceid": "79d28498-fe9d-49dc-ad2c-bde432b239db", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  8 06:15:03 np0005475493 nova_compute[262220]: 2025-10-08 10:15:03.320 2 DEBUG nova.network.os_vif_util [None req-5a883b22-8805-4173-b851-76a08557fdf1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Converting VIF {"id": "79d28498-fe9d-49dc-ad2c-bde432b239db", "address": "fa:16:3e:40:4d:66", "network": {"id": "0a28a475-c59d-4526-93af-b8af40052e5c", "bridge": "br-int", "label": "tempest-network-smoke--1745814783", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0edb2c85b88b4b168a4b2e8c5ed4a05c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79d28498-fe", "ovs_interfaceid": "79d28498-fe9d-49dc-ad2c-bde432b239db", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  8 06:15:03 np0005475493 nova_compute[262220]: 2025-10-08 10:15:03.320 2 DEBUG nova.network.os_vif_util [None req-5a883b22-8805-4173-b851-76a08557fdf1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:40:4d:66,bridge_name='br-int',has_traffic_filtering=True,id=79d28498-fe9d-49dc-ad2c-bde432b239db,network=Network(0a28a475-c59d-4526-93af-b8af40052e5c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap79d28498-fe') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  8 06:15:03 np0005475493 nova_compute[262220]: 2025-10-08 10:15:03.324 2 DEBUG nova.virt.libvirt.guest [None req-5a883b22-8805-4173-b851-76a08557fdf1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] attach device xml: <interface type="ethernet">
Oct  8 06:15:03 np0005475493 nova_compute[262220]:  <mac address="fa:16:3e:40:4d:66"/>
Oct  8 06:15:03 np0005475493 nova_compute[262220]:  <model type="virtio"/>
Oct  8 06:15:03 np0005475493 nova_compute[262220]:  <driver name="vhost" rx_queue_size="512"/>
Oct  8 06:15:03 np0005475493 nova_compute[262220]:  <mtu size="1442"/>
Oct  8 06:15:03 np0005475493 nova_compute[262220]:  <target dev="tap79d28498-fe"/>
Oct  8 06:15:03 np0005475493 nova_compute[262220]: </interface>
Oct  8 06:15:03 np0005475493 nova_compute[262220]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Oct  8 06:15:03 np0005475493 kernel: tap79d28498-fe: entered promiscuous mode
Oct  8 06:15:03 np0005475493 nova_compute[262220]: 2025-10-08 10:15:03.347 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:15:03 np0005475493 NetworkManager[44872]: <info>  [1759918503.3482] manager: (tap79d28498-fe): new Tun device (/org/freedesktop/NetworkManager/Devices/40)
Oct  8 06:15:03 np0005475493 ovn_controller[153187]: 2025-10-08T10:15:03Z|00044|binding|INFO|Claiming lport 79d28498-fe9d-49dc-ad2c-bde432b239db for this chassis.
Oct  8 06:15:03 np0005475493 ovn_controller[153187]: 2025-10-08T10:15:03Z|00045|binding|INFO|79d28498-fe9d-49dc-ad2c-bde432b239db: Claiming fa:16:3e:40:4d:66 10.100.0.23
Oct  8 06:15:03 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:15:03.389 163175 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:40:4d:66 10.100.0.23'], port_security=['fa:16:3e:40:4d:66 10.100.0.23'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.23/28', 'neutron:device_id': 'ea469a2e-bf09-495c-9b5e-02ad38d32d40', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0a28a475-c59d-4526-93af-b8af40052e5c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0edb2c85b88b4b168a4b2e8c5ed4a05c', 'neutron:revision_number': '2', 'neutron:security_group_ids': '7f5008fb-e9a5-4fed-867f-172652283a31', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f6ba97cc-1c15-47ba-aa89-c964fcf23523, chassis=[<ovs.db.idl.Row object at 0x7f191f105a60>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f191f105a60>], logical_port=79d28498-fe9d-49dc-ad2c-bde432b239db) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  8 06:15:03 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:15:03.390 163175 INFO neutron.agent.ovn.metadata.agent [-] Port 79d28498-fe9d-49dc-ad2c-bde432b239db in datapath 0a28a475-c59d-4526-93af-b8af40052e5c bound to our chassis#033[00m
Oct  8 06:15:03 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:15:03.391 163175 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0a28a475-c59d-4526-93af-b8af40052e5c#033[00m
Oct  8 06:15:03 np0005475493 systemd-udevd[274656]: Network interface NamePolicy= disabled on kernel command line.
Oct  8 06:15:03 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:15:03.404 267781 DEBUG oslo.privsep.daemon [-] privsep: reply[82fb71ff-2698-4baf-97de-816e3a2c19e3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  8 06:15:03 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:15:03.406 163175 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap0a28a475-c1 in ovnmeta-0a28a475-c59d-4526-93af-b8af40052e5c namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  8 06:15:03 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:15:03.408 267781 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap0a28a475-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  8 06:15:03 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:15:03.408 267781 DEBUG oslo.privsep.daemon [-] privsep: reply[21d6f22f-341a-4800-932d-5d7a1273978e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  8 06:15:03 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:15:03.409 267781 DEBUG oslo.privsep.daemon [-] privsep: reply[6b89fb56-6119-45e8-85f1-2d113c79b673]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  8 06:15:03 np0005475493 NetworkManager[44872]: <info>  [1759918503.4105] device (tap79d28498-fe): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  8 06:15:03 np0005475493 NetworkManager[44872]: <info>  [1759918503.4120] device (tap79d28498-fe): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  8 06:15:03 np0005475493 nova_compute[262220]: 2025-10-08 10:15:03.413 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:15:03 np0005475493 ovn_controller[153187]: 2025-10-08T10:15:03Z|00046|binding|INFO|Setting lport 79d28498-fe9d-49dc-ad2c-bde432b239db ovn-installed in OVS
Oct  8 06:15:03 np0005475493 ovn_controller[153187]: 2025-10-08T10:15:03Z|00047|binding|INFO|Setting lport 79d28498-fe9d-49dc-ad2c-bde432b239db up in Southbound
Oct  8 06:15:03 np0005475493 nova_compute[262220]: 2025-10-08 10:15:03.416 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:15:03 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:15:03.425 163290 DEBUG oslo.privsep.daemon [-] privsep: reply[95dc8cc8-f0c7-4be5-86a2-0ed2f86145e0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  8 06:15:03 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:15:03.439 267781 DEBUG oslo.privsep.daemon [-] privsep: reply[1d0a5081-248c-4b65-b54d-68665660e67b]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  8 06:15:03 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:15:03.474 267799 DEBUG oslo.privsep.daemon [-] privsep: reply[9f2fc8d8-5713-4d45-aded-88f9c17608b7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  8 06:15:03 np0005475493 NetworkManager[44872]: <info>  [1759918503.4804] manager: (tap0a28a475-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/41)
Oct  8 06:15:03 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:15:03.480 267781 DEBUG oslo.privsep.daemon [-] privsep: reply[163d6405-13bd-4593-b59f-8953f8c537a4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  8 06:15:03 np0005475493 systemd-udevd[274660]: Network interface NamePolicy= disabled on kernel command line.
Oct  8 06:15:03 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:15:03.515 267799 DEBUG oslo.privsep.daemon [-] privsep: reply[54ef7e95-ffd5-4e61-b254-81edb31ca074]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  8 06:15:03 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:15:03.520 267799 DEBUG oslo.privsep.daemon [-] privsep: reply[bf42bf6f-cc63-4879-9963-eee14eb5a69b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  8 06:15:03 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:15:03 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:15:03 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:15:03.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:15:03 np0005475493 NetworkManager[44872]: <info>  [1759918503.5468] device (tap0a28a475-c0): carrier: link connected
Oct  8 06:15:03 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:15:03.554 267799 DEBUG oslo.privsep.daemon [-] privsep: reply[457d7982-2c02-42cb-9515-563cab084b97]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  8 06:15:03 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:15:03.579 267781 DEBUG oslo.privsep.daemon [-] privsep: reply[10855808-d422-415e-863e-5310ef749217]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0a28a475-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c1:1f:72'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 20], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 446217, 'reachable_time': 31275, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 274683, 'error': None, 'target': 'ovnmeta-0a28a475-c59d-4526-93af-b8af40052e5c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  8 06:15:03 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:15:03.602 267781 DEBUG oslo.privsep.daemon [-] privsep: reply[2d26ba25-9e79-4149-b272-6bb39a8a495e]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fec1:1f72'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 446217, 'tstamp': 446217}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 274684, 'error': None, 'target': 'ovnmeta-0a28a475-c59d-4526-93af-b8af40052e5c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  8 06:15:03 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:15:03.627 267781 DEBUG oslo.privsep.daemon [-] privsep: reply[06527ae1-1f19-4d31-8859-1b6564aae41b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0a28a475-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c1:1f:72'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 20], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 446217, 'reachable_time': 31275, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 274685, 'error': None, 'target': 'ovnmeta-0a28a475-c59d-4526-93af-b8af40052e5c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  8 06:15:03 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:15:03.673 267781 DEBUG oslo.privsep.daemon [-] privsep: reply[5eea7432-d8f5-41f5-aaa3-97ccfcc7e9de]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  8 06:15:03 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:15:03.746 267781 DEBUG oslo.privsep.daemon [-] privsep: reply[19875fb4-dcb6-4b5f-8687-6f8adc33c7e0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  8 06:15:03 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:15:03.748 163175 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0a28a475-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  8 06:15:03 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:15:03.748 163175 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  8 06:15:03 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:15:03.749 163175 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0a28a475-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  8 06:15:03 np0005475493 kernel: tap0a28a475-c0: entered promiscuous mode
Oct  8 06:15:03 np0005475493 nova_compute[262220]: 2025-10-08 10:15:03.751 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:15:03 np0005475493 NetworkManager[44872]: <info>  [1759918503.7518] manager: (tap0a28a475-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/42)
Oct  8 06:15:03 np0005475493 nova_compute[262220]: 2025-10-08 10:15:03.753 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:15:03 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:15:03.754 163175 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0a28a475-c0, col_values=(('external_ids', {'iface-id': '5250d729-6010-4688-85e3-ca6a96907e0d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  8 06:15:03 np0005475493 nova_compute[262220]: 2025-10-08 10:15:03.755 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:15:03 np0005475493 ovn_controller[153187]: 2025-10-08T10:15:03Z|00048|binding|INFO|Releasing lport 5250d729-6010-4688-85e3-ca6a96907e0d from this chassis (sb_readonly=0)
Oct  8 06:15:03 np0005475493 nova_compute[262220]: 2025-10-08 10:15:03.770 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:15:03 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:15:03.771 163175 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/0a28a475-c59d-4526-93af-b8af40052e5c.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/0a28a475-c59d-4526-93af-b8af40052e5c.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  8 06:15:03 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:15:03.772 267781 DEBUG oslo.privsep.daemon [-] privsep: reply[f3fd76ce-dc3a-40b9-b837-665108781cb7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  8 06:15:03 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:15:03.773 163175 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  8 06:15:03 np0005475493 ovn_metadata_agent[163169]: global
Oct  8 06:15:03 np0005475493 ovn_metadata_agent[163169]:    log         /dev/log local0 debug
Oct  8 06:15:03 np0005475493 ovn_metadata_agent[163169]:    log-tag     haproxy-metadata-proxy-0a28a475-c59d-4526-93af-b8af40052e5c
Oct  8 06:15:03 np0005475493 ovn_metadata_agent[163169]:    user        root
Oct  8 06:15:03 np0005475493 ovn_metadata_agent[163169]:    group       root
Oct  8 06:15:03 np0005475493 ovn_metadata_agent[163169]:    maxconn     1024
Oct  8 06:15:03 np0005475493 ovn_metadata_agent[163169]:    pidfile     /var/lib/neutron/external/pids/0a28a475-c59d-4526-93af-b8af40052e5c.pid.haproxy
Oct  8 06:15:03 np0005475493 ovn_metadata_agent[163169]:    daemon
Oct  8 06:15:03 np0005475493 ovn_metadata_agent[163169]: 
Oct  8 06:15:03 np0005475493 ovn_metadata_agent[163169]: defaults
Oct  8 06:15:03 np0005475493 ovn_metadata_agent[163169]:    log global
Oct  8 06:15:03 np0005475493 ovn_metadata_agent[163169]:    mode http
Oct  8 06:15:03 np0005475493 ovn_metadata_agent[163169]:    option httplog
Oct  8 06:15:03 np0005475493 ovn_metadata_agent[163169]:    option dontlognull
Oct  8 06:15:03 np0005475493 ovn_metadata_agent[163169]:    option http-server-close
Oct  8 06:15:03 np0005475493 ovn_metadata_agent[163169]:    option forwardfor
Oct  8 06:15:03 np0005475493 ovn_metadata_agent[163169]:    retries                 3
Oct  8 06:15:03 np0005475493 ovn_metadata_agent[163169]:    timeout http-request    30s
Oct  8 06:15:03 np0005475493 ovn_metadata_agent[163169]:    timeout connect         30s
Oct  8 06:15:03 np0005475493 ovn_metadata_agent[163169]:    timeout client          32s
Oct  8 06:15:03 np0005475493 ovn_metadata_agent[163169]:    timeout server          32s
Oct  8 06:15:03 np0005475493 ovn_metadata_agent[163169]:    timeout http-keep-alive 30s
Oct  8 06:15:03 np0005475493 ovn_metadata_agent[163169]: 
Oct  8 06:15:03 np0005475493 ovn_metadata_agent[163169]: 
Oct  8 06:15:03 np0005475493 ovn_metadata_agent[163169]: listen listener
Oct  8 06:15:03 np0005475493 ovn_metadata_agent[163169]:    bind 169.254.169.254:80
Oct  8 06:15:03 np0005475493 ovn_metadata_agent[163169]:    server metadata /var/lib/neutron/metadata_proxy
Oct  8 06:15:03 np0005475493 ovn_metadata_agent[163169]:    http-request add-header X-OVN-Network-ID 0a28a475-c59d-4526-93af-b8af40052e5c
Oct  8 06:15:03 np0005475493 ovn_metadata_agent[163169]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  8 06:15:03 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:15:03.774 163175 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-0a28a475-c59d-4526-93af-b8af40052e5c', 'env', 'PROCESS_TAG=haproxy-0a28a475-c59d-4526-93af-b8af40052e5c', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/0a28a475-c59d-4526-93af-b8af40052e5c.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  8 06:15:03 np0005475493 nova_compute[262220]: 2025-10-08 10:15:03.872 2 DEBUG nova.virt.libvirt.driver [None req-5a883b22-8805-4173-b851-76a08557fdf1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  8 06:15:03 np0005475493 nova_compute[262220]: 2025-10-08 10:15:03.873 2 DEBUG nova.virt.libvirt.driver [None req-5a883b22-8805-4173-b851-76a08557fdf1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  8 06:15:03 np0005475493 nova_compute[262220]: 2025-10-08 10:15:03.873 2 DEBUG nova.virt.libvirt.driver [None req-5a883b22-8805-4173-b851-76a08557fdf1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] No VIF found with MAC fa:16:3e:e6:b0:e0, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  8 06:15:03 np0005475493 nova_compute[262220]: 2025-10-08 10:15:03.873 2 DEBUG nova.virt.libvirt.driver [None req-5a883b22-8805-4173-b851-76a08557fdf1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] No VIF found with MAC fa:16:3e:40:4d:66, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  8 06:15:03 np0005475493 nova_compute[262220]: 2025-10-08 10:15:03.959 2 DEBUG nova.virt.libvirt.guest [None req-5a883b22-8805-4173-b851-76a08557fdf1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  8 06:15:03 np0005475493 nova_compute[262220]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  8 06:15:03 np0005475493 nova_compute[262220]:  <nova:name>tempest-TestNetworkBasicOps-server-1473882269</nova:name>
Oct  8 06:15:03 np0005475493 nova_compute[262220]:  <nova:creationTime>2025-10-08 10:15:03</nova:creationTime>
Oct  8 06:15:03 np0005475493 nova_compute[262220]:  <nova:flavor name="m1.nano">
Oct  8 06:15:03 np0005475493 nova_compute[262220]:    <nova:memory>128</nova:memory>
Oct  8 06:15:03 np0005475493 nova_compute[262220]:    <nova:disk>1</nova:disk>
Oct  8 06:15:03 np0005475493 nova_compute[262220]:    <nova:swap>0</nova:swap>
Oct  8 06:15:03 np0005475493 nova_compute[262220]:    <nova:ephemeral>0</nova:ephemeral>
Oct  8 06:15:03 np0005475493 nova_compute[262220]:    <nova:vcpus>1</nova:vcpus>
Oct  8 06:15:03 np0005475493 nova_compute[262220]:  </nova:flavor>
Oct  8 06:15:03 np0005475493 nova_compute[262220]:  <nova:owner>
Oct  8 06:15:03 np0005475493 nova_compute[262220]:    <nova:user uuid="d50b19166a7245e390a6e29682191263">tempest-TestNetworkBasicOps-139500885-project-member</nova:user>
Oct  8 06:15:03 np0005475493 nova_compute[262220]:    <nova:project uuid="0edb2c85b88b4b168a4b2e8c5ed4a05c">tempest-TestNetworkBasicOps-139500885</nova:project>
Oct  8 06:15:03 np0005475493 nova_compute[262220]:  </nova:owner>
Oct  8 06:15:03 np0005475493 nova_compute[262220]:  <nova:root type="image" uuid="e5994bac-385d-4cfe-962e-386aa0559983"/>
Oct  8 06:15:03 np0005475493 nova_compute[262220]:  <nova:ports>
Oct  8 06:15:03 np0005475493 nova_compute[262220]:    <nova:port uuid="be4ec274-2a90-48e8-bd51-fd01f3c659da">
Oct  8 06:15:03 np0005475493 nova_compute[262220]:      <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct  8 06:15:03 np0005475493 nova_compute[262220]:    </nova:port>
Oct  8 06:15:03 np0005475493 nova_compute[262220]:    <nova:port uuid="79d28498-fe9d-49dc-ad2c-bde432b239db">
Oct  8 06:15:03 np0005475493 nova_compute[262220]:      <nova:ip type="fixed" address="10.100.0.23" ipVersion="4"/>
Oct  8 06:15:03 np0005475493 nova_compute[262220]:    </nova:port>
Oct  8 06:15:03 np0005475493 nova_compute[262220]:  </nova:ports>
Oct  8 06:15:03 np0005475493 nova_compute[262220]: </nova:instance>
Oct  8 06:15:03 np0005475493 nova_compute[262220]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Oct  8 06:15:04 np0005475493 nova_compute[262220]: 2025-10-08 10:15:04.074 2 DEBUG oslo_concurrency.lockutils [None req-5a883b22-8805-4173-b851-76a08557fdf1 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Lock "interface-ea469a2e-bf09-495c-9b5e-02ad38d32d40-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 8.875s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  8 06:15:04 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:15:04 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v914: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 14 KiB/s wr, 1 op/s
Oct  8 06:15:04 np0005475493 podman[274716]: 2025-10-08 10:15:04.145279561 +0000 UTC m=+0.026761572 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct  8 06:15:04 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:15:04 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac6c002600 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:15:04 np0005475493 podman[274716]: 2025-10-08 10:15:04.321208249 +0000 UTC m=+0.202690230 container create 3f635e95f7ebf91f0c654612ee8273fdabac0234b29364d136bfe358d73eeb52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-0a28a475-c59d-4526-93af-b8af40052e5c, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct  8 06:15:04 np0005475493 systemd[1]: Started libpod-conmon-3f635e95f7ebf91f0c654612ee8273fdabac0234b29364d136bfe358d73eeb52.scope.
Oct  8 06:15:04 np0005475493 systemd[1]: Started libcrun container.
Oct  8 06:15:04 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/915dac930a5508f0d71bb51887deafacf6554c7ddc11a4e1d1f27258efcfd64d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  8 06:15:04 np0005475493 podman[274716]: 2025-10-08 10:15:04.509825865 +0000 UTC m=+0.391307866 container init 3f635e95f7ebf91f0c654612ee8273fdabac0234b29364d136bfe358d73eeb52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-0a28a475-c59d-4526-93af-b8af40052e5c, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Oct  8 06:15:04 np0005475493 podman[274716]: 2025-10-08 10:15:04.51555359 +0000 UTC m=+0.397035571 container start 3f635e95f7ebf91f0c654612ee8273fdabac0234b29364d136bfe358d73eeb52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-0a28a475-c59d-4526-93af-b8af40052e5c, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  8 06:15:04 np0005475493 neutron-haproxy-ovnmeta-0a28a475-c59d-4526-93af-b8af40052e5c[274732]: [NOTICE]   (274736) : New worker (274738) forked
Oct  8 06:15:04 np0005475493 neutron-haproxy-ovnmeta-0a28a475-c59d-4526-93af-b8af40052e5c[274732]: [NOTICE]   (274736) : Loading success.
Oct  8 06:15:04 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:15:04 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac70003e60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:15:04 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:15:04 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:15:04 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:15:04.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:15:04 np0005475493 nova_compute[262220]: 2025-10-08 10:15:04.759 2 DEBUG nova.compute.manager [req-5eb48344-e4bf-4805-8340-a3658783aea3 req-2968b577-c6b9-445d-80e3-78e3ab6aec28 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Received event network-vif-plugged-79d28498-fe9d-49dc-ad2c-bde432b239db external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  8 06:15:04 np0005475493 nova_compute[262220]: 2025-10-08 10:15:04.760 2 DEBUG oslo_concurrency.lockutils [req-5eb48344-e4bf-4805-8340-a3658783aea3 req-2968b577-c6b9-445d-80e3-78e3ab6aec28 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Acquiring lock "ea469a2e-bf09-495c-9b5e-02ad38d32d40-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  8 06:15:04 np0005475493 nova_compute[262220]: 2025-10-08 10:15:04.760 2 DEBUG oslo_concurrency.lockutils [req-5eb48344-e4bf-4805-8340-a3658783aea3 req-2968b577-c6b9-445d-80e3-78e3ab6aec28 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Lock "ea469a2e-bf09-495c-9b5e-02ad38d32d40-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  8 06:15:04 np0005475493 nova_compute[262220]: 2025-10-08 10:15:04.760 2 DEBUG oslo_concurrency.lockutils [req-5eb48344-e4bf-4805-8340-a3658783aea3 req-2968b577-c6b9-445d-80e3-78e3ab6aec28 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Lock "ea469a2e-bf09-495c-9b5e-02ad38d32d40-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  8 06:15:04 np0005475493 nova_compute[262220]: 2025-10-08 10:15:04.761 2 DEBUG nova.compute.manager [req-5eb48344-e4bf-4805-8340-a3658783aea3 req-2968b577-c6b9-445d-80e3-78e3ab6aec28 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] No waiting events found dispatching network-vif-plugged-79d28498-fe9d-49dc-ad2c-bde432b239db pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  8 06:15:04 np0005475493 nova_compute[262220]: 2025-10-08 10:15:04.761 2 WARNING nova.compute.manager [req-5eb48344-e4bf-4805-8340-a3658783aea3 req-2968b577-c6b9-445d-80e3-78e3ab6aec28 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Received unexpected event network-vif-plugged-79d28498-fe9d-49dc-ad2c-bde432b239db for instance with vm_state active and task_state None.#033[00m
Oct  8 06:15:04 np0005475493 ovn_controller[153187]: 2025-10-08T10:15:04Z|00008|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:40:4d:66 10.100.0.23
Oct  8 06:15:04 np0005475493 ovn_controller[153187]: 2025-10-08T10:15:04Z|00009|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:40:4d:66 10.100.0.23
Oct  8 06:15:05 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:15:05 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_41] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca000b4f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:15:05 np0005475493 nova_compute[262220]: 2025-10-08 10:15:05.319 2 DEBUG nova.network.neutron [req-eb6a668c-e2d8-4e4e-bc69-70e5d18e887d req-e553ac48-3e9d-4c34-87f2-509382fa0272 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Updated VIF entry in instance network info cache for port 79d28498-fe9d-49dc-ad2c-bde432b239db. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  8 06:15:05 np0005475493 nova_compute[262220]: 2025-10-08 10:15:05.319 2 DEBUG nova.network.neutron [req-eb6a668c-e2d8-4e4e-bc69-70e5d18e887d req-e553ac48-3e9d-4c34-87f2-509382fa0272 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Updating instance_info_cache with network_info: [{"id": "be4ec274-2a90-48e8-bd51-fd01f3c659da", "address": "fa:16:3e:e6:b0:e0", "network": {"id": "834a886f-bb33-49ed-b47e-ef0308a38e89", "bridge": "br-int", "label": "tempest-network-smoke--1879322246", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.203", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0edb2c85b88b4b168a4b2e8c5ed4a05c", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe4ec274-2a", "ovs_interfaceid": "be4ec274-2a90-48e8-bd51-fd01f3c659da", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "79d28498-fe9d-49dc-ad2c-bde432b239db", "address": "fa:16:3e:40:4d:66", "network": {"id": "0a28a475-c59d-4526-93af-b8af40052e5c", "bridge": "br-int", "label": "tempest-network-smoke--1745814783", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0edb2c85b88b4b168a4b2e8c5ed4a05c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79d28498-fe", "ovs_interfaceid": "79d28498-fe9d-49dc-ad2c-bde432b239db", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  8 06:15:05 np0005475493 nova_compute[262220]: 2025-10-08 10:15:05.409 2 DEBUG oslo_concurrency.lockutils [req-eb6a668c-e2d8-4e4e-bc69-70e5d18e887d req-e553ac48-3e9d-4c34-87f2-509382fa0272 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Releasing lock "refresh_cache-ea469a2e-bf09-495c-9b5e-02ad38d32d40" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  8 06:15:05 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:15:05 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:15:05 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:15:05.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:15:05 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:15:05] "GET /metrics HTTP/1.1" 200 48476 "" "Prometheus/2.51.0"
Oct  8 06:15:05 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:15:05] "GET /metrics HTTP/1.1" 200 48476 "" "Prometheus/2.51.0"
Oct  8 06:15:06 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v915: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 2.0 KiB/s wr, 0 op/s
Oct  8 06:15:06 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:15:06 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac94005390 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:15:06 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:15:06 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac6c002600 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:15:06 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:15:06 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:15:06 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:15:06.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:15:06 np0005475493 nova_compute[262220]: 2025-10-08 10:15:06.857 2 DEBUG nova.compute.manager [req-fd34a631-d0f1-40b3-bd3e-331f39e7cb3e req-4d70929d-d8d1-45c6-bd34-3b5e9e515a61 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Received event network-vif-plugged-79d28498-fe9d-49dc-ad2c-bde432b239db external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  8 06:15:06 np0005475493 nova_compute[262220]: 2025-10-08 10:15:06.858 2 DEBUG oslo_concurrency.lockutils [req-fd34a631-d0f1-40b3-bd3e-331f39e7cb3e req-4d70929d-d8d1-45c6-bd34-3b5e9e515a61 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Acquiring lock "ea469a2e-bf09-495c-9b5e-02ad38d32d40-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  8 06:15:06 np0005475493 nova_compute[262220]: 2025-10-08 10:15:06.858 2 DEBUG oslo_concurrency.lockutils [req-fd34a631-d0f1-40b3-bd3e-331f39e7cb3e req-4d70929d-d8d1-45c6-bd34-3b5e9e515a61 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Lock "ea469a2e-bf09-495c-9b5e-02ad38d32d40-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  8 06:15:06 np0005475493 nova_compute[262220]: 2025-10-08 10:15:06.858 2 DEBUG oslo_concurrency.lockutils [req-fd34a631-d0f1-40b3-bd3e-331f39e7cb3e req-4d70929d-d8d1-45c6-bd34-3b5e9e515a61 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Lock "ea469a2e-bf09-495c-9b5e-02ad38d32d40-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  8 06:15:06 np0005475493 nova_compute[262220]: 2025-10-08 10:15:06.858 2 DEBUG nova.compute.manager [req-fd34a631-d0f1-40b3-bd3e-331f39e7cb3e req-4d70929d-d8d1-45c6-bd34-3b5e9e515a61 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] No waiting events found dispatching network-vif-plugged-79d28498-fe9d-49dc-ad2c-bde432b239db pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  8 06:15:06 np0005475493 nova_compute[262220]: 2025-10-08 10:15:06.858 2 WARNING nova.compute.manager [req-fd34a631-d0f1-40b3-bd3e-331f39e7cb3e req-4d70929d-d8d1-45c6-bd34-3b5e9e515a61 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Received unexpected event network-vif-plugged-79d28498-fe9d-49dc-ad2c-bde432b239db for instance with vm_state active and task_state None.#033[00m
Oct  8 06:15:07 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:15:07.155Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct  8 06:15:07 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:15:07.155Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct  8 06:15:07 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:15:07.156Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct  8 06:15:07 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:15:07 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac700031e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:15:07 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:15:07 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:15:07 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:15:07.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:15:07 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:15:07 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 06:15:08 np0005475493 nova_compute[262220]: 2025-10-08 10:15:08.005 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:15:08 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v916: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 2.0 KiB/s wr, 0 op/s
Oct  8 06:15:08 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:15:08 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_41] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca000b4f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:15:08 np0005475493 nova_compute[262220]: 2025-10-08 10:15:08.306 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:15:08 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:15:08 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac940053b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:15:08 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:15:08 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:15:08 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:15:08.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:15:09 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:15:09 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:15:09 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac6c002600 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:15:09 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:15:09 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:15:09 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:15:09.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:15:10 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v917: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 5.7 KiB/s wr, 1 op/s
Oct  8 06:15:10 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:15:10 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac700031e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:15:10 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:15:10 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_41] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca000b4f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:15:10 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:15:10 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:15:10 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:15:10.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:15:11 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:15:11 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac940053d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:15:11 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:15:11 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:15:11 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:15:11.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:15:12 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v918: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 5.7 KiB/s wr, 1 op/s
Oct  8 06:15:12 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:15:12 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac6c002600 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:15:12 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:15:12 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac700031e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:15:12 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:15:12 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:15:12 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:15:12.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:15:12 np0005475493 podman[274780]: 2025-10-08 10:15:12.928243699 +0000 UTC m=+0.085326399 container health_status 750e81e9592502f2fa35d047c8ae9bd75eb38ed415148db558454f5281397e7f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct  8 06:15:13 np0005475493 nova_compute[262220]: 2025-10-08 10:15:13.006 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:15:13 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:15:13 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_41] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca000b4f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:15:13 np0005475493 nova_compute[262220]: 2025-10-08 10:15:13.308 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:15:13 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:15:13 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:15:13 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:15:13.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:15:14 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:15:14 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v919: 353 pgs: 353 active+clean; 159 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.7 MiB/s wr, 25 op/s
Oct  8 06:15:14 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:15:14 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac940053f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:15:14 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:15:14 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac6c002600 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:15:14 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:15:14 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:15:14 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:15:14.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:15:15 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:15:15 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_41] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac6c002600 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:15:15 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:15:15 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:15:15 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:15:15.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:15:15 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:15:15] "GET /metrics HTTP/1.1" 200 48476 "" "Prometheus/2.51.0"
Oct  8 06:15:15 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:15:15] "GET /metrics HTTP/1.1" 200 48476 "" "Prometheus/2.51.0"
Oct  8 06:15:16 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v920: 353 pgs: 353 active+clean; 159 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.7 MiB/s wr, 25 op/s
Oct  8 06:15:16 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:15:16 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_41] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca000b4f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:15:16 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:15:16 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac94005410 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:15:16 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:15:16 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:15:16 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:15:16.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:15:17 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:15:17.157Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:15:17 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:15:17 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac6c002600 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:15:17 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:15:17 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:15:17 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:15:17.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:15:17 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  8 06:15:17 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  8 06:15:17 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:15:17 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 06:15:17 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 06:15:17 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 06:15:18 np0005475493 nova_compute[262220]: 2025-10-08 10:15:18.009 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:15:18 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 06:15:18 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 06:15:18 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 06:15:18 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 06:15:18 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v921: 353 pgs: 353 active+clean; 159 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.7 MiB/s wr, 25 op/s
Oct  8 06:15:18 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:15:18 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_41] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac6c002600 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:15:18 np0005475493 nova_compute[262220]: 2025-10-08 10:15:18.309 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:15:18 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:15:18 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca000b4f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:15:18 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:15:18 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:15:18 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:15:18.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:15:19 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:15:19 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:15:19 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac94005430 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:15:19 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:15:19 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:15:19 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:15:19.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:15:19 np0005475493 podman[274815]: 2025-10-08 10:15:19.906894438 +0000 UTC m=+0.067936118 container health_status 96c17e36c3e57b043a764fc4d64e20610b8683e4881cc1857643eca7aeb37784 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct  8 06:15:19 np0005475493 podman[274814]: 2025-10-08 10:15:19.907068094 +0000 UTC m=+0.071737402 container health_status 1ece418ccdf19a8019e8be8e665c938dc3c333817a211973536e2416f095c311 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=multipathd, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible)
Oct  8 06:15:20 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v922: 353 pgs: 353 active+clean; 167 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 29 op/s
Oct  8 06:15:20 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:15:20 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac700031e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:15:20 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Oct  8 06:15:20 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2620818139' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  8 06:15:20 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Oct  8 06:15:20 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2620818139' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  8 06:15:20 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:15:20 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac6c0047e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:15:20 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:15:20 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:15:20 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:15:20.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:15:21 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:15:21 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca000b4f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:15:21 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:15:21 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:15:21 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:15:21.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:15:22 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v923: 353 pgs: 353 active+clean; 167 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Oct  8 06:15:22 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:15:22 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac700031e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:15:22 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct  8 06:15:22 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:15:22 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct  8 06:15:22 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:15:22 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:15:22 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac700031e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:15:22 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:15:22 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:15:22 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:15:22.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:15:23 np0005475493 nova_compute[262220]: 2025-10-08 10:15:23.010 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:15:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:15:23 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac6c0047e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:15:23 np0005475493 podman[275029]: 2025-10-08 10:15:23.134089121 +0000 UTC m=+0.023879291 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 06:15:23 np0005475493 podman[275029]: 2025-10-08 10:15:23.228248344 +0000 UTC m=+0.118038484 container create 751be0a4ea5e3774afb634ec0fb1dc5d32e1c4923a671cc4d28e3368819a7fea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_wilbur, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  8 06:15:23 np0005475493 systemd[1]: Started libpod-conmon-751be0a4ea5e3774afb634ec0fb1dc5d32e1c4923a671cc4d28e3368819a7fea.scope.
Oct  8 06:15:23 np0005475493 nova_compute[262220]: 2025-10-08 10:15:23.311 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:15:23 np0005475493 systemd[1]: Started libcrun container.
Oct  8 06:15:23 np0005475493 podman[275029]: 2025-10-08 10:15:23.360275747 +0000 UTC m=+0.250065907 container init 751be0a4ea5e3774afb634ec0fb1dc5d32e1c4923a671cc4d28e3368819a7fea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_wilbur, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  8 06:15:23 np0005475493 podman[275029]: 2025-10-08 10:15:23.3712336 +0000 UTC m=+0.261023750 container start 751be0a4ea5e3774afb634ec0fb1dc5d32e1c4923a671cc4d28e3368819a7fea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_wilbur, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  8 06:15:23 np0005475493 happy_wilbur[275046]: 167 167
Oct  8 06:15:23 np0005475493 systemd[1]: libpod-751be0a4ea5e3774afb634ec0fb1dc5d32e1c4923a671cc4d28e3368819a7fea.scope: Deactivated successfully.
Oct  8 06:15:23 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:15:23 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:15:23 np0005475493 podman[275029]: 2025-10-08 10:15:23.393730475 +0000 UTC m=+0.283520615 container attach 751be0a4ea5e3774afb634ec0fb1dc5d32e1c4923a671cc4d28e3368819a7fea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_wilbur, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Oct  8 06:15:23 np0005475493 podman[275029]: 2025-10-08 10:15:23.394613854 +0000 UTC m=+0.284404014 container died 751be0a4ea5e3774afb634ec0fb1dc5d32e1c4923a671cc4d28e3368819a7fea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_wilbur, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  8 06:15:23 np0005475493 systemd[1]: var-lib-containers-storage-overlay-419d83d1abbdfa7ec69922c40793dc189e2e0003fb8d089888941dcb9d2581e0-merged.mount: Deactivated successfully.
Oct  8 06:15:23 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:15:23 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:15:23 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:15:23.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:15:23 np0005475493 podman[275029]: 2025-10-08 10:15:23.616491401 +0000 UTC m=+0.506281541 container remove 751be0a4ea5e3774afb634ec0fb1dc5d32e1c4923a671cc4d28e3368819a7fea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_wilbur, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct  8 06:15:23 np0005475493 systemd[1]: libpod-conmon-751be0a4ea5e3774afb634ec0fb1dc5d32e1c4923a671cc4d28e3368819a7fea.scope: Deactivated successfully.
Oct  8 06:15:23 np0005475493 podman[275071]: 2025-10-08 10:15:23.819689367 +0000 UTC m=+0.051197300 container create 0084798e784a0469155ebd49450f6302b75d3d1851cacc63bc55a6e11607b759 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_jemison, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  8 06:15:23 np0005475493 systemd[1]: Started libpod-conmon-0084798e784a0469155ebd49450f6302b75d3d1851cacc63bc55a6e11607b759.scope.
Oct  8 06:15:23 np0005475493 podman[275071]: 2025-10-08 10:15:23.804067103 +0000 UTC m=+0.035575066 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 06:15:23 np0005475493 systemd[1]: Started libcrun container.
Oct  8 06:15:23 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c3104f5e1672bd986c46cb8537ed366c4308593af5442ec2c2bc6401aad69a5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  8 06:15:23 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c3104f5e1672bd986c46cb8537ed366c4308593af5442ec2c2bc6401aad69a5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 06:15:23 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c3104f5e1672bd986c46cb8537ed366c4308593af5442ec2c2bc6401aad69a5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 06:15:23 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c3104f5e1672bd986c46cb8537ed366c4308593af5442ec2c2bc6401aad69a5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  8 06:15:23 np0005475493 podman[275071]: 2025-10-08 10:15:23.923819912 +0000 UTC m=+0.155327855 container init 0084798e784a0469155ebd49450f6302b75d3d1851cacc63bc55a6e11607b759 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_jemison, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  8 06:15:23 np0005475493 podman[275071]: 2025-10-08 10:15:23.929917627 +0000 UTC m=+0.161425570 container start 0084798e784a0469155ebd49450f6302b75d3d1851cacc63bc55a6e11607b759 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_jemison, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct  8 06:15:23 np0005475493 podman[275071]: 2025-10-08 10:15:23.934450334 +0000 UTC m=+0.165958277 container attach 0084798e784a0469155ebd49450f6302b75d3d1851cacc63bc55a6e11607b759 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_jemison, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  8 06:15:24 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:15:24 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v924: 353 pgs: 353 active+clean; 167 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 102 op/s
Oct  8 06:15:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:15:24 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca000b4f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:15:24 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct  8 06:15:24 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:15:24 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct  8 06:15:24 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:15:24 np0005475493 hardcore_jemison[275086]: [
Oct  8 06:15:24 np0005475493 hardcore_jemison[275086]:    {
Oct  8 06:15:24 np0005475493 hardcore_jemison[275086]:        "available": false,
Oct  8 06:15:24 np0005475493 hardcore_jemison[275086]:        "being_replaced": false,
Oct  8 06:15:24 np0005475493 hardcore_jemison[275086]:        "ceph_device_lvm": false,
Oct  8 06:15:24 np0005475493 hardcore_jemison[275086]:        "device_id": "QEMU_DVD-ROM_QM00001",
Oct  8 06:15:24 np0005475493 hardcore_jemison[275086]:        "lsm_data": {},
Oct  8 06:15:24 np0005475493 hardcore_jemison[275086]:        "lvs": [],
Oct  8 06:15:24 np0005475493 hardcore_jemison[275086]:        "path": "/dev/sr0",
Oct  8 06:15:24 np0005475493 hardcore_jemison[275086]:        "rejected_reasons": [
Oct  8 06:15:24 np0005475493 hardcore_jemison[275086]:            "Has a FileSystem",
Oct  8 06:15:24 np0005475493 hardcore_jemison[275086]:            "Insufficient space (<5GB)"
Oct  8 06:15:24 np0005475493 hardcore_jemison[275086]:        ],
Oct  8 06:15:24 np0005475493 hardcore_jemison[275086]:        "sys_api": {
Oct  8 06:15:24 np0005475493 hardcore_jemison[275086]:            "actuators": null,
Oct  8 06:15:24 np0005475493 hardcore_jemison[275086]:            "device_nodes": [
Oct  8 06:15:24 np0005475493 hardcore_jemison[275086]:                "sr0"
Oct  8 06:15:24 np0005475493 hardcore_jemison[275086]:            ],
Oct  8 06:15:24 np0005475493 hardcore_jemison[275086]:            "devname": "sr0",
Oct  8 06:15:24 np0005475493 hardcore_jemison[275086]:            "human_readable_size": "482.00 KB",
Oct  8 06:15:24 np0005475493 hardcore_jemison[275086]:            "id_bus": "ata",
Oct  8 06:15:24 np0005475493 hardcore_jemison[275086]:            "model": "QEMU DVD-ROM",
Oct  8 06:15:24 np0005475493 hardcore_jemison[275086]:            "nr_requests": "2",
Oct  8 06:15:24 np0005475493 hardcore_jemison[275086]:            "parent": "/dev/sr0",
Oct  8 06:15:24 np0005475493 hardcore_jemison[275086]:            "partitions": {},
Oct  8 06:15:24 np0005475493 hardcore_jemison[275086]:            "path": "/dev/sr0",
Oct  8 06:15:24 np0005475493 hardcore_jemison[275086]:            "removable": "1",
Oct  8 06:15:24 np0005475493 hardcore_jemison[275086]:            "rev": "2.5+",
Oct  8 06:15:24 np0005475493 hardcore_jemison[275086]:            "ro": "0",
Oct  8 06:15:24 np0005475493 hardcore_jemison[275086]:            "rotational": "0",
Oct  8 06:15:24 np0005475493 hardcore_jemison[275086]:            "sas_address": "",
Oct  8 06:15:24 np0005475493 hardcore_jemison[275086]:            "sas_device_handle": "",
Oct  8 06:15:24 np0005475493 hardcore_jemison[275086]:            "scheduler_mode": "mq-deadline",
Oct  8 06:15:24 np0005475493 hardcore_jemison[275086]:            "sectors": 0,
Oct  8 06:15:24 np0005475493 hardcore_jemison[275086]:            "sectorsize": "2048",
Oct  8 06:15:24 np0005475493 hardcore_jemison[275086]:            "size": 493568.0,
Oct  8 06:15:24 np0005475493 hardcore_jemison[275086]:            "support_discard": "2048",
Oct  8 06:15:24 np0005475493 hardcore_jemison[275086]:            "type": "disk",
Oct  8 06:15:24 np0005475493 hardcore_jemison[275086]:            "vendor": "QEMU"
Oct  8 06:15:24 np0005475493 hardcore_jemison[275086]:        }
Oct  8 06:15:24 np0005475493 hardcore_jemison[275086]:    }
Oct  8 06:15:24 np0005475493 hardcore_jemison[275086]: ]
Oct  8 06:15:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:15:24 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac700031e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:15:24 np0005475493 systemd[1]: libpod-0084798e784a0469155ebd49450f6302b75d3d1851cacc63bc55a6e11607b759.scope: Deactivated successfully.
Oct  8 06:15:24 np0005475493 podman[276393]: 2025-10-08 10:15:24.766316862 +0000 UTC m=+0.024537302 container died 0084798e784a0469155ebd49450f6302b75d3d1851cacc63bc55a6e11607b759 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_jemison, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True)
Oct  8 06:15:24 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:15:24 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:15:24 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:15:24.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:15:24 np0005475493 systemd[1]: var-lib-containers-storage-overlay-3c3104f5e1672bd986c46cb8537ed366c4308593af5442ec2c2bc6401aad69a5-merged.mount: Deactivated successfully.
Oct  8 06:15:24 np0005475493 podman[276393]: 2025-10-08 10:15:24.80382982 +0000 UTC m=+0.062050240 container remove 0084798e784a0469155ebd49450f6302b75d3d1851cacc63bc55a6e11607b759 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_jemison, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  8 06:15:24 np0005475493 systemd[1]: libpod-conmon-0084798e784a0469155ebd49450f6302b75d3d1851cacc63bc55a6e11607b759.scope: Deactivated successfully.
Oct  8 06:15:24 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  8 06:15:24 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:15:24 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  8 06:15:24 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:15:24 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  8 06:15:24 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  8 06:15:24 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct  8 06:15:24 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  8 06:15:24 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct  8 06:15:24 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:15:24 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct  8 06:15:24 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:15:24 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct  8 06:15:24 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  8 06:15:24 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct  8 06:15:24 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  8 06:15:24 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  8 06:15:24 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  8 06:15:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:15:25 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac700031e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:15:25 np0005475493 podman[276501]: 2025-10-08 10:15:25.468159011 +0000 UTC m=+0.043139101 container create 2978f8ea8183bf47890c0a14b87fdcd3fb7730c7bf1d0208965c12cf6de295c9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_euclid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  8 06:15:25 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:15:25 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:15:25 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:15:25 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:15:25 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  8 06:15:25 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:15:25 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:15:25 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  8 06:15:25 np0005475493 systemd[1]: Started libpod-conmon-2978f8ea8183bf47890c0a14b87fdcd3fb7730c7bf1d0208965c12cf6de295c9.scope.
Oct  8 06:15:25 np0005475493 systemd[1]: Started libcrun container.
Oct  8 06:15:25 np0005475493 podman[276501]: 2025-10-08 10:15:25.452841067 +0000 UTC m=+0.027821187 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 06:15:25 np0005475493 podman[276501]: 2025-10-08 10:15:25.56157192 +0000 UTC m=+0.136552040 container init 2978f8ea8183bf47890c0a14b87fdcd3fb7730c7bf1d0208965c12cf6de295c9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_euclid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default)
Oct  8 06:15:25 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:15:25 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:15:25 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:15:25.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:15:25 np0005475493 podman[276501]: 2025-10-08 10:15:25.570263711 +0000 UTC m=+0.145243811 container start 2978f8ea8183bf47890c0a14b87fdcd3fb7730c7bf1d0208965c12cf6de295c9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_euclid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Oct  8 06:15:25 np0005475493 silly_euclid[276518]: 167 167
Oct  8 06:15:25 np0005475493 podman[276501]: 2025-10-08 10:15:25.574158305 +0000 UTC m=+0.149138415 container attach 2978f8ea8183bf47890c0a14b87fdcd3fb7730c7bf1d0208965c12cf6de295c9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_euclid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  8 06:15:25 np0005475493 podman[276501]: 2025-10-08 10:15:25.574775916 +0000 UTC m=+0.149756046 container died 2978f8ea8183bf47890c0a14b87fdcd3fb7730c7bf1d0208965c12cf6de295c9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_euclid, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Oct  8 06:15:25 np0005475493 systemd[1]: libpod-2978f8ea8183bf47890c0a14b87fdcd3fb7730c7bf1d0208965c12cf6de295c9.scope: Deactivated successfully.
Oct  8 06:15:25 np0005475493 systemd[1]: var-lib-containers-storage-overlay-629f31cf0938bfa229acb8da33f5cae40a53bb6f135c2e131c6a9a256eb111c9-merged.mount: Deactivated successfully.
Oct  8 06:15:25 np0005475493 podman[276501]: 2025-10-08 10:15:25.624990963 +0000 UTC m=+0.199971073 container remove 2978f8ea8183bf47890c0a14b87fdcd3fb7730c7bf1d0208965c12cf6de295c9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_euclid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Oct  8 06:15:25 np0005475493 systemd[1]: libpod-conmon-2978f8ea8183bf47890c0a14b87fdcd3fb7730c7bf1d0208965c12cf6de295c9.scope: Deactivated successfully.
Oct  8 06:15:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:15:25] "GET /metrics HTTP/1.1" 200 48478 "" "Prometheus/2.51.0"
Oct  8 06:15:25 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:15:25] "GET /metrics HTTP/1.1" 200 48478 "" "Prometheus/2.51.0"
Oct  8 06:15:25 np0005475493 podman[276542]: 2025-10-08 10:15:25.79192529 +0000 UTC m=+0.045633970 container create a33e39537e4a78a330dfcd8fc752c37dc0610457f10427d4edcb350a614dcb37 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_payne, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Oct  8 06:15:25 np0005475493 systemd[1]: Started libpod-conmon-a33e39537e4a78a330dfcd8fc752c37dc0610457f10427d4edcb350a614dcb37.scope.
Oct  8 06:15:25 np0005475493 podman[276542]: 2025-10-08 10:15:25.770675966 +0000 UTC m=+0.024384676 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 06:15:25 np0005475493 systemd[1]: Started libcrun container.
Oct  8 06:15:25 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0dc4130190bd0708f84de38f2254b23f9004228e25b21392b0f0d91a32f3618/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  8 06:15:25 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0dc4130190bd0708f84de38f2254b23f9004228e25b21392b0f0d91a32f3618/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 06:15:25 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0dc4130190bd0708f84de38f2254b23f9004228e25b21392b0f0d91a32f3618/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 06:15:25 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0dc4130190bd0708f84de38f2254b23f9004228e25b21392b0f0d91a32f3618/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  8 06:15:25 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0dc4130190bd0708f84de38f2254b23f9004228e25b21392b0f0d91a32f3618/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  8 06:15:25 np0005475493 podman[276542]: 2025-10-08 10:15:25.892237132 +0000 UTC m=+0.145945812 container init a33e39537e4a78a330dfcd8fc752c37dc0610457f10427d4edcb350a614dcb37 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_payne, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  8 06:15:25 np0005475493 podman[276542]: 2025-10-08 10:15:25.902205293 +0000 UTC m=+0.155913963 container start a33e39537e4a78a330dfcd8fc752c37dc0610457f10427d4edcb350a614dcb37 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_payne, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  8 06:15:25 np0005475493 podman[276542]: 2025-10-08 10:15:25.905932294 +0000 UTC m=+0.159640994 container attach a33e39537e4a78a330dfcd8fc752c37dc0610457f10427d4edcb350a614dcb37 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_payne, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  8 06:15:26 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v925: 353 pgs: 353 active+clean; 167 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 102 KiB/s wr, 78 op/s
Oct  8 06:15:26 np0005475493 quirky_payne[276558]: --> passed data devices: 0 physical, 1 LVM
Oct  8 06:15:26 np0005475493 quirky_payne[276558]: --> All data devices are unavailable
Oct  8 06:15:26 np0005475493 systemd[1]: libpod-a33e39537e4a78a330dfcd8fc752c37dc0610457f10427d4edcb350a614dcb37.scope: Deactivated successfully.
Oct  8 06:15:26 np0005475493 podman[276542]: 2025-10-08 10:15:26.290309795 +0000 UTC m=+0.544018465 container died a33e39537e4a78a330dfcd8fc752c37dc0610457f10427d4edcb350a614dcb37 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_payne, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Oct  8 06:15:26 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:15:26 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac6c0047e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:15:26 np0005475493 systemd[1]: var-lib-containers-storage-overlay-d0dc4130190bd0708f84de38f2254b23f9004228e25b21392b0f0d91a32f3618-merged.mount: Deactivated successfully.
Oct  8 06:15:26 np0005475493 podman[276542]: 2025-10-08 10:15:26.33236077 +0000 UTC m=+0.586069440 container remove a33e39537e4a78a330dfcd8fc752c37dc0610457f10427d4edcb350a614dcb37 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_payne, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1)
Oct  8 06:15:26 np0005475493 systemd[1]: libpod-conmon-a33e39537e4a78a330dfcd8fc752c37dc0610457f10427d4edcb350a614dcb37.scope: Deactivated successfully.
Oct  8 06:15:26 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:15:26 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca000b4f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:15:26 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:15:26 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:15:26 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:15:26.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:15:26 np0005475493 podman[276681]: 2025-10-08 10:15:26.953850811 +0000 UTC m=+0.039372760 container create f1d26108791e7b7f34a7efcd453e8e2ac70e7ad4b8f82607207a0c0c85a07a44 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_banach, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  8 06:15:26 np0005475493 systemd[1]: Started libpod-conmon-f1d26108791e7b7f34a7efcd453e8e2ac70e7ad4b8f82607207a0c0c85a07a44.scope.
Oct  8 06:15:27 np0005475493 systemd[1]: Started libcrun container.
Oct  8 06:15:27 np0005475493 podman[276681]: 2025-10-08 10:15:26.937541416 +0000 UTC m=+0.023063395 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 06:15:27 np0005475493 podman[276681]: 2025-10-08 10:15:27.03420303 +0000 UTC m=+0.119725009 container init f1d26108791e7b7f34a7efcd453e8e2ac70e7ad4b8f82607207a0c0c85a07a44 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_banach, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  8 06:15:27 np0005475493 podman[276681]: 2025-10-08 10:15:27.041468274 +0000 UTC m=+0.126990223 container start f1d26108791e7b7f34a7efcd453e8e2ac70e7ad4b8f82607207a0c0c85a07a44 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_banach, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  8 06:15:27 np0005475493 podman[276681]: 2025-10-08 10:15:27.044485971 +0000 UTC m=+0.130007960 container attach f1d26108791e7b7f34a7efcd453e8e2ac70e7ad4b8f82607207a0c0c85a07a44 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_banach, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  8 06:15:27 np0005475493 exciting_banach[276697]: 167 167
Oct  8 06:15:27 np0005475493 podman[276681]: 2025-10-08 10:15:27.049102319 +0000 UTC m=+0.134624298 container died f1d26108791e7b7f34a7efcd453e8e2ac70e7ad4b8f82607207a0c0c85a07a44 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_banach, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Oct  8 06:15:27 np0005475493 systemd[1]: libpod-f1d26108791e7b7f34a7efcd453e8e2ac70e7ad4b8f82607207a0c0c85a07a44.scope: Deactivated successfully.
Oct  8 06:15:27 np0005475493 systemd[1]: var-lib-containers-storage-overlay-665a6ac2fce105bdc07a9f3599e21d2078ad6e65911087706dfe0970b4a2e787-merged.mount: Deactivated successfully.
Oct  8 06:15:27 np0005475493 podman[276681]: 2025-10-08 10:15:27.082739483 +0000 UTC m=+0.168261442 container remove f1d26108791e7b7f34a7efcd453e8e2ac70e7ad4b8f82607207a0c0c85a07a44 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_banach, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct  8 06:15:27 np0005475493 systemd[1]: libpod-conmon-f1d26108791e7b7f34a7efcd453e8e2ac70e7ad4b8f82607207a0c0c85a07a44.scope: Deactivated successfully.
Oct  8 06:15:27 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:15:27.157Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct  8 06:15:27 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:15:27.161Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:15:27 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:15:27 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac78002c90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:15:27 np0005475493 podman[276722]: 2025-10-08 10:15:27.259076493 +0000 UTC m=+0.043670337 container create 378eccdb956d6119c65977b195aa2434ab18554ac334fa2534fe7b39d65cc717 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_dirac, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct  8 06:15:27 np0005475493 systemd[1]: Started libpod-conmon-378eccdb956d6119c65977b195aa2434ab18554ac334fa2534fe7b39d65cc717.scope.
Oct  8 06:15:27 np0005475493 systemd[1]: Started libcrun container.
Oct  8 06:15:27 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/399565bcd0bc24d3975bf9b74a7a29a4a1d0a44ec4426331050bad5fbd24180c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  8 06:15:27 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/399565bcd0bc24d3975bf9b74a7a29a4a1d0a44ec4426331050bad5fbd24180c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 06:15:27 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/399565bcd0bc24d3975bf9b74a7a29a4a1d0a44ec4426331050bad5fbd24180c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 06:15:27 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/399565bcd0bc24d3975bf9b74a7a29a4a1d0a44ec4426331050bad5fbd24180c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  8 06:15:27 np0005475493 podman[276722]: 2025-10-08 10:15:27.242233641 +0000 UTC m=+0.026827515 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 06:15:27 np0005475493 podman[276722]: 2025-10-08 10:15:27.341279942 +0000 UTC m=+0.125873806 container init 378eccdb956d6119c65977b195aa2434ab18554ac334fa2534fe7b39d65cc717 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_dirac, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct  8 06:15:27 np0005475493 podman[276722]: 2025-10-08 10:15:27.350314643 +0000 UTC m=+0.134908487 container start 378eccdb956d6119c65977b195aa2434ab18554ac334fa2534fe7b39d65cc717 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_dirac, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  8 06:15:27 np0005475493 podman[276722]: 2025-10-08 10:15:27.353457895 +0000 UTC m=+0.138051759 container attach 378eccdb956d6119c65977b195aa2434ab18554ac334fa2534fe7b39d65cc717 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_dirac, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Oct  8 06:15:27 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:15:27 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:15:27 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:15:27.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:15:27 np0005475493 musing_dirac[276739]: {
Oct  8 06:15:27 np0005475493 musing_dirac[276739]:    "1": [
Oct  8 06:15:27 np0005475493 musing_dirac[276739]:        {
Oct  8 06:15:27 np0005475493 musing_dirac[276739]:            "devices": [
Oct  8 06:15:27 np0005475493 musing_dirac[276739]:                "/dev/loop3"
Oct  8 06:15:27 np0005475493 musing_dirac[276739]:            ],
Oct  8 06:15:27 np0005475493 musing_dirac[276739]:            "lv_name": "ceph_lv0",
Oct  8 06:15:27 np0005475493 musing_dirac[276739]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  8 06:15:27 np0005475493 musing_dirac[276739]:            "lv_size": "21470642176",
Oct  8 06:15:27 np0005475493 musing_dirac[276739]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=787292cc-8154-50c4-9e00-e9be3e817149,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=85fe3e7b-5e0f-4a19-934c-310215b2e933,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct  8 06:15:27 np0005475493 musing_dirac[276739]:            "lv_uuid": "znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj",
Oct  8 06:15:27 np0005475493 musing_dirac[276739]:            "name": "ceph_lv0",
Oct  8 06:15:27 np0005475493 musing_dirac[276739]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  8 06:15:27 np0005475493 musing_dirac[276739]:            "tags": {
Oct  8 06:15:27 np0005475493 musing_dirac[276739]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  8 06:15:27 np0005475493 musing_dirac[276739]:                "ceph.block_uuid": "znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj",
Oct  8 06:15:27 np0005475493 musing_dirac[276739]:                "ceph.cephx_lockbox_secret": "",
Oct  8 06:15:27 np0005475493 musing_dirac[276739]:                "ceph.cluster_fsid": "787292cc-8154-50c4-9e00-e9be3e817149",
Oct  8 06:15:27 np0005475493 musing_dirac[276739]:                "ceph.cluster_name": "ceph",
Oct  8 06:15:27 np0005475493 musing_dirac[276739]:                "ceph.crush_device_class": "",
Oct  8 06:15:27 np0005475493 musing_dirac[276739]:                "ceph.encrypted": "0",
Oct  8 06:15:27 np0005475493 musing_dirac[276739]:                "ceph.osd_fsid": "85fe3e7b-5e0f-4a19-934c-310215b2e933",
Oct  8 06:15:27 np0005475493 musing_dirac[276739]:                "ceph.osd_id": "1",
Oct  8 06:15:27 np0005475493 musing_dirac[276739]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  8 06:15:27 np0005475493 musing_dirac[276739]:                "ceph.type": "block",
Oct  8 06:15:27 np0005475493 musing_dirac[276739]:                "ceph.vdo": "0",
Oct  8 06:15:27 np0005475493 musing_dirac[276739]:                "ceph.with_tpm": "0"
Oct  8 06:15:27 np0005475493 musing_dirac[276739]:            },
Oct  8 06:15:27 np0005475493 musing_dirac[276739]:            "type": "block",
Oct  8 06:15:27 np0005475493 musing_dirac[276739]:            "vg_name": "ceph_vg0"
Oct  8 06:15:27 np0005475493 musing_dirac[276739]:        }
Oct  8 06:15:27 np0005475493 musing_dirac[276739]:    ]
Oct  8 06:15:27 np0005475493 musing_dirac[276739]: }
Oct  8 06:15:27 np0005475493 systemd[1]: libpod-378eccdb956d6119c65977b195aa2434ab18554ac334fa2534fe7b39d65cc717.scope: Deactivated successfully.
Oct  8 06:15:27 np0005475493 conmon[276739]: conmon 378eccdb956d6119c659 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-378eccdb956d6119c65977b195aa2434ab18554ac334fa2534fe7b39d65cc717.scope/container/memory.events
Oct  8 06:15:27 np0005475493 podman[276722]: 2025-10-08 10:15:27.667397307 +0000 UTC m=+0.451991171 container died 378eccdb956d6119c65977b195aa2434ab18554ac334fa2534fe7b39d65cc717 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_dirac, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  8 06:15:27 np0005475493 systemd[1]: var-lib-containers-storage-overlay-399565bcd0bc24d3975bf9b74a7a29a4a1d0a44ec4426331050bad5fbd24180c-merged.mount: Deactivated successfully.
Oct  8 06:15:27 np0005475493 podman[276722]: 2025-10-08 10:15:27.709782353 +0000 UTC m=+0.494376197 container remove 378eccdb956d6119c65977b195aa2434ab18554ac334fa2534fe7b39d65cc717 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_dirac, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True)
Oct  8 06:15:27 np0005475493 systemd[1]: libpod-conmon-378eccdb956d6119c65977b195aa2434ab18554ac334fa2534fe7b39d65cc717.scope: Deactivated successfully.
Oct  8 06:15:27 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:15:27 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 06:15:28 np0005475493 nova_compute[262220]: 2025-10-08 10:15:28.012 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:15:28 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v926: 353 pgs: 353 active+clean; 167 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 102 KiB/s wr, 78 op/s
Oct  8 06:15:28 np0005475493 podman[276876]: 2025-10-08 10:15:28.304392958 +0000 UTC m=+0.041131956 container create eee85027f91ef248f1f07bf738aaddbc24da0ad90ce017e827bc0f752abe5654 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_greider, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  8 06:15:28 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:15:28 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_44] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c001ab0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:15:28 np0005475493 nova_compute[262220]: 2025-10-08 10:15:28.349 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:15:28 np0005475493 systemd[1]: Started libpod-conmon-eee85027f91ef248f1f07bf738aaddbc24da0ad90ce017e827bc0f752abe5654.scope.
Oct  8 06:15:28 np0005475493 podman[276876]: 2025-10-08 10:15:28.289022202 +0000 UTC m=+0.025761230 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 06:15:28 np0005475493 systemd[1]: Started libcrun container.
Oct  8 06:15:28 np0005475493 podman[276876]: 2025-10-08 10:15:28.419385312 +0000 UTC m=+0.156124360 container init eee85027f91ef248f1f07bf738aaddbc24da0ad90ce017e827bc0f752abe5654 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_greider, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  8 06:15:28 np0005475493 podman[276876]: 2025-10-08 10:15:28.427360859 +0000 UTC m=+0.164099857 container start eee85027f91ef248f1f07bf738aaddbc24da0ad90ce017e827bc0f752abe5654 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_greider, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  8 06:15:28 np0005475493 podman[276876]: 2025-10-08 10:15:28.430599553 +0000 UTC m=+0.167338571 container attach eee85027f91ef248f1f07bf738aaddbc24da0ad90ce017e827bc0f752abe5654 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_greider, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  8 06:15:28 np0005475493 determined_greider[276893]: 167 167
Oct  8 06:15:28 np0005475493 systemd[1]: libpod-eee85027f91ef248f1f07bf738aaddbc24da0ad90ce017e827bc0f752abe5654.scope: Deactivated successfully.
Oct  8 06:15:28 np0005475493 conmon[276893]: conmon eee85027f91ef248f1f0 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-eee85027f91ef248f1f07bf738aaddbc24da0ad90ce017e827bc0f752abe5654.scope/container/memory.events
Oct  8 06:15:28 np0005475493 podman[276898]: 2025-10-08 10:15:28.482273998 +0000 UTC m=+0.030892567 container died eee85027f91ef248f1f07bf738aaddbc24da0ad90ce017e827bc0f752abe5654 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_greider, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Oct  8 06:15:28 np0005475493 systemd[1]: var-lib-containers-storage-overlay-82a631b398f69a9460cda078958d386ae210489e09578fbad94a4a0463545263-merged.mount: Deactivated successfully.
Oct  8 06:15:28 np0005475493 podman[276898]: 2025-10-08 10:15:28.527945889 +0000 UTC m=+0.076564458 container remove eee85027f91ef248f1f07bf738aaddbc24da0ad90ce017e827bc0f752abe5654 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_greider, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  8 06:15:28 np0005475493 systemd[1]: libpod-conmon-eee85027f91ef248f1f07bf738aaddbc24da0ad90ce017e827bc0f752abe5654.scope: Deactivated successfully.
Oct  8 06:15:28 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:15:28 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_43] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac6c0047e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:15:28 np0005475493 podman[276920]: 2025-10-08 10:15:28.756284634 +0000 UTC m=+0.049452223 container create be34c998fafdbf02c16e4cad1dab436738b820fe4c3b2cfae4e6e60a1b4c082e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_black, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Oct  8 06:15:28 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:15:28 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 06:15:28 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:15:28.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 06:15:28 np0005475493 systemd[1]: Started libpod-conmon-be34c998fafdbf02c16e4cad1dab436738b820fe4c3b2cfae4e6e60a1b4c082e.scope.
Oct  8 06:15:28 np0005475493 podman[276920]: 2025-10-08 10:15:28.733954915 +0000 UTC m=+0.027122494 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 06:15:28 np0005475493 systemd[1]: Started libcrun container.
Oct  8 06:15:28 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c062af1fa40a308cec23b42585dc5459833e5f2693022264275a781097eec226/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  8 06:15:28 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c062af1fa40a308cec23b42585dc5459833e5f2693022264275a781097eec226/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 06:15:28 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c062af1fa40a308cec23b42585dc5459833e5f2693022264275a781097eec226/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 06:15:28 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c062af1fa40a308cec23b42585dc5459833e5f2693022264275a781097eec226/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  8 06:15:28 np0005475493 podman[276920]: 2025-10-08 10:15:28.850717857 +0000 UTC m=+0.143885426 container init be34c998fafdbf02c16e4cad1dab436738b820fe4c3b2cfae4e6e60a1b4c082e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_black, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  8 06:15:28 np0005475493 podman[276920]: 2025-10-08 10:15:28.86166435 +0000 UTC m=+0.154831899 container start be34c998fafdbf02c16e4cad1dab436738b820fe4c3b2cfae4e6e60a1b4c082e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_black, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  8 06:15:28 np0005475493 podman[276920]: 2025-10-08 10:15:28.865395499 +0000 UTC m=+0.158563058 container attach be34c998fafdbf02c16e4cad1dab436738b820fe4c3b2cfae4e6e60a1b4c082e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_black, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  8 06:15:29 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:15:29 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:15:29 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca000b4f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:15:29 np0005475493 lvm[277012]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct  8 06:15:29 np0005475493 lvm[277012]: VG ceph_vg0 finished
Oct  8 06:15:29 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:15:29 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 06:15:29 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:15:29.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 06:15:29 np0005475493 nostalgic_black[276937]: {}
Oct  8 06:15:29 np0005475493 systemd[1]: libpod-be34c998fafdbf02c16e4cad1dab436738b820fe4c3b2cfae4e6e60a1b4c082e.scope: Deactivated successfully.
Oct  8 06:15:29 np0005475493 systemd[1]: libpod-be34c998fafdbf02c16e4cad1dab436738b820fe4c3b2cfae4e6e60a1b4c082e.scope: Consumed 1.167s CPU time.
Oct  8 06:15:29 np0005475493 podman[276920]: 2025-10-08 10:15:29.628349298 +0000 UTC m=+0.921516847 container died be34c998fafdbf02c16e4cad1dab436738b820fe4c3b2cfae4e6e60a1b4c082e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_black, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Oct  8 06:15:29 np0005475493 systemd[1]: var-lib-containers-storage-overlay-c062af1fa40a308cec23b42585dc5459833e5f2693022264275a781097eec226-merged.mount: Deactivated successfully.
Oct  8 06:15:29 np0005475493 podman[276920]: 2025-10-08 10:15:29.669912856 +0000 UTC m=+0.963080405 container remove be34c998fafdbf02c16e4cad1dab436738b820fe4c3b2cfae4e6e60a1b4c082e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_black, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct  8 06:15:29 np0005475493 systemd[1]: libpod-conmon-be34c998fafdbf02c16e4cad1dab436738b820fe4c3b2cfae4e6e60a1b4c082e.scope: Deactivated successfully.
Oct  8 06:15:29 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  8 06:15:29 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:15:29 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  8 06:15:29 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:15:30 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v927: 353 pgs: 353 active+clean; 167 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 103 KiB/s wr, 79 op/s
Oct  8 06:15:30 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:15:30 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac78002c90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:15:30 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:15:30 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_44] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c001ab0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:15:30 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:15:30 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:15:30 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:15:30 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:15:30 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:15:30.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:15:31 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:15:31 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_43] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac6c0047e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:15:31 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:15:31 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 06:15:31 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:15:31.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 06:15:32 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v928: 353 pgs: 353 active+clean; 167 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 74 op/s
Oct  8 06:15:32 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:15:32 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca000b4f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:15:32 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:15:32 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac78002c90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:15:32 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:15:32 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 06:15:32 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:15:32.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 06:15:32 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  8 06:15:32 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  8 06:15:32 np0005475493 podman[277058]: 2025-10-08 10:15:32.918487616 +0000 UTC m=+0.073193579 container health_status 2ac58bf9d2c7421f24478941b0f23af12cc30298ac84467db16bf52ecb1157c3 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, io.buildah.version=1.41.3)
Oct  8 06:15:33 np0005475493 nova_compute[262220]: 2025-10-08 10:15:33.014 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:15:33 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:15:33 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_44] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c003d00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:15:33 np0005475493 nova_compute[262220]: 2025-10-08 10:15:33.351 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:15:33 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:15:33 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:15:33 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:15:33.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:15:34 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:15:34 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v929: 353 pgs: 353 active+clean; 200 MiB data, 348 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 139 op/s
Oct  8 06:15:34 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:15:34 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_43] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac6c0047e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:15:34 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:15:34 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca000b4f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:15:34 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:15:34 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 06:15:34 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:15:34.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 06:15:35 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:15:35 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac78002c90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:15:35 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:15:35 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:15:35 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:15:35.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:15:35 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:15:35] "GET /metrics HTTP/1.1" 200 48474 "" "Prometheus/2.51.0"
Oct  8 06:15:35 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:15:35] "GET /metrics HTTP/1.1" 200 48474 "" "Prometheus/2.51.0"
Oct  8 06:15:36 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v930: 353 pgs: 353 active+clean; 200 MiB data, 348 MiB used, 60 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Oct  8 06:15:36 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:15:36 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_44] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c003d00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:15:36 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:15:36 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_43] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac6c0047e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:15:36 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:15:36 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:15:36 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:15:36.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:15:37 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:15:37.162Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct  8 06:15:37 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:15:37.162Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct  8 06:15:37 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:15:37 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca000b4f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:15:37 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:15:37 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 06:15:37 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:15:37.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 06:15:37 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:15:37 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 06:15:38 np0005475493 nova_compute[262220]: 2025-10-08 10:15:38.017 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:15:38 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v931: 353 pgs: 353 active+clean; 200 MiB data, 348 MiB used, 60 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Oct  8 06:15:38 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:15:38 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac78002c90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:15:38 np0005475493 nova_compute[262220]: 2025-10-08 10:15:38.353 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:15:38 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:15:38 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_44] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c003d00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:15:38 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:15:38 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:15:38 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:15:38.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:15:39 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:15:39 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:15:39 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_43] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac6c0047e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:15:39 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:15:39 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:15:39 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:15:39.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:15:39 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:15:39.628 163175 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=9, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '2a:d6:ca', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'b2:8b:1e:40:84:a3'}, ipsec=False) old=SB_Global(nb_cfg=8) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  8 06:15:39 np0005475493 nova_compute[262220]: 2025-10-08 10:15:39.628 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:15:39 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:15:39.629 163175 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  8 06:15:39 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:15:39.630 163175 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=26869918-b723-425c-a2e1-0d697f3d0fec, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '9'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  8 06:15:39 np0005475493 nova_compute[262220]: 2025-10-08 10:15:39.811 2 DEBUG nova.compute.manager [req-cff2f628-77bf-453b-96fb-95f01c352100 req-cfde9a3d-d9ed-4803-9e8e-e822c178b303 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Received event network-changed-79d28498-fe9d-49dc-ad2c-bde432b239db external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  8 06:15:39 np0005475493 nova_compute[262220]: 2025-10-08 10:15:39.811 2 DEBUG nova.compute.manager [req-cff2f628-77bf-453b-96fb-95f01c352100 req-cfde9a3d-d9ed-4803-9e8e-e822c178b303 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Refreshing instance network info cache due to event network-changed-79d28498-fe9d-49dc-ad2c-bde432b239db. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  8 06:15:39 np0005475493 nova_compute[262220]: 2025-10-08 10:15:39.812 2 DEBUG oslo_concurrency.lockutils [req-cff2f628-77bf-453b-96fb-95f01c352100 req-cfde9a3d-d9ed-4803-9e8e-e822c178b303 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Acquiring lock "refresh_cache-ea469a2e-bf09-495c-9b5e-02ad38d32d40" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  8 06:15:39 np0005475493 nova_compute[262220]: 2025-10-08 10:15:39.812 2 DEBUG oslo_concurrency.lockutils [req-cff2f628-77bf-453b-96fb-95f01c352100 req-cfde9a3d-d9ed-4803-9e8e-e822c178b303 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Acquired lock "refresh_cache-ea469a2e-bf09-495c-9b5e-02ad38d32d40" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  8 06:15:39 np0005475493 nova_compute[262220]: 2025-10-08 10:15:39.812 2 DEBUG nova.network.neutron [req-cff2f628-77bf-453b-96fb-95f01c352100 req-cfde9a3d-d9ed-4803-9e8e-e822c178b303 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Refreshing network info cache for port 79d28498-fe9d-49dc-ad2c-bde432b239db _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  8 06:15:40 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v932: 353 pgs: 353 active+clean; 200 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 328 KiB/s rd, 2.2 MiB/s wr, 67 op/s
Oct  8 06:15:40 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:15:40 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca000b4f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:15:40 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:15:40 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac78003d90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:15:40 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:15:40 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:15:40 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:15:40.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:15:40 np0005475493 nova_compute[262220]: 2025-10-08 10:15:40.871 2 DEBUG nova.network.neutron [req-cff2f628-77bf-453b-96fb-95f01c352100 req-cfde9a3d-d9ed-4803-9e8e-e822c178b303 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Updated VIF entry in instance network info cache for port 79d28498-fe9d-49dc-ad2c-bde432b239db. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  8 06:15:40 np0005475493 nova_compute[262220]: 2025-10-08 10:15:40.871 2 DEBUG nova.network.neutron [req-cff2f628-77bf-453b-96fb-95f01c352100 req-cfde9a3d-d9ed-4803-9e8e-e822c178b303 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Updating instance_info_cache with network_info: [{"id": "be4ec274-2a90-48e8-bd51-fd01f3c659da", "address": "fa:16:3e:e6:b0:e0", "network": {"id": "834a886f-bb33-49ed-b47e-ef0308a38e89", "bridge": "br-int", "label": "tempest-network-smoke--1879322246", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.203", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0edb2c85b88b4b168a4b2e8c5ed4a05c", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe4ec274-2a", "ovs_interfaceid": "be4ec274-2a90-48e8-bd51-fd01f3c659da", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "79d28498-fe9d-49dc-ad2c-bde432b239db", "address": "fa:16:3e:40:4d:66", "network": {"id": "0a28a475-c59d-4526-93af-b8af40052e5c", "bridge": "br-int", "label": "tempest-network-smoke--1745814783", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "0edb2c85b88b4b168a4b2e8c5ed4a05c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79d28498-fe", "ovs_interfaceid": "79d28498-fe9d-49dc-ad2c-bde432b239db", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  8 06:15:40 np0005475493 nova_compute[262220]: 2025-10-08 10:15:40.934 2 DEBUG oslo_concurrency.lockutils [req-cff2f628-77bf-453b-96fb-95f01c352100 req-cfde9a3d-d9ed-4803-9e8e-e822c178b303 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Releasing lock "refresh_cache-ea469a2e-bf09-495c-9b5e-02ad38d32d40" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  8 06:15:41 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:15:41 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_44] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c003d00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:15:41 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:15:41 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:15:41 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:15:41.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:15:42 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v933: 353 pgs: 353 active+clean; 200 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 327 KiB/s rd, 2.2 MiB/s wr, 66 op/s
Oct  8 06:15:42 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:15:42 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_43] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac6c0047e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:15:42 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:15:42 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca000b4f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:15:42 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:15:42 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:15:42 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:15:42.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:15:43 np0005475493 nova_compute[262220]: 2025-10-08 10:15:43.018 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:15:43 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:15:43 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac78003d90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:15:43 np0005475493 nova_compute[262220]: 2025-10-08 10:15:43.355 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:15:43 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:15:43 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:15:43 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:15:43.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:15:43 np0005475493 nova_compute[262220]: 2025-10-08 10:15:43.886 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:15:43 np0005475493 nova_compute[262220]: 2025-10-08 10:15:43.886 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:15:43 np0005475493 nova_compute[262220]: 2025-10-08 10:15:43.887 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:15:43 np0005475493 nova_compute[262220]: 2025-10-08 10:15:43.887 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:15:43 np0005475493 nova_compute[262220]: 2025-10-08 10:15:43.924 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  8 06:15:43 np0005475493 nova_compute[262220]: 2025-10-08 10:15:43.925 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  8 06:15:43 np0005475493 nova_compute[262220]: 2025-10-08 10:15:43.925 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  8 06:15:43 np0005475493 nova_compute[262220]: 2025-10-08 10:15:43.926 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  8 06:15:43 np0005475493 nova_compute[262220]: 2025-10-08 10:15:43.926 2 DEBUG oslo_concurrency.processutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  8 06:15:43 np0005475493 podman[277089]: 2025-10-08 10:15:43.936145222 +0000 UTC m=+0.093285445 container health_status 750e81e9592502f2fa35d047c8ae9bd75eb38ed415148db558454f5281397e7f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct  8 06:15:44 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:15:44 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v934: 353 pgs: 353 active+clean; 200 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 327 KiB/s rd, 2.2 MiB/s wr, 67 op/s
Oct  8 06:15:44 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:15:44 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_44] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c003d00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:15:44 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct  8 06:15:44 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/165788996' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  8 06:15:44 np0005475493 nova_compute[262220]: 2025-10-08 10:15:44.367 2 DEBUG oslo_concurrency.processutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  8 06:15:44 np0005475493 nova_compute[262220]: 2025-10-08 10:15:44.524 2 DEBUG nova.virt.libvirt.driver [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] skipping disk for instance-00000006 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  8 06:15:44 np0005475493 nova_compute[262220]: 2025-10-08 10:15:44.525 2 DEBUG nova.virt.libvirt.driver [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] skipping disk for instance-00000006 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  8 06:15:44 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:15:44 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_43] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac6c0047e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:15:44 np0005475493 nova_compute[262220]: 2025-10-08 10:15:44.749 2 WARNING nova.virt.libvirt.driver [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  8 06:15:44 np0005475493 nova_compute[262220]: 2025-10-08 10:15:44.750 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4343MB free_disk=59.89706802368164GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  8 06:15:44 np0005475493 nova_compute[262220]: 2025-10-08 10:15:44.750 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  8 06:15:44 np0005475493 nova_compute[262220]: 2025-10-08 10:15:44.751 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  8 06:15:44 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:15:44 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:15:44 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:15:44.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:15:44 np0005475493 nova_compute[262220]: 2025-10-08 10:15:44.871 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Instance ea469a2e-bf09-495c-9b5e-02ad38d32d40 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  8 06:15:44 np0005475493 nova_compute[262220]: 2025-10-08 10:15:44.871 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  8 06:15:44 np0005475493 nova_compute[262220]: 2025-10-08 10:15:44.871 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  8 06:15:44 np0005475493 nova_compute[262220]: 2025-10-08 10:15:44.911 2 DEBUG oslo_concurrency.processutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  8 06:15:45 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:15:45 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca000b4f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:15:45 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct  8 06:15:45 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3461361738' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  8 06:15:45 np0005475493 nova_compute[262220]: 2025-10-08 10:15:45.395 2 DEBUG oslo_concurrency.processutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  8 06:15:45 np0005475493 nova_compute[262220]: 2025-10-08 10:15:45.403 2 DEBUG nova.compute.provider_tree [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Inventory has not changed in ProviderTree for provider: 62e4b021-d3ae-43f9-883d-805e2c7d21a2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  8 06:15:45 np0005475493 nova_compute[262220]: 2025-10-08 10:15:45.431 2 DEBUG nova.scheduler.client.report [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Inventory has not changed for provider 62e4b021-d3ae-43f9-883d-805e2c7d21a2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  8 06:15:45 np0005475493 nova_compute[262220]: 2025-10-08 10:15:45.433 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  8 06:15:45 np0005475493 nova_compute[262220]: 2025-10-08 10:15:45.434 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.683s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  8 06:15:45 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:15:45 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:15:45 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:15:45.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:15:45 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:15:45] "GET /metrics HTTP/1.1" 200 48474 "" "Prometheus/2.51.0"
Oct  8 06:15:45 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:15:45] "GET /metrics HTTP/1.1" 200 48474 "" "Prometheus/2.51.0"
Oct  8 06:15:46 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v935: 353 pgs: 353 active+clean; 200 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 24 KiB/s wr, 2 op/s
Oct  8 06:15:46 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:15:46 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac78003d90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:15:46 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:15:46 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_44] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c003d00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:15:46 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:15:46 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:15:46 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:15:46.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:15:47 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:15:47.163Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:15:47 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:15:47 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_43] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac6c0047e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:15:47 np0005475493 nova_compute[262220]: 2025-10-08 10:15:47.433 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:15:47 np0005475493 nova_compute[262220]: 2025-10-08 10:15:47.434 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:15:47 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:15:47 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:15:47 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:15:47.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:15:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] Optimize plan auto_2025-10-08_10:15:47
Oct  8 06:15:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  8 06:15:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] do_upmap
Oct  8 06:15:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] pools ['images', 'vms', 'volumes', 'cephfs.cephfs.meta', '.mgr', 'backups', 'cephfs.cephfs.data', '.rgw.root', 'default.rgw.log', '.nfs', 'default.rgw.meta', 'default.rgw.control']
Oct  8 06:15:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] prepared 0/10 upmap changes
Oct  8 06:15:47 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  8 06:15:47 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  8 06:15:47 np0005475493 nova_compute[262220]: 2025-10-08 10:15:47.886 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:15:47 np0005475493 nova_compute[262220]: 2025-10-08 10:15:47.886 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  8 06:15:47 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:15:47 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 06:15:47 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 06:15:47 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 06:15:48 np0005475493 nova_compute[262220]: 2025-10-08 10:15:48.021 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:15:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] _maybe_adjust
Oct  8 06:15:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:15:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  8 06:15:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:15:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.001520898958943804 of space, bias 1.0, pg target 0.4562696876831412 quantized to 32 (current 32)
Oct  8 06:15:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:15:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 06:15:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:15:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 06:15:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:15:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Oct  8 06:15:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:15:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  8 06:15:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:15:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 06:15:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:15:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct  8 06:15:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:15:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  8 06:15:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:15:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 06:15:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:15:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  8 06:15:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:15:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  8 06:15:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  8 06:15:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  8 06:15:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  8 06:15:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  8 06:15:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  8 06:15:48 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 06:15:48 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 06:15:48 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 06:15:48 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 06:15:48 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v936: 353 pgs: 353 active+clean; 200 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 24 KiB/s wr, 2 op/s
Oct  8 06:15:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  8 06:15:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  8 06:15:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  8 06:15:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  8 06:15:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  8 06:15:48 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:15:48 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca000b4f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:15:48 np0005475493 nova_compute[262220]: 2025-10-08 10:15:48.357 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:15:48 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:15:48 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac78003d90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:15:48 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:15:48 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:15:48 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:15:48.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:15:48 np0005475493 nova_compute[262220]: 2025-10-08 10:15:48.887 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:15:48 np0005475493 nova_compute[262220]: 2025-10-08 10:15:48.887 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  8 06:15:48 np0005475493 nova_compute[262220]: 2025-10-08 10:15:48.888 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  8 06:15:49 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:15:49 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:15:49 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_44] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c003d00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:15:49 np0005475493 nova_compute[262220]: 2025-10-08 10:15:49.359 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Acquiring lock "refresh_cache-ea469a2e-bf09-495c-9b5e-02ad38d32d40" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  8 06:15:49 np0005475493 nova_compute[262220]: 2025-10-08 10:15:49.360 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Acquired lock "refresh_cache-ea469a2e-bf09-495c-9b5e-02ad38d32d40" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  8 06:15:49 np0005475493 nova_compute[262220]: 2025-10-08 10:15:49.360 2 DEBUG nova.network.neutron [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  8 06:15:49 np0005475493 nova_compute[262220]: 2025-10-08 10:15:49.360 2 DEBUG nova.objects.instance [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lazy-loading 'info_cache' on Instance uuid ea469a2e-bf09-495c-9b5e-02ad38d32d40 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  8 06:15:49 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:15:49 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:15:49 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:15:49.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:15:50 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v937: 353 pgs: 353 active+clean; 200 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 26 KiB/s wr, 3 op/s
Oct  8 06:15:50 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:15:50 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_43] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac6c0047e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:15:50 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:15:50 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca000b4f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:15:50 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:15:50 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:15:50 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:15:50.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:15:50 np0005475493 podman[277192]: 2025-10-08 10:15:50.91524029 +0000 UTC m=+0.067862357 container health_status 1ece418ccdf19a8019e8be8e665c938dc3c333817a211973536e2416f095c311 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2)
Oct  8 06:15:50 np0005475493 podman[277193]: 2025-10-08 10:15:50.928876989 +0000 UTC m=+0.072616540 container health_status 96c17e36c3e57b043a764fc4d64e20610b8683e4881cc1857643eca7aeb37784 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, 
org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct  8 06:15:51 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:15:51 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac78003d90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:15:51 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:15:51 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:15:51 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:15:51.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:15:52 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v938: 353 pgs: 353 active+clean; 200 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 4.7 KiB/s wr, 1 op/s
Oct  8 06:15:52 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:15:52 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_44] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c003d00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:15:52 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:15:52 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_43] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac6c0047e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:15:52 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:15:52 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:15:52 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:15:52.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:15:53 np0005475493 nova_compute[262220]: 2025-10-08 10:15:53.024 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:15:53 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:15:53 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca000b4f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:15:53 np0005475493 nova_compute[262220]: 2025-10-08 10:15:53.359 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:15:53 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:15:53 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:15:53 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:15:53.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:15:54 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:15:54 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v939: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 8.3 KiB/s wr, 20 op/s
Oct  8 06:15:54 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:15:54 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac780046b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:15:54 np0005475493 nova_compute[262220]: 2025-10-08 10:15:54.390 2 DEBUG nova.network.neutron [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Updating instance_info_cache with network_info: [{"id": "be4ec274-2a90-48e8-bd51-fd01f3c659da", "address": "fa:16:3e:e6:b0:e0", "network": {"id": "834a886f-bb33-49ed-b47e-ef0308a38e89", "bridge": "br-int", "label": "tempest-network-smoke--1879322246", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.203", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0edb2c85b88b4b168a4b2e8c5ed4a05c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe4ec274-2a", "ovs_interfaceid": "be4ec274-2a90-48e8-bd51-fd01f3c659da", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "79d28498-fe9d-49dc-ad2c-bde432b239db", "address": "fa:16:3e:40:4d:66", "network": {"id": "0a28a475-c59d-4526-93af-b8af40052e5c", "bridge": "br-int", "label": "tempest-network-smoke--1745814783", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0edb2c85b88b4b168a4b2e8c5ed4a05c", "mtu": 1442, "physical_network": null, "tunneled": true}}, 
"type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79d28498-fe", "ovs_interfaceid": "79d28498-fe9d-49dc-ad2c-bde432b239db", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  8 06:15:54 np0005475493 nova_compute[262220]: 2025-10-08 10:15:54.413 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Releasing lock "refresh_cache-ea469a2e-bf09-495c-9b5e-02ad38d32d40" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  8 06:15:54 np0005475493 nova_compute[262220]: 2025-10-08 10:15:54.413 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  8 06:15:54 np0005475493 nova_compute[262220]: 2025-10-08 10:15:54.414 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:15:54 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:15:54 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_44] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c003d00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:15:54 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:15:54 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:15:54 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:15:54.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:15:55 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:15:55 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_43] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac6c0047e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:15:55 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:15:55 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:15:55 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:15:55.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:15:55 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:15:55] "GET /metrics HTTP/1.1" 200 48477 "" "Prometheus/2.51.0"
Oct  8 06:15:55 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:15:55] "GET /metrics HTTP/1.1" 200 48477 "" "Prometheus/2.51.0"
Oct  8 06:15:56 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v940: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 6.0 KiB/s wr, 20 op/s
Oct  8 06:15:56 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:15:56 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca000b4f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:15:56 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:15:56 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac780046b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:15:56 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:15:56 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:15:56 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:15:56.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:15:57 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:15:57.164Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:15:57 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:15:57 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_44] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c003d00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:15:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:15:57.415 163175 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  8 06:15:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:15:57.415 163175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  8 06:15:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:15:57.416 163175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  8 06:15:57 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:15:57 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:15:57 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:15:57.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:15:57 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:15:57 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 06:15:58 np0005475493 nova_compute[262220]: 2025-10-08 10:15:58.027 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:15:58 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v941: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 6.0 KiB/s wr, 20 op/s
Oct  8 06:15:58 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:15:58 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_43] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac6c0047e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:15:58 np0005475493 nova_compute[262220]: 2025-10-08 10:15:58.361 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:15:58 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:15:58 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_45] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac6c0047e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:15:58 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:15:58 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:15:58 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:15:58.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:15:59 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:15:59 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:15:59 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac70004180 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:15:59 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:15:59 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:15:59 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:15:59.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:16:00 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v942: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 92 KiB/s rd, 8.0 KiB/s wr, 154 op/s
Oct  8 06:16:00 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:16:00 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_46] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c003d00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:16:00 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:16:00 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_43] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac940039d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:16:00 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:16:00 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 06:16:00 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:16:00.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 06:16:01 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:16:01 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_45] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac6c0047e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:16:01 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:16:01 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:16:01 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:16:01.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:16:02 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v943: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 92 KiB/s rd, 5.7 KiB/s wr, 153 op/s
Oct  8 06:16:02 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:16:02 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac70004180 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:16:02 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:16:02 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_46] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c003d00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:16:02 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:16:02 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 06:16:02 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:16:02.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 06:16:02 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  8 06:16:02 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  8 06:16:03 np0005475493 nova_compute[262220]: 2025-10-08 10:16:03.030 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:16:03 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:16:03 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_43] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac940039d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:16:03 np0005475493 nova_compute[262220]: 2025-10-08 10:16:03.394 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:16:03 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:16:03 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:16:03 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:16:03.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:16:03 np0005475493 podman[277244]: 2025-10-08 10:16:03.904691016 +0000 UTC m=+0.061081499 container health_status 2ac58bf9d2c7421f24478941b0f23af12cc30298ac84467db16bf52ecb1157c3 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=iscsid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid)
Oct  8 06:16:04 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:16:04 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v944: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 92 KiB/s rd, 6.0 KiB/s wr, 153 op/s
Oct  8 06:16:04 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:16:04 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_45] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac6c0047e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:16:04 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:16:04 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac70004180 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:16:04 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:16:04 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:16:04 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:16:04.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:16:05 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:16:05 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_46] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c003d00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:16:05 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:16:05 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 06:16:05 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:16:05.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 06:16:05 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:16:05] "GET /metrics HTTP/1.1" 200 48475 "" "Prometheus/2.51.0"
Oct  8 06:16:05 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:16:05] "GET /metrics HTTP/1.1" 200 48475 "" "Prometheus/2.51.0"
Oct  8 06:16:06 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v945: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 81 KiB/s rd, 2.3 KiB/s wr, 134 op/s
Oct  8 06:16:06 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:16:06 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_43] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac940039d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:16:06 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:16:06 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_45] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac6c0047e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:16:06 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:16:06 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 06:16:06 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:16:06.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 06:16:07 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:16:07.165Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:16:07 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:16:07 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac70004180 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:16:07 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:16:07 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:16:07 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:16:07.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:16:07 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:16:07 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 06:16:08 np0005475493 nova_compute[262220]: 2025-10-08 10:16:08.033 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:16:08 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v946: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 81 KiB/s rd, 2.3 KiB/s wr, 134 op/s
Oct  8 06:16:08 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:16:08 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_46] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c003d00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:16:08 np0005475493 nova_compute[262220]: 2025-10-08 10:16:08.396 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:16:08 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:16:08 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_43] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac940039d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:16:08 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:16:08 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 06:16:08 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:16:08.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 06:16:09 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:16:09 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:16:09 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_45] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac6c0047e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:16:09 np0005475493 ovn_controller[153187]: 2025-10-08T10:16:09Z|00049|memory_trim|INFO|Detected inactivity (last active 30025 ms ago): trimming memory
Oct  8 06:16:09 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:16:09 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:16:09 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:16:09.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:16:10 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v947: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 81 KiB/s rd, 8.0 KiB/s wr, 135 op/s
Oct  8 06:16:10 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:16:10 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac70004180 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:16:10 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:16:10 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_46] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c003d00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:16:10 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:16:10 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:16:10 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:16:10.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:16:11 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:16:11 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_43] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac940039d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:16:11 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:16:11 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:16:11 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:16:11.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:16:12 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v948: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 6.0 KiB/s wr, 1 op/s
Oct  8 06:16:12 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:16:12 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_45] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac6c0047e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:16:12 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:16:12 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac70004180 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:16:12 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:16:12 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:16:12 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:16:12.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:16:13 np0005475493 nova_compute[262220]: 2025-10-08 10:16:13.035 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:16:13 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:16:13 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_46] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c003d00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:16:13 np0005475493 nova_compute[262220]: 2025-10-08 10:16:13.440 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:16:13 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:16:13 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:16:13 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:16:13.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:16:14 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:16:14 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v949: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 7.3 KiB/s wr, 2 op/s
Oct  8 06:16:14 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:16:14 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_43] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac940039d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:16:14 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:16:14 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_45] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac6c0047e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:16:14 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:16:14 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 06:16:14 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:16:14.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 06:16:14 np0005475493 podman[277302]: 2025-10-08 10:16:14.94346933 +0000 UTC m=+0.106545883 container health_status 750e81e9592502f2fa35d047c8ae9bd75eb38ed415148db558454f5281397e7f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  8 06:16:15 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:16:15 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac70004180 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:16:15 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:16:15 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:16:15 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:16:15.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:16:15 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:16:15] "GET /metrics HTTP/1.1" 200 48475 "" "Prometheus/2.51.0"
Oct  8 06:16:15 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:16:15] "GET /metrics HTTP/1.1" 200 48475 "" "Prometheus/2.51.0"
Oct  8 06:16:16 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v950: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 7.0 KiB/s wr, 1 op/s
Oct  8 06:16:16 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:16:16 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_46] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c003d00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:16:16 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:16:16 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_43] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac940039d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:16:16 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:16:16 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:16:16 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:16:16.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:16:17 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:16:17.165Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:16:17 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:16:17 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_45] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac6c0047e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:16:17 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:16:17 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:16:17 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:16:17.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:16:17 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  8 06:16:17 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  8 06:16:17 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:16:17 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 06:16:17 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 06:16:17 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 06:16:18 np0005475493 nova_compute[262220]: 2025-10-08 10:16:18.037 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:16:18 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 06:16:18 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 06:16:18 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 06:16:18 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 06:16:18 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v951: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 7.0 KiB/s wr, 1 op/s
Oct  8 06:16:18 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:16:18 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac70004180 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:16:18 np0005475493 nova_compute[262220]: 2025-10-08 10:16:18.442 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:16:18 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:16:18 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_46] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c003d00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:16:18 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:16:18 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 06:16:18 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:16:18.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 06:16:19 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:16:19 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:16:19 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_43] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac940039d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:16:19 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:16:19 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:16:19 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:16:19.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:16:20 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v952: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 8.3 KiB/s wr, 2 op/s
Oct  8 06:16:20 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:16:20 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_45] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac6c0047e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:16:20 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:16:20 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac70004180 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:16:20 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:16:20 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:16:20 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:16:20.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:16:21 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:16:21 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_46] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c003d00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:16:21 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:16:21 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:16:21 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:16:21.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:16:21 np0005475493 podman[277336]: 2025-10-08 10:16:21.898786971 +0000 UTC m=+0.056464380 container health_status 1ece418ccdf19a8019e8be8e665c938dc3c333817a211973536e2416f095c311 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=multipathd, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct  8 06:16:21 np0005475493 podman[277337]: 2025-10-08 10:16:21.919918222 +0000 UTC m=+0.076839367 container health_status 96c17e36c3e57b043a764fc4d64e20610b8683e4881cc1857643eca7aeb37784 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  8 06:16:22 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v953: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 2.7 KiB/s wr, 1 op/s
Oct  8 06:16:22 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:16:22 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_43] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac940039d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:16:22 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:16:22 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac940039d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:16:22 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:16:22 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:16:22 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:16:22.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:16:23 np0005475493 nova_compute[262220]: 2025-10-08 10:16:23.040 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:16:23 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:16:23 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac70004180 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:16:23 np0005475493 nova_compute[262220]: 2025-10-08 10:16:23.444 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:16:23 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:16:23 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:16:23 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:16:23.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:16:24 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:16:24 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v954: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 14 KiB/s wr, 3 op/s
Oct  8 06:16:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:16:24 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_46] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c003d00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:16:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:16:24 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_43] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac940039d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:16:24 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:16:24 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:16:24 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:16:24.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:16:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:16:25 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac940039d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:16:25 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:16:25 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:16:25 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:16:25.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:16:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:16:25] "GET /metrics HTTP/1.1" 200 48476 "" "Prometheus/2.51.0"
Oct  8 06:16:25 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:16:25] "GET /metrics HTTP/1.1" 200 48476 "" "Prometheus/2.51.0"
Oct  8 06:16:26 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v955: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 13 KiB/s wr, 2 op/s
Oct  8 06:16:26 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:16:26 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac70004180 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:16:26 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:16:26 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_46] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c003d00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:16:26 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:16:26 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:16:26 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:16:26.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:16:27 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:16:27.166Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct  8 06:16:27 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:16:27.166Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct  8 06:16:27 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:16:27.166Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:16:27 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:16:27 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_43] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac940039d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:16:27 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:16:27 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 06:16:27 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:16:27.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 06:16:27 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:16:27 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 06:16:28 np0005475493 nova_compute[262220]: 2025-10-08 10:16:28.043 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:16:28 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v956: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 13 KiB/s wr, 2 op/s
Oct  8 06:16:28 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:16:28 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac940039d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:16:28 np0005475493 nova_compute[262220]: 2025-10-08 10:16:28.446 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:16:28 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:16:28 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac70004180 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:16:28 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:16:28.839 163175 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=10, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '2a:d6:ca', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'b2:8b:1e:40:84:a3'}, ipsec=False) old=SB_Global(nb_cfg=9) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  8 06:16:28 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:16:28.840 163175 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  8 06:16:28 np0005475493 nova_compute[262220]: 2025-10-08 10:16:28.840 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:16:28 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:16:28 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:16:28 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:16:28.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:16:29 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:16:29 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:16:29 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_46] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac7c003d00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:16:29 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:16:29 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:16:29 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:16:29.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:16:30 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v957: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 15 KiB/s wr, 31 op/s
Oct  8 06:16:30 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:16:30 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_43] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac940039d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:16:30 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:16:30 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_45] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac6c0047e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:16:30 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:16:30 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 06:16:30 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:16:30.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 06:16:31 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:16:31 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac780031c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:16:31 np0005475493 nova_compute[262220]: 2025-10-08 10:16:31.274 2 DEBUG oslo_concurrency.lockutils [None req-5740ff17-4652-44c3-8c4a-ffb46016cf22 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Acquiring lock "interface-ea469a2e-bf09-495c-9b5e-02ad38d32d40-79d28498-fe9d-49dc-ad2c-bde432b239db" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  8 06:16:31 np0005475493 nova_compute[262220]: 2025-10-08 10:16:31.275 2 DEBUG oslo_concurrency.lockutils [None req-5740ff17-4652-44c3-8c4a-ffb46016cf22 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Lock "interface-ea469a2e-bf09-495c-9b5e-02ad38d32d40-79d28498-fe9d-49dc-ad2c-bde432b239db" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  8 06:16:31 np0005475493 nova_compute[262220]: 2025-10-08 10:16:31.303 2 DEBUG nova.objects.instance [None req-5740ff17-4652-44c3-8c4a-ffb46016cf22 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Lazy-loading 'flavor' on Instance uuid ea469a2e-bf09-495c-9b5e-02ad38d32d40 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  8 06:16:31 np0005475493 nova_compute[262220]: 2025-10-08 10:16:31.561 2 DEBUG nova.virt.libvirt.vif [None req-5740ff17-4652-44c3-8c4a-ffb46016cf22 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-08T10:14:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1473882269',display_name='tempest-TestNetworkBasicOps-server-1473882269',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1473882269',id=6,image_ref='e5994bac-385d-4cfe-962e-386aa0559983',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLR6MyJBOMYp2wYGaL6G2mEbJcrZztqNeLLTecSXpMa+10UdVkrFMZxX+qiOC0ccFZIJleCiHIYc6JXNFg7vRmgJ0JtTU6W+KAYc8u1JxRO51IwGJ30ByO68fx1sTbOqEg==',key_name='tempest-TestNetworkBasicOps-1715641436',keypairs=<?>,launch_index=0,launched_at=2025-10-08T10:14:35Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='0edb2c85b88b4b168a4b2e8c5ed4a05c',ramdisk_id='',reservation_id='r-b2owsx2f',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e5994bac-385d-4cfe-962e-386aa0559983',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-139500885',owner_user_name='tempest-TestNetworkBasicOps-139500885-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-08T10:14:35Z,user_data=None,user_id='d50b19166a7245e390a6e29682191263',uuid=ea469a2e-bf09-495c-9b5e-02ad38d32d40,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "79d28498-fe9d-49dc-ad2c-bde432b239db", "address": "fa:16:3e:40:4d:66", "network": {"id": "0a28a475-c59d-4526-93af-b8af40052e5c", "bridge": "br-int", "label": "tempest-network-smoke--1745814783", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.23", "type": "fixed", "version": 4, 
"meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0edb2c85b88b4b168a4b2e8c5ed4a05c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79d28498-fe", "ovs_interfaceid": "79d28498-fe9d-49dc-ad2c-bde432b239db", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  8 06:16:31 np0005475493 nova_compute[262220]: 2025-10-08 10:16:31.562 2 DEBUG nova.network.os_vif_util [None req-5740ff17-4652-44c3-8c4a-ffb46016cf22 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Converting VIF {"id": "79d28498-fe9d-49dc-ad2c-bde432b239db", "address": "fa:16:3e:40:4d:66", "network": {"id": "0a28a475-c59d-4526-93af-b8af40052e5c", "bridge": "br-int", "label": "tempest-network-smoke--1745814783", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0edb2c85b88b4b168a4b2e8c5ed4a05c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79d28498-fe", "ovs_interfaceid": "79d28498-fe9d-49dc-ad2c-bde432b239db", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  8 06:16:31 np0005475493 nova_compute[262220]: 2025-10-08 10:16:31.562 2 DEBUG nova.network.os_vif_util [None req-5740ff17-4652-44c3-8c4a-ffb46016cf22 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:40:4d:66,bridge_name='br-int',has_traffic_filtering=True,id=79d28498-fe9d-49dc-ad2c-bde432b239db,network=Network(0a28a475-c59d-4526-93af-b8af40052e5c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap79d28498-fe') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  8 06:16:31 np0005475493 nova_compute[262220]: 2025-10-08 10:16:31.567 2 DEBUG nova.virt.libvirt.guest [None req-5740ff17-4652-44c3-8c4a-ffb46016cf22 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:40:4d:66"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap79d28498-fe"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Oct  8 06:16:31 np0005475493 nova_compute[262220]: 2025-10-08 10:16:31.569 2 DEBUG nova.virt.libvirt.guest [None req-5740ff17-4652-44c3-8c4a-ffb46016cf22 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:40:4d:66"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap79d28498-fe"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Oct  8 06:16:31 np0005475493 nova_compute[262220]: 2025-10-08 10:16:31.572 2 DEBUG nova.virt.libvirt.driver [None req-5740ff17-4652-44c3-8c4a-ffb46016cf22 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Attempting to detach device tap79d28498-fe from instance ea469a2e-bf09-495c-9b5e-02ad38d32d40 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Oct  8 06:16:31 np0005475493 nova_compute[262220]: 2025-10-08 10:16:31.572 2 DEBUG nova.virt.libvirt.guest [None req-5740ff17-4652-44c3-8c4a-ffb46016cf22 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] detach device xml: <interface type="ethernet">
Oct  8 06:16:31 np0005475493 nova_compute[262220]:  <mac address="fa:16:3e:40:4d:66"/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:  <model type="virtio"/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:  <driver name="vhost" rx_queue_size="512"/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:  <mtu size="1442"/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:  <target dev="tap79d28498-fe"/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]: </interface>
Oct  8 06:16:31 np0005475493 nova_compute[262220]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Oct  8 06:16:31 np0005475493 nova_compute[262220]: 2025-10-08 10:16:31.582 2 DEBUG nova.virt.libvirt.guest [None req-5740ff17-4652-44c3-8c4a-ffb46016cf22 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:40:4d:66"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap79d28498-fe"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Oct  8 06:16:31 np0005475493 nova_compute[262220]: 2025-10-08 10:16:31.585 2 DEBUG nova.virt.libvirt.guest [None req-5740ff17-4652-44c3-8c4a-ffb46016cf22 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:40:4d:66"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap79d28498-fe"/></interface>not found in domain: <domain type='kvm' id='2'>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:  <name>instance-00000006</name>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:  <uuid>ea469a2e-bf09-495c-9b5e-02ad38d32d40</uuid>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:  <metadata>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  8 06:16:31 np0005475493 nova_compute[262220]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:  <nova:name>tempest-TestNetworkBasicOps-server-1473882269</nova:name>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:  <nova:creationTime>2025-10-08 10:15:03</nova:creationTime>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:  <nova:flavor name="m1.nano">
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <nova:memory>128</nova:memory>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <nova:disk>1</nova:disk>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <nova:swap>0</nova:swap>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <nova:ephemeral>0</nova:ephemeral>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <nova:vcpus>1</nova:vcpus>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:  </nova:flavor>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:  <nova:owner>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <nova:user uuid="d50b19166a7245e390a6e29682191263">tempest-TestNetworkBasicOps-139500885-project-member</nova:user>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <nova:project uuid="0edb2c85b88b4b168a4b2e8c5ed4a05c">tempest-TestNetworkBasicOps-139500885</nova:project>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:  </nova:owner>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:  <nova:root type="image" uuid="e5994bac-385d-4cfe-962e-386aa0559983"/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:  <nova:ports>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <nova:port uuid="be4ec274-2a90-48e8-bd51-fd01f3c659da">
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    </nova:port>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <nova:port uuid="79d28498-fe9d-49dc-ad2c-bde432b239db">
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <nova:ip type="fixed" address="10.100.0.23" ipVersion="4"/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    </nova:port>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:  </nova:ports>
Oct  8 06:16:31 np0005475493 nova_compute[262220]: </nova:instance>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:  </metadata>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:  <memory unit='KiB'>131072</memory>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:  <currentMemory unit='KiB'>131072</currentMemory>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:  <vcpu placement='static'>1</vcpu>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:  <resource>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <partition>/machine</partition>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:  </resource>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:  <sysinfo type='smbios'>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <system>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <entry name='manufacturer'>RDO</entry>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <entry name='product'>OpenStack Compute</entry>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <entry name='serial'>ea469a2e-bf09-495c-9b5e-02ad38d32d40</entry>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <entry name='uuid'>ea469a2e-bf09-495c-9b5e-02ad38d32d40</entry>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <entry name='family'>Virtual Machine</entry>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    </system>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:  </sysinfo>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:  <os>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <boot dev='hd'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <smbios mode='sysinfo'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:  </os>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:  <features>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <acpi/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <apic/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <vmcoreinfo state='on'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:  </features>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:  <cpu mode='custom' match='exact' check='full'>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <model fallback='forbid'>EPYC-Rome</model>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <vendor>AMD</vendor>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <feature policy='require' name='x2apic'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <feature policy='require' name='tsc-deadline'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <feature policy='require' name='hypervisor'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <feature policy='require' name='tsc_adjust'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <feature policy='require' name='spec-ctrl'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <feature policy='require' name='stibp'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <feature policy='require' name='arch-capabilities'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <feature policy='require' name='ssbd'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <feature policy='require' name='cmp_legacy'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <feature policy='require' name='overflow-recov'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <feature policy='require' name='succor'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <feature policy='require' name='ibrs'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <feature policy='require' name='amd-ssbd'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <feature policy='require' name='virt-ssbd'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <feature policy='disable' name='lbrv'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <feature policy='disable' name='tsc-scale'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <feature policy='disable' name='vmcb-clean'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <feature policy='disable' name='flushbyasid'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <feature policy='disable' name='pause-filter'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <feature policy='disable' name='pfthreshold'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <feature policy='disable' name='svme-addr-chk'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <feature policy='require' name='lfence-always-serializing'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <feature policy='require' name='rdctl-no'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <feature policy='require' name='mds-no'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <feature policy='require' name='pschange-mc-no'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <feature policy='require' name='gds-no'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <feature policy='require' name='rfds-no'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <feature policy='disable' name='xsaves'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <feature policy='disable' name='svm'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <feature policy='require' name='topoext'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <feature policy='disable' name='npt'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <feature policy='disable' name='nrip-save'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:  </cpu>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:  <clock offset='utc'>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <timer name='pit' tickpolicy='delay'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <timer name='rtc' tickpolicy='catchup'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <timer name='hpet' present='no'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:  </clock>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:  <on_poweroff>destroy</on_poweroff>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:  <on_reboot>restart</on_reboot>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:  <on_crash>destroy</on_crash>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:  <devices>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <disk type='network' device='disk'>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <driver name='qemu' type='raw' cache='none'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <auth username='openstack'>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:        <secret type='ceph' uuid='787292cc-8154-50c4-9e00-e9be3e817149'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      </auth>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <source protocol='rbd' name='vms/ea469a2e-bf09-495c-9b5e-02ad38d32d40_disk' index='2'>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:        <host name='192.168.122.100' port='6789'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:        <host name='192.168.122.102' port='6789'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:        <host name='192.168.122.101' port='6789'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      </source>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <target dev='vda' bus='virtio'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <alias name='virtio-disk0'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    </disk>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <disk type='network' device='cdrom'>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <driver name='qemu' type='raw' cache='none'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <auth username='openstack'>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:        <secret type='ceph' uuid='787292cc-8154-50c4-9e00-e9be3e817149'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      </auth>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <source protocol='rbd' name='vms/ea469a2e-bf09-495c-9b5e-02ad38d32d40_disk.config' index='1'>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:        <host name='192.168.122.100' port='6789'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:        <host name='192.168.122.102' port='6789'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:        <host name='192.168.122.101' port='6789'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      </source>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <target dev='sda' bus='sata'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <readonly/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <alias name='sata0-0-0'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    </disk>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <controller type='pci' index='0' model='pcie-root'>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <alias name='pcie.0'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    </controller>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <controller type='pci' index='1' model='pcie-root-port'>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <model name='pcie-root-port'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <target chassis='1' port='0x10'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <alias name='pci.1'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    </controller>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <controller type='pci' index='2' model='pcie-root-port'>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <model name='pcie-root-port'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <target chassis='2' port='0x11'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <alias name='pci.2'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    </controller>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <controller type='pci' index='3' model='pcie-root-port'>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <model name='pcie-root-port'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <target chassis='3' port='0x12'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <alias name='pci.3'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    </controller>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <controller type='pci' index='4' model='pcie-root-port'>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <model name='pcie-root-port'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <target chassis='4' port='0x13'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <alias name='pci.4'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    </controller>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <controller type='pci' index='5' model='pcie-root-port'>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <model name='pcie-root-port'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <target chassis='5' port='0x14'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <alias name='pci.5'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    </controller>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <controller type='pci' index='6' model='pcie-root-port'>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <model name='pcie-root-port'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <target chassis='6' port='0x15'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <alias name='pci.6'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    </controller>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <controller type='pci' index='7' model='pcie-root-port'>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <model name='pcie-root-port'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <target chassis='7' port='0x16'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <alias name='pci.7'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    </controller>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <controller type='pci' index='8' model='pcie-root-port'>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <model name='pcie-root-port'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <target chassis='8' port='0x17'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <alias name='pci.8'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    </controller>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <controller type='pci' index='9' model='pcie-root-port'>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <model name='pcie-root-port'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <target chassis='9' port='0x18'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <alias name='pci.9'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    </controller>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <controller type='pci' index='10' model='pcie-root-port'>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <model name='pcie-root-port'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <target chassis='10' port='0x19'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <alias name='pci.10'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    </controller>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <controller type='pci' index='11' model='pcie-root-port'>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <model name='pcie-root-port'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <target chassis='11' port='0x1a'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <alias name='pci.11'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    </controller>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <controller type='pci' index='12' model='pcie-root-port'>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <model name='pcie-root-port'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <target chassis='12' port='0x1b'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <alias name='pci.12'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    </controller>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <controller type='pci' index='13' model='pcie-root-port'>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <model name='pcie-root-port'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <target chassis='13' port='0x1c'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <alias name='pci.13'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    </controller>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <controller type='pci' index='14' model='pcie-root-port'>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <model name='pcie-root-port'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <target chassis='14' port='0x1d'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <alias name='pci.14'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    </controller>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <controller type='pci' index='15' model='pcie-root-port'>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <model name='pcie-root-port'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <target chassis='15' port='0x1e'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <alias name='pci.15'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    </controller>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <controller type='pci' index='16' model='pcie-root-port'>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <model name='pcie-root-port'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <target chassis='16' port='0x1f'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <alias name='pci.16'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    </controller>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <controller type='pci' index='17' model='pcie-root-port'>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <model name='pcie-root-port'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <target chassis='17' port='0x20'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <alias name='pci.17'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    </controller>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <controller type='pci' index='18' model='pcie-root-port'>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <model name='pcie-root-port'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <target chassis='18' port='0x21'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <alias name='pci.18'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    </controller>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <controller type='pci' index='19' model='pcie-root-port'>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <model name='pcie-root-port'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <target chassis='19' port='0x22'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <alias name='pci.19'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    </controller>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <controller type='pci' index='20' model='pcie-root-port'>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <model name='pcie-root-port'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <target chassis='20' port='0x23'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <alias name='pci.20'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    </controller>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <controller type='pci' index='21' model='pcie-root-port'>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <model name='pcie-root-port'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <target chassis='21' port='0x24'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <alias name='pci.21'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    </controller>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <controller type='pci' index='22' model='pcie-root-port'>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <model name='pcie-root-port'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <target chassis='22' port='0x25'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <alias name='pci.22'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    </controller>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <controller type='pci' index='23' model='pcie-root-port'>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <model name='pcie-root-port'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <target chassis='23' port='0x26'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <alias name='pci.23'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    </controller>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <controller type='pci' index='24' model='pcie-root-port'>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <model name='pcie-root-port'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <target chassis='24' port='0x27'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <alias name='pci.24'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    </controller>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <controller type='pci' index='25' model='pcie-root-port'>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <model name='pcie-root-port'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <target chassis='25' port='0x28'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <alias name='pci.25'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    </controller>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <model name='pcie-pci-bridge'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <alias name='pci.26'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    </controller>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <controller type='usb' index='0' model='piix3-uhci'>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <alias name='usb'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    </controller>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <controller type='sata' index='0'>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <alias name='ide'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    </controller>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <interface type='ethernet'>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <mac address='fa:16:3e:e6:b0:e0'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <target dev='tapbe4ec274-2a'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <model type='virtio'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <driver name='vhost' rx_queue_size='512'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <mtu size='1442'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <alias name='net0'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    </interface>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <interface type='ethernet'>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <mac address='fa:16:3e:40:4d:66'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <target dev='tap79d28498-fe'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <model type='virtio'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <driver name='vhost' rx_queue_size='512'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <mtu size='1442'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <alias name='net1'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    </interface>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <serial type='pty'>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <source path='/dev/pts/0'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <log file='/var/lib/nova/instances/ea469a2e-bf09-495c-9b5e-02ad38d32d40/console.log' append='off'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <target type='isa-serial' port='0'>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:        <model name='isa-serial'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      </target>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <alias name='serial0'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    </serial>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <console type='pty' tty='/dev/pts/0'>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <source path='/dev/pts/0'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <log file='/var/lib/nova/instances/ea469a2e-bf09-495c-9b5e-02ad38d32d40/console.log' append='off'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <target type='serial' port='0'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <alias name='serial0'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    </console>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <input type='tablet' bus='usb'>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <alias name='input0'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <address type='usb' bus='0' port='1'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    </input>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <input type='mouse' bus='ps2'>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <alias name='input1'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    </input>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <input type='keyboard' bus='ps2'>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <alias name='input2'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    </input>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <listen type='address' address='::0'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    </graphics>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <audio id='1' type='none'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <video>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <model type='virtio' heads='1' primary='yes'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <alias name='video0'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    </video>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <watchdog model='itco' action='reset'>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <alias name='watchdog0'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    </watchdog>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <memballoon model='virtio'>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <stats period='10'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <alias name='balloon0'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    </memballoon>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <rng model='virtio'>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <backend model='random'>/dev/urandom</backend>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <alias name='rng0'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    </rng>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:  </devices>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <label>system_u:system_r:svirt_t:s0:c144,c208</label>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c144,c208</imagelabel>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:  </seclabel>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <label>+107:+107</label>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <imagelabel>+107:+107</imagelabel>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:  </seclabel>
Oct  8 06:16:31 np0005475493 nova_compute[262220]: </domain>
Oct  8 06:16:31 np0005475493 nova_compute[262220]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Oct  8 06:16:31 np0005475493 nova_compute[262220]: 2025-10-08 10:16:31.588 2 INFO nova.virt.libvirt.driver [None req-5740ff17-4652-44c3-8c4a-ffb46016cf22 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Successfully detached device tap79d28498-fe from instance ea469a2e-bf09-495c-9b5e-02ad38d32d40 from the persistent domain config.
Oct  8 06:16:31 np0005475493 nova_compute[262220]: 2025-10-08 10:16:31.588 2 DEBUG nova.virt.libvirt.driver [None req-5740ff17-4652-44c3-8c4a-ffb46016cf22 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] (1/8): Attempting to detach device tap79d28498-fe with device alias net1 from instance ea469a2e-bf09-495c-9b5e-02ad38d32d40 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Oct  8 06:16:31 np0005475493 nova_compute[262220]: 2025-10-08 10:16:31.589 2 DEBUG nova.virt.libvirt.guest [None req-5740ff17-4652-44c3-8c4a-ffb46016cf22 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] detach device xml: <interface type="ethernet">
Oct  8 06:16:31 np0005475493 nova_compute[262220]:  <mac address="fa:16:3e:40:4d:66"/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:  <model type="virtio"/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:  <driver name="vhost" rx_queue_size="512"/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:  <mtu size="1442"/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:  <target dev="tap79d28498-fe"/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]: </interface>
Oct  8 06:16:31 np0005475493 nova_compute[262220]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct  8 06:16:31 np0005475493 kernel: tap79d28498-fe (unregistering): left promiscuous mode
Oct  8 06:16:31 np0005475493 NetworkManager[44872]: <info>  [1759918591.6477] device (tap79d28498-fe): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  8 06:16:31 np0005475493 ovn_controller[153187]: 2025-10-08T10:16:31Z|00050|binding|INFO|Releasing lport 79d28498-fe9d-49dc-ad2c-bde432b239db from this chassis (sb_readonly=0)
Oct  8 06:16:31 np0005475493 nova_compute[262220]: 2025-10-08 10:16:31.656 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  8 06:16:31 np0005475493 ovn_controller[153187]: 2025-10-08T10:16:31Z|00051|binding|INFO|Setting lport 79d28498-fe9d-49dc-ad2c-bde432b239db down in Southbound
Oct  8 06:16:31 np0005475493 ovn_controller[153187]: 2025-10-08T10:16:31Z|00052|binding|INFO|Removing iface tap79d28498-fe ovn-installed in OVS
Oct  8 06:16:31 np0005475493 nova_compute[262220]: 2025-10-08 10:16:31.661 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  8 06:16:31 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:16:31.668 163175 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:40:4d:66 10.100.0.23', 'unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.23/28', 'neutron:device_id': 'ea469a2e-bf09-495c-9b5e-02ad38d32d40', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0a28a475-c59d-4526-93af-b8af40052e5c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0edb2c85b88b4b168a4b2e8c5ed4a05c', 'neutron:revision_number': '5', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f6ba97cc-1c15-47ba-aa89-c964fcf23523, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f191f105a60>], logical_port=79d28498-fe9d-49dc-ad2c-bde432b239db) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f191f105a60>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct  8 06:16:31 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:16:31.669 163175 INFO neutron.agent.ovn.metadata.agent [-] Port 79d28498-fe9d-49dc-ad2c-bde432b239db in datapath 0a28a475-c59d-4526-93af-b8af40052e5c unbound from our chassis
Oct  8 06:16:31 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:16:31.670 163175 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0a28a475-c59d-4526-93af-b8af40052e5c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct  8 06:16:31 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:16:31.672 267781 DEBUG oslo.privsep.daemon [-] privsep: reply[fab4d5d7-cadc-4724-b9c2-7d7970a53a8a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  8 06:16:31 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:16:31.672 163175 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-0a28a475-c59d-4526-93af-b8af40052e5c namespace which is not needed anymore
Oct  8 06:16:31 np0005475493 nova_compute[262220]: 2025-10-08 10:16:31.674 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  8 06:16:31 np0005475493 nova_compute[262220]: 2025-10-08 10:16:31.675 2 DEBUG nova.virt.libvirt.driver [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] Received event <DeviceRemovedEvent: 1759918591.6748781, ea469a2e-bf09-495c-9b5e-02ad38d32d40 => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Oct  8 06:16:31 np0005475493 nova_compute[262220]: 2025-10-08 10:16:31.677 2 DEBUG nova.virt.libvirt.driver [None req-5740ff17-4652-44c3-8c4a-ffb46016cf22 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Start waiting for the detach event from libvirt for device tap79d28498-fe with device alias net1 for instance ea469a2e-bf09-495c-9b5e-02ad38d32d40 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Oct  8 06:16:31 np0005475493 nova_compute[262220]: 2025-10-08 10:16:31.677 2 DEBUG nova.virt.libvirt.guest [None req-5740ff17-4652-44c3-8c4a-ffb46016cf22 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:40:4d:66"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap79d28498-fe"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct  8 06:16:31 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:16:31 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:16:31 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:16:31.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:16:31 np0005475493 nova_compute[262220]: 2025-10-08 10:16:31.687 2 DEBUG nova.virt.libvirt.guest [None req-5740ff17-4652-44c3-8c4a-ffb46016cf22 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:40:4d:66"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap79d28498-fe"/></interface>not found in domain: <domain type='kvm' id='2'>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:  <name>instance-00000006</name>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:  <uuid>ea469a2e-bf09-495c-9b5e-02ad38d32d40</uuid>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:  <metadata>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  8 06:16:31 np0005475493 nova_compute[262220]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:  <nova:name>tempest-TestNetworkBasicOps-server-1473882269</nova:name>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:  <nova:creationTime>2025-10-08 10:15:03</nova:creationTime>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:  <nova:flavor name="m1.nano">
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <nova:memory>128</nova:memory>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <nova:disk>1</nova:disk>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <nova:swap>0</nova:swap>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <nova:ephemeral>0</nova:ephemeral>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <nova:vcpus>1</nova:vcpus>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:  </nova:flavor>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:  <nova:owner>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <nova:user uuid="d50b19166a7245e390a6e29682191263">tempest-TestNetworkBasicOps-139500885-project-member</nova:user>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <nova:project uuid="0edb2c85b88b4b168a4b2e8c5ed4a05c">tempest-TestNetworkBasicOps-139500885</nova:project>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:  </nova:owner>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:  <nova:root type="image" uuid="e5994bac-385d-4cfe-962e-386aa0559983"/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:  <nova:ports>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <nova:port uuid="be4ec274-2a90-48e8-bd51-fd01f3c659da">
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    </nova:port>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <nova:port uuid="79d28498-fe9d-49dc-ad2c-bde432b239db">
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <nova:ip type="fixed" address="10.100.0.23" ipVersion="4"/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    </nova:port>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:  </nova:ports>
Oct  8 06:16:31 np0005475493 nova_compute[262220]: </nova:instance>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:  </metadata>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:  <memory unit='KiB'>131072</memory>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:  <currentMemory unit='KiB'>131072</currentMemory>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:  <vcpu placement='static'>1</vcpu>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:  <resource>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <partition>/machine</partition>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:  </resource>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:  <sysinfo type='smbios'>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <system>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <entry name='manufacturer'>RDO</entry>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <entry name='product'>OpenStack Compute</entry>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <entry name='serial'>ea469a2e-bf09-495c-9b5e-02ad38d32d40</entry>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <entry name='uuid'>ea469a2e-bf09-495c-9b5e-02ad38d32d40</entry>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <entry name='family'>Virtual Machine</entry>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    </system>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:  </sysinfo>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:  <os>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <boot dev='hd'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <smbios mode='sysinfo'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:  </os>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:  <features>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <acpi/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <apic/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <vmcoreinfo state='on'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:  </features>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:  <cpu mode='custom' match='exact' check='full'>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <model fallback='forbid'>EPYC-Rome</model>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <vendor>AMD</vendor>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <feature policy='require' name='x2apic'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <feature policy='require' name='tsc-deadline'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <feature policy='require' name='hypervisor'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <feature policy='require' name='tsc_adjust'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <feature policy='require' name='spec-ctrl'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <feature policy='require' name='stibp'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <feature policy='require' name='arch-capabilities'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <feature policy='require' name='ssbd'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <feature policy='require' name='cmp_legacy'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <feature policy='require' name='overflow-recov'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <feature policy='require' name='succor'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <feature policy='require' name='ibrs'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <feature policy='require' name='amd-ssbd'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <feature policy='require' name='virt-ssbd'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <feature policy='disable' name='lbrv'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <feature policy='disable' name='tsc-scale'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <feature policy='disable' name='vmcb-clean'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <feature policy='disable' name='flushbyasid'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <feature policy='disable' name='pause-filter'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <feature policy='disable' name='pfthreshold'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <feature policy='disable' name='svme-addr-chk'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <feature policy='require' name='lfence-always-serializing'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <feature policy='require' name='rdctl-no'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <feature policy='require' name='mds-no'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <feature policy='require' name='pschange-mc-no'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <feature policy='require' name='gds-no'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <feature policy='require' name='rfds-no'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <feature policy='disable' name='xsaves'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <feature policy='disable' name='svm'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <feature policy='require' name='topoext'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <feature policy='disable' name='npt'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <feature policy='disable' name='nrip-save'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:  </cpu>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:  <clock offset='utc'>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <timer name='pit' tickpolicy='delay'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <timer name='rtc' tickpolicy='catchup'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <timer name='hpet' present='no'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:  </clock>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:  <on_poweroff>destroy</on_poweroff>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:  <on_reboot>restart</on_reboot>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:  <on_crash>destroy</on_crash>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:  <devices>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <disk type='network' device='disk'>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <driver name='qemu' type='raw' cache='none'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <auth username='openstack'>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:        <secret type='ceph' uuid='787292cc-8154-50c4-9e00-e9be3e817149'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      </auth>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <source protocol='rbd' name='vms/ea469a2e-bf09-495c-9b5e-02ad38d32d40_disk' index='2'>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:        <host name='192.168.122.100' port='6789'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:        <host name='192.168.122.102' port='6789'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:        <host name='192.168.122.101' port='6789'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      </source>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <target dev='vda' bus='virtio'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <alias name='virtio-disk0'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    </disk>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <disk type='network' device='cdrom'>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <driver name='qemu' type='raw' cache='none'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <auth username='openstack'>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:        <secret type='ceph' uuid='787292cc-8154-50c4-9e00-e9be3e817149'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      </auth>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <source protocol='rbd' name='vms/ea469a2e-bf09-495c-9b5e-02ad38d32d40_disk.config' index='1'>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:        <host name='192.168.122.100' port='6789'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:        <host name='192.168.122.102' port='6789'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:        <host name='192.168.122.101' port='6789'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      </source>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <target dev='sda' bus='sata'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <readonly/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <alias name='sata0-0-0'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    </disk>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <controller type='pci' index='0' model='pcie-root'>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <alias name='pcie.0'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    </controller>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <controller type='pci' index='1' model='pcie-root-port'>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <model name='pcie-root-port'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <target chassis='1' port='0x10'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <alias name='pci.1'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    </controller>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <controller type='pci' index='2' model='pcie-root-port'>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <model name='pcie-root-port'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <target chassis='2' port='0x11'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <alias name='pci.2'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    </controller>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <controller type='pci' index='3' model='pcie-root-port'>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <model name='pcie-root-port'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <target chassis='3' port='0x12'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <alias name='pci.3'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    </controller>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <controller type='pci' index='4' model='pcie-root-port'>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <model name='pcie-root-port'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <target chassis='4' port='0x13'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <alias name='pci.4'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    </controller>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <controller type='pci' index='5' model='pcie-root-port'>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <model name='pcie-root-port'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <target chassis='5' port='0x14'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <alias name='pci.5'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    </controller>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <controller type='pci' index='6' model='pcie-root-port'>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <model name='pcie-root-port'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <target chassis='6' port='0x15'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <alias name='pci.6'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    </controller>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <controller type='pci' index='7' model='pcie-root-port'>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <model name='pcie-root-port'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <target chassis='7' port='0x16'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <alias name='pci.7'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    </controller>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <controller type='pci' index='8' model='pcie-root-port'>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <model name='pcie-root-port'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <target chassis='8' port='0x17'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <alias name='pci.8'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    </controller>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <controller type='pci' index='9' model='pcie-root-port'>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <model name='pcie-root-port'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <target chassis='9' port='0x18'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <alias name='pci.9'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    </controller>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <controller type='pci' index='10' model='pcie-root-port'>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <model name='pcie-root-port'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <target chassis='10' port='0x19'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <alias name='pci.10'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    </controller>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <controller type='pci' index='11' model='pcie-root-port'>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <model name='pcie-root-port'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <target chassis='11' port='0x1a'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <alias name='pci.11'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    </controller>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <controller type='pci' index='12' model='pcie-root-port'>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <model name='pcie-root-port'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <target chassis='12' port='0x1b'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <alias name='pci.12'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    </controller>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <controller type='pci' index='13' model='pcie-root-port'>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <model name='pcie-root-port'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <target chassis='13' port='0x1c'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <alias name='pci.13'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    </controller>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <controller type='pci' index='14' model='pcie-root-port'>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <model name='pcie-root-port'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <target chassis='14' port='0x1d'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <alias name='pci.14'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    </controller>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <controller type='pci' index='15' model='pcie-root-port'>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <model name='pcie-root-port'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <target chassis='15' port='0x1e'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <alias name='pci.15'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    </controller>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <controller type='pci' index='16' model='pcie-root-port'>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <model name='pcie-root-port'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <target chassis='16' port='0x1f'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <alias name='pci.16'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    </controller>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <controller type='pci' index='17' model='pcie-root-port'>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <model name='pcie-root-port'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <target chassis='17' port='0x20'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <alias name='pci.17'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    </controller>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <controller type='pci' index='18' model='pcie-root-port'>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <model name='pcie-root-port'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <target chassis='18' port='0x21'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <alias name='pci.18'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    </controller>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <controller type='pci' index='19' model='pcie-root-port'>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <model name='pcie-root-port'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <target chassis='19' port='0x22'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <alias name='pci.19'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    </controller>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <controller type='pci' index='20' model='pcie-root-port'>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <model name='pcie-root-port'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <target chassis='20' port='0x23'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <alias name='pci.20'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    </controller>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <controller type='pci' index='21' model='pcie-root-port'>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <model name='pcie-root-port'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <target chassis='21' port='0x24'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <alias name='pci.21'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    </controller>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <controller type='pci' index='22' model='pcie-root-port'>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <model name='pcie-root-port'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <target chassis='22' port='0x25'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <alias name='pci.22'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    </controller>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <controller type='pci' index='23' model='pcie-root-port'>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <model name='pcie-root-port'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <target chassis='23' port='0x26'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <alias name='pci.23'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    </controller>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <controller type='pci' index='24' model='pcie-root-port'>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <model name='pcie-root-port'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <target chassis='24' port='0x27'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <alias name='pci.24'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    </controller>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <controller type='pci' index='25' model='pcie-root-port'>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <model name='pcie-root-port'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <target chassis='25' port='0x28'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <alias name='pci.25'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    </controller>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <model name='pcie-pci-bridge'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <alias name='pci.26'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    </controller>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <controller type='usb' index='0' model='piix3-uhci'>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <alias name='usb'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    </controller>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <controller type='sata' index='0'>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <alias name='ide'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    </controller>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <interface type='ethernet'>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <mac address='fa:16:3e:e6:b0:e0'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <target dev='tapbe4ec274-2a'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <model type='virtio'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <driver name='vhost' rx_queue_size='512'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <mtu size='1442'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <alias name='net0'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    </interface>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <serial type='pty'>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <source path='/dev/pts/0'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <log file='/var/lib/nova/instances/ea469a2e-bf09-495c-9b5e-02ad38d32d40/console.log' append='off'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <target type='isa-serial' port='0'>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:        <model name='isa-serial'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      </target>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <alias name='serial0'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    </serial>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <console type='pty' tty='/dev/pts/0'>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <source path='/dev/pts/0'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <log file='/var/lib/nova/instances/ea469a2e-bf09-495c-9b5e-02ad38d32d40/console.log' append='off'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <target type='serial' port='0'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <alias name='serial0'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    </console>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <input type='tablet' bus='usb'>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <alias name='input0'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <address type='usb' bus='0' port='1'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    </input>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <input type='mouse' bus='ps2'>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <alias name='input1'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    </input>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <input type='keyboard' bus='ps2'>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <alias name='input2'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    </input>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <listen type='address' address='::0'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    </graphics>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <audio id='1' type='none'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <video>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <model type='virtio' heads='1' primary='yes'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <alias name='video0'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    </video>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <watchdog model='itco' action='reset'>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <alias name='watchdog0'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    </watchdog>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <memballoon model='virtio'>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <stats period='10'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <alias name='balloon0'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    </memballoon>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <rng model='virtio'>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <backend model='random'>/dev/urandom</backend>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <alias name='rng0'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    </rng>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:  </devices>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <label>system_u:system_r:svirt_t:s0:c144,c208</label>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c144,c208</imagelabel>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:  </seclabel>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <label>+107:+107</label>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <imagelabel>+107:+107</imagelabel>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:  </seclabel>
Oct  8 06:16:31 np0005475493 nova_compute[262220]: </domain>
Oct  8 06:16:31 np0005475493 nova_compute[262220]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Oct  8 06:16:31 np0005475493 nova_compute[262220]: 2025-10-08 10:16:31.687 2 INFO nova.virt.libvirt.driver [None req-5740ff17-4652-44c3-8c4a-ffb46016cf22 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Successfully detached device tap79d28498-fe from instance ea469a2e-bf09-495c-9b5e-02ad38d32d40 from the live domain config.
Oct  8 06:16:31 np0005475493 nova_compute[262220]: 2025-10-08 10:16:31.688 2 DEBUG nova.virt.libvirt.vif [None req-5740ff17-4652-44c3-8c4a-ffb46016cf22 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-08T10:14:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1473882269',display_name='tempest-TestNetworkBasicOps-server-1473882269',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1473882269',id=6,image_ref='e5994bac-385d-4cfe-962e-386aa0559983',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLR6MyJBOMYp2wYGaL6G2mEbJcrZztqNeLLTecSXpMa+10UdVkrFMZxX+qiOC0ccFZIJleCiHIYc6JXNFg7vRmgJ0JtTU6W+KAYc8u1JxRO51IwGJ30ByO68fx1sTbOqEg==',key_name='tempest-TestNetworkBasicOps-1715641436',keypairs=<?>,launch_index=0,launched_at=2025-10-08T10:14:35Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='0edb2c85b88b4b168a4b2e8c5ed4a05c',ramdisk_id='',reservation_id='r-b2owsx2f',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e5994bac-385d-4cfe-962e-386aa0559983',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-139500885',owner_user_name='tempest-TestNetworkBasicOps-139500885-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-08T10:14:35Z,user_data=None,user_id='d50b19166a7245e390a6e29682191263',uuid=ea469a2e-bf09-495c-9b5e-02ad38d32d40,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "79d28498-fe9d-49dc-ad2c-bde432b239db", "address": "fa:16:3e:40:4d:66", "network": {"id": "0a28a475-c59d-4526-93af-b8af40052e5c", "bridge": "br-int", "label": "tempest-network-smoke--1745814783", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.23", "type": "fixed", "version": 4, 
"meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0edb2c85b88b4b168a4b2e8c5ed4a05c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79d28498-fe", "ovs_interfaceid": "79d28498-fe9d-49dc-ad2c-bde432b239db", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  8 06:16:31 np0005475493 nova_compute[262220]: 2025-10-08 10:16:31.688 2 DEBUG nova.network.os_vif_util [None req-5740ff17-4652-44c3-8c4a-ffb46016cf22 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Converting VIF {"id": "79d28498-fe9d-49dc-ad2c-bde432b239db", "address": "fa:16:3e:40:4d:66", "network": {"id": "0a28a475-c59d-4526-93af-b8af40052e5c", "bridge": "br-int", "label": "tempest-network-smoke--1745814783", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0edb2c85b88b4b168a4b2e8c5ed4a05c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79d28498-fe", "ovs_interfaceid": "79d28498-fe9d-49dc-ad2c-bde432b239db", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  8 06:16:31 np0005475493 nova_compute[262220]: 2025-10-08 10:16:31.689 2 DEBUG nova.network.os_vif_util [None req-5740ff17-4652-44c3-8c4a-ffb46016cf22 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:40:4d:66,bridge_name='br-int',has_traffic_filtering=True,id=79d28498-fe9d-49dc-ad2c-bde432b239db,network=Network(0a28a475-c59d-4526-93af-b8af40052e5c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap79d28498-fe') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  8 06:16:31 np0005475493 nova_compute[262220]: 2025-10-08 10:16:31.689 2 DEBUG os_vif [None req-5740ff17-4652-44c3-8c4a-ffb46016cf22 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:40:4d:66,bridge_name='br-int',has_traffic_filtering=True,id=79d28498-fe9d-49dc-ad2c-bde432b239db,network=Network(0a28a475-c59d-4526-93af-b8af40052e5c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap79d28498-fe') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  8 06:16:31 np0005475493 nova_compute[262220]: 2025-10-08 10:16:31.691 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:16:31 np0005475493 nova_compute[262220]: 2025-10-08 10:16:31.691 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap79d28498-fe, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  8 06:16:31 np0005475493 nova_compute[262220]: 2025-10-08 10:16:31.692 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:16:31 np0005475493 nova_compute[262220]: 2025-10-08 10:16:31.695 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  8 06:16:31 np0005475493 nova_compute[262220]: 2025-10-08 10:16:31.697 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:16:31 np0005475493 nova_compute[262220]: 2025-10-08 10:16:31.699 2 INFO os_vif [None req-5740ff17-4652-44c3-8c4a-ffb46016cf22 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:40:4d:66,bridge_name='br-int',has_traffic_filtering=True,id=79d28498-fe9d-49dc-ad2c-bde432b239db,network=Network(0a28a475-c59d-4526-93af-b8af40052e5c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap79d28498-fe')#033[00m
Oct  8 06:16:31 np0005475493 nova_compute[262220]: 2025-10-08 10:16:31.700 2 DEBUG nova.virt.libvirt.guest [None req-5740ff17-4652-44c3-8c4a-ffb46016cf22 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  8 06:16:31 np0005475493 nova_compute[262220]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:  <nova:name>tempest-TestNetworkBasicOps-server-1473882269</nova:name>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:  <nova:creationTime>2025-10-08 10:16:31</nova:creationTime>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:  <nova:flavor name="m1.nano">
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <nova:memory>128</nova:memory>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <nova:disk>1</nova:disk>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <nova:swap>0</nova:swap>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <nova:ephemeral>0</nova:ephemeral>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <nova:vcpus>1</nova:vcpus>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:  </nova:flavor>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:  <nova:owner>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <nova:user uuid="d50b19166a7245e390a6e29682191263">tempest-TestNetworkBasicOps-139500885-project-member</nova:user>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <nova:project uuid="0edb2c85b88b4b168a4b2e8c5ed4a05c">tempest-TestNetworkBasicOps-139500885</nova:project>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:  </nova:owner>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:  <nova:root type="image" uuid="e5994bac-385d-4cfe-962e-386aa0559983"/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:  <nova:ports>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    <nova:port uuid="be4ec274-2a90-48e8-bd51-fd01f3c659da">
Oct  8 06:16:31 np0005475493 nova_compute[262220]:      <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:    </nova:port>
Oct  8 06:16:31 np0005475493 nova_compute[262220]:  </nova:ports>
Oct  8 06:16:31 np0005475493 nova_compute[262220]: </nova:instance>
Oct  8 06:16:31 np0005475493 nova_compute[262220]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Oct  8 06:16:31 np0005475493 neutron-haproxy-ovnmeta-0a28a475-c59d-4526-93af-b8af40052e5c[274732]: [NOTICE]   (274736) : haproxy version is 2.8.14-c23fe91
Oct  8 06:16:31 np0005475493 neutron-haproxy-ovnmeta-0a28a475-c59d-4526-93af-b8af40052e5c[274732]: [NOTICE]   (274736) : path to executable is /usr/sbin/haproxy
Oct  8 06:16:31 np0005475493 neutron-haproxy-ovnmeta-0a28a475-c59d-4526-93af-b8af40052e5c[274732]: [WARNING]  (274736) : Exiting Master process...
Oct  8 06:16:31 np0005475493 neutron-haproxy-ovnmeta-0a28a475-c59d-4526-93af-b8af40052e5c[274732]: [WARNING]  (274736) : Exiting Master process...
Oct  8 06:16:31 np0005475493 neutron-haproxy-ovnmeta-0a28a475-c59d-4526-93af-b8af40052e5c[274732]: [ALERT]    (274736) : Current worker (274738) exited with code 143 (Terminated)
Oct  8 06:16:31 np0005475493 neutron-haproxy-ovnmeta-0a28a475-c59d-4526-93af-b8af40052e5c[274732]: [WARNING]  (274736) : All workers exited. Exiting... (0)
Oct  8 06:16:31 np0005475493 systemd[1]: libpod-3f635e95f7ebf91f0c654612ee8273fdabac0234b29364d136bfe358d73eeb52.scope: Deactivated successfully.
Oct  8 06:16:31 np0005475493 podman[277520]: 2025-10-08 10:16:31.812334229 +0000 UTC m=+0.044764123 container died 3f635e95f7ebf91f0c654612ee8273fdabac0234b29364d136bfe358d73eeb52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-0a28a475-c59d-4526-93af-b8af40052e5c, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team)
Oct  8 06:16:31 np0005475493 systemd[1]: var-lib-containers-storage-overlay-915dac930a5508f0d71bb51887deafacf6554c7ddc11a4e1d1f27258efcfd64d-merged.mount: Deactivated successfully.
Oct  8 06:16:31 np0005475493 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3f635e95f7ebf91f0c654612ee8273fdabac0234b29364d136bfe358d73eeb52-userdata-shm.mount: Deactivated successfully.
Oct  8 06:16:31 np0005475493 podman[277520]: 2025-10-08 10:16:31.846278833 +0000 UTC m=+0.078708717 container cleanup 3f635e95f7ebf91f0c654612ee8273fdabac0234b29364d136bfe358d73eeb52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-0a28a475-c59d-4526-93af-b8af40052e5c, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct  8 06:16:31 np0005475493 systemd[1]: libpod-conmon-3f635e95f7ebf91f0c654612ee8273fdabac0234b29364d136bfe358d73eeb52.scope: Deactivated successfully.
Oct  8 06:16:31 np0005475493 podman[277551]: 2025-10-08 10:16:31.909401876 +0000 UTC m=+0.038463340 container remove 3f635e95f7ebf91f0c654612ee8273fdabac0234b29364d136bfe358d73eeb52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-0a28a475-c59d-4526-93af-b8af40052e5c, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  8 06:16:31 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:16:31.915 267781 DEBUG oslo.privsep.daemon [-] privsep: reply[caf0d0e0-1ba2-4ffe-adf0-6c6dce7bab52]: (4, ('Wed Oct  8 10:16:31 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-0a28a475-c59d-4526-93af-b8af40052e5c (3f635e95f7ebf91f0c654612ee8273fdabac0234b29364d136bfe358d73eeb52)\n3f635e95f7ebf91f0c654612ee8273fdabac0234b29364d136bfe358d73eeb52\nWed Oct  8 10:16:31 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-0a28a475-c59d-4526-93af-b8af40052e5c (3f635e95f7ebf91f0c654612ee8273fdabac0234b29364d136bfe358d73eeb52)\n3f635e95f7ebf91f0c654612ee8273fdabac0234b29364d136bfe358d73eeb52\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  8 06:16:31 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:16:31.917 267781 DEBUG oslo.privsep.daemon [-] privsep: reply[ddece913-440d-499e-8778-cfab1074f04f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  8 06:16:31 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:16:31.918 163175 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0a28a475-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  8 06:16:31 np0005475493 nova_compute[262220]: 2025-10-08 10:16:31.919 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:16:31 np0005475493 kernel: tap0a28a475-c0: left promiscuous mode
Oct  8 06:16:31 np0005475493 nova_compute[262220]: 2025-10-08 10:16:31.933 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:16:31 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:16:31.936 267781 DEBUG oslo.privsep.daemon [-] privsep: reply[d9c9ede8-8e32-415d-9966-73c0b8d03730]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  8 06:16:31 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:16:31.969 267781 DEBUG oslo.privsep.daemon [-] privsep: reply[cb67bbcf-a900-4035-aab9-80820c7da0b9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  8 06:16:31 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:16:31.970 267781 DEBUG oslo.privsep.daemon [-] privsep: reply[07bb8549-ffd7-4734-bb9f-95351cd8bf23]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  8 06:16:31 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:16:31.985 267781 DEBUG oslo.privsep.daemon [-] privsep: reply[1f8732dd-344d-41f5-8546-4dee305ec19e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 446210, 'reachable_time': 19457, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 277568, 'error': None, 'target': 'ovnmeta-0a28a475-c59d-4526-93af-b8af40052e5c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  8 06:16:31 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:16:31.987 163290 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-0a28a475-c59d-4526-93af-b8af40052e5c deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  8 06:16:31 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:16:31.987 163290 DEBUG oslo.privsep.daemon [-] privsep: reply[57b8dd5e-b1f5-40a0-aa27-129dded65275]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  8 06:16:31 np0005475493 systemd[1]: run-netns-ovnmeta\x2d0a28a475\x2dc59d\x2d4526\x2d93af\x2db8af40052e5c.mount: Deactivated successfully.
Oct  8 06:16:32 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v958: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 14 KiB/s wr, 30 op/s
Oct  8 06:16:32 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:16:32 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_47] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca0009390 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:16:32 np0005475493 nova_compute[262220]: 2025-10-08 10:16:32.617 2 DEBUG nova.compute.manager [req-b1c02e40-bc0b-4a93-a253-c648a6dbd9fd req-61327ef6-759f-41e0-9363-1780ab676776 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Received event network-vif-unplugged-79d28498-fe9d-49dc-ad2c-bde432b239db external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  8 06:16:32 np0005475493 nova_compute[262220]: 2025-10-08 10:16:32.618 2 DEBUG oslo_concurrency.lockutils [req-b1c02e40-bc0b-4a93-a253-c648a6dbd9fd req-61327ef6-759f-41e0-9363-1780ab676776 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Acquiring lock "ea469a2e-bf09-495c-9b5e-02ad38d32d40-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  8 06:16:32 np0005475493 nova_compute[262220]: 2025-10-08 10:16:32.618 2 DEBUG oslo_concurrency.lockutils [req-b1c02e40-bc0b-4a93-a253-c648a6dbd9fd req-61327ef6-759f-41e0-9363-1780ab676776 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Lock "ea469a2e-bf09-495c-9b5e-02ad38d32d40-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  8 06:16:32 np0005475493 nova_compute[262220]: 2025-10-08 10:16:32.618 2 DEBUG oslo_concurrency.lockutils [req-b1c02e40-bc0b-4a93-a253-c648a6dbd9fd req-61327ef6-759f-41e0-9363-1780ab676776 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Lock "ea469a2e-bf09-495c-9b5e-02ad38d32d40-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  8 06:16:32 np0005475493 nova_compute[262220]: 2025-10-08 10:16:32.618 2 DEBUG nova.compute.manager [req-b1c02e40-bc0b-4a93-a253-c648a6dbd9fd req-61327ef6-759f-41e0-9363-1780ab676776 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] No waiting events found dispatching network-vif-unplugged-79d28498-fe9d-49dc-ad2c-bde432b239db pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  8 06:16:32 np0005475493 nova_compute[262220]: 2025-10-08 10:16:32.618 2 WARNING nova.compute.manager [req-b1c02e40-bc0b-4a93-a253-c648a6dbd9fd req-61327ef6-759f-41e0-9363-1780ab676776 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Received unexpected event network-vif-unplugged-79d28498-fe9d-49dc-ad2c-bde432b239db for instance with vm_state active and task_state None.#033[00m
Oct  8 06:16:32 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:16:32 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_48] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac940039d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:16:32 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:16:32 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:16:32 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:16:32.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:16:32 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  8 06:16:32 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  8 06:16:33 np0005475493 nova_compute[262220]: 2025-10-08 10:16:33.046 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:16:33 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct  8 06:16:33 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:16:33 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct  8 06:16:33 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:16:33 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  8 06:16:33 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  8 06:16:33 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct  8 06:16:33 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  8 06:16:33 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct  8 06:16:33 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:16:33 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct  8 06:16:33 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:16:33 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct  8 06:16:33 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  8 06:16:33 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct  8 06:16:33 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  8 06:16:33 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  8 06:16:33 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  8 06:16:33 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:16:33 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_45] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac6c0047e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:16:33 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:16:33 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:16:33 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  8 06:16:33 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:16:33 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:16:33 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  8 06:16:33 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:16:33 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:16:33 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:16:33.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:16:33 np0005475493 podman[277661]: 2025-10-08 10:16:33.705580928 +0000 UTC m=+0.037775518 container create 712f06e51e4b6857932db64cb5dc9bca2763116081b18fb05eb8624fcec0848d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_ritchie, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True)
Oct  8 06:16:33 np0005475493 systemd[1]: Started libpod-conmon-712f06e51e4b6857932db64cb5dc9bca2763116081b18fb05eb8624fcec0848d.scope.
Oct  8 06:16:33 np0005475493 systemd[1]: Started libcrun container.
Oct  8 06:16:33 np0005475493 podman[277661]: 2025-10-08 10:16:33.690499402 +0000 UTC m=+0.022694022 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 06:16:33 np0005475493 podman[277661]: 2025-10-08 10:16:33.799682149 +0000 UTC m=+0.131876789 container init 712f06e51e4b6857932db64cb5dc9bca2763116081b18fb05eb8624fcec0848d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_ritchie, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  8 06:16:33 np0005475493 podman[277661]: 2025-10-08 10:16:33.807116218 +0000 UTC m=+0.139310818 container start 712f06e51e4b6857932db64cb5dc9bca2763116081b18fb05eb8624fcec0848d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_ritchie, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  8 06:16:33 np0005475493 podman[277661]: 2025-10-08 10:16:33.811730528 +0000 UTC m=+0.143925138 container attach 712f06e51e4b6857932db64cb5dc9bca2763116081b18fb05eb8624fcec0848d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_ritchie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Oct  8 06:16:33 np0005475493 systemd[1]: libpod-712f06e51e4b6857932db64cb5dc9bca2763116081b18fb05eb8624fcec0848d.scope: Deactivated successfully.
Oct  8 06:16:33 np0005475493 jolly_ritchie[277678]: 167 167
Oct  8 06:16:33 np0005475493 conmon[277678]: conmon 712f06e51e4b6857932d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-712f06e51e4b6857932db64cb5dc9bca2763116081b18fb05eb8624fcec0848d.scope/container/memory.events
Oct  8 06:16:33 np0005475493 podman[277661]: 2025-10-08 10:16:33.81646867 +0000 UTC m=+0.148663270 container died 712f06e51e4b6857932db64cb5dc9bca2763116081b18fb05eb8624fcec0848d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_ritchie, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325)
Oct  8 06:16:33 np0005475493 systemd[1]: var-lib-containers-storage-overlay-c0f930eedc0f7d6ee79c4a904a12b27cbdab2155508350316bed0e98ba93c1e9-merged.mount: Deactivated successfully.
Oct  8 06:16:33 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:16:33.841 163175 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=26869918-b723-425c-a2e1-0d697f3d0fec, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '10'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  8 06:16:33 np0005475493 podman[277661]: 2025-10-08 10:16:33.856665025 +0000 UTC m=+0.188859625 container remove 712f06e51e4b6857932db64cb5dc9bca2763116081b18fb05eb8624fcec0848d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_ritchie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct  8 06:16:33 np0005475493 systemd[1]: libpod-conmon-712f06e51e4b6857932db64cb5dc9bca2763116081b18fb05eb8624fcec0848d.scope: Deactivated successfully.
Oct  8 06:16:33 np0005475493 systemd[1]: virtsecretd.service: Deactivated successfully.
Oct  8 06:16:34 np0005475493 podman[277704]: 2025-10-08 10:16:34.032235551 +0000 UTC m=+0.045353502 container create b1fa6ca03c6872a01e3c120f930f7bce6438087f0305012074546fdd8e4d821e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_kowalevski, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct  8 06:16:34 np0005475493 podman[277699]: 2025-10-08 10:16:34.04897509 +0000 UTC m=+0.069283333 container health_status 2ac58bf9d2c7421f24478941b0f23af12cc30298ac84467db16bf52ecb1157c3 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=iscsid, org.label-schema.build-date=20251001, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=iscsid, org.label-schema.vendor=CentOS)
Oct  8 06:16:34 np0005475493 systemd[1]: Started libpod-conmon-b1fa6ca03c6872a01e3c120f930f7bce6438087f0305012074546fdd8e4d821e.scope.
Oct  8 06:16:34 np0005475493 systemd[1]: Started libcrun container.
Oct  8 06:16:34 np0005475493 podman[277704]: 2025-10-08 10:16:34.010532742 +0000 UTC m=+0.023650713 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 06:16:34 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90cef5ba6bf13bc347c5130c083faf53540bca497cc827955f4074f835f64584/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  8 06:16:34 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90cef5ba6bf13bc347c5130c083faf53540bca497cc827955f4074f835f64584/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 06:16:34 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90cef5ba6bf13bc347c5130c083faf53540bca497cc827955f4074f835f64584/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 06:16:34 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90cef5ba6bf13bc347c5130c083faf53540bca497cc827955f4074f835f64584/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  8 06:16:34 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90cef5ba6bf13bc347c5130c083faf53540bca497cc827955f4074f835f64584/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  8 06:16:34 np0005475493 podman[277704]: 2025-10-08 10:16:34.121858448 +0000 UTC m=+0.134976409 container init b1fa6ca03c6872a01e3c120f930f7bce6438087f0305012074546fdd8e4d821e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_kowalevski, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  8 06:16:34 np0005475493 podman[277704]: 2025-10-08 10:16:34.134089332 +0000 UTC m=+0.147207283 container start b1fa6ca03c6872a01e3c120f930f7bce6438087f0305012074546fdd8e4d821e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_kowalevski, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Oct  8 06:16:34 np0005475493 podman[277704]: 2025-10-08 10:16:34.137093139 +0000 UTC m=+0.150211100 container attach b1fa6ca03c6872a01e3c120f930f7bce6438087f0305012074546fdd8e4d821e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_kowalevski, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  8 06:16:34 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:16:34 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v959: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 14 KiB/s wr, 30 op/s
Oct  8 06:16:34 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:16:34 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac780031c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:16:34 np0005475493 nova_compute[262220]: 2025-10-08 10:16:34.434 2 DEBUG oslo_concurrency.lockutils [None req-5740ff17-4652-44c3-8c4a-ffb46016cf22 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Acquiring lock "refresh_cache-ea469a2e-bf09-495c-9b5e-02ad38d32d40" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  8 06:16:34 np0005475493 nova_compute[262220]: 2025-10-08 10:16:34.434 2 DEBUG oslo_concurrency.lockutils [None req-5740ff17-4652-44c3-8c4a-ffb46016cf22 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Acquired lock "refresh_cache-ea469a2e-bf09-495c-9b5e-02ad38d32d40" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  8 06:16:34 np0005475493 nova_compute[262220]: 2025-10-08 10:16:34.435 2 DEBUG nova.network.neutron [None req-5740ff17-4652-44c3-8c4a-ffb46016cf22 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  8 06:16:34 np0005475493 goofy_kowalevski[277740]: --> passed data devices: 0 physical, 1 LVM
Oct  8 06:16:34 np0005475493 goofy_kowalevski[277740]: --> All data devices are unavailable
Oct  8 06:16:34 np0005475493 systemd[1]: libpod-b1fa6ca03c6872a01e3c120f930f7bce6438087f0305012074546fdd8e4d821e.scope: Deactivated successfully.
Oct  8 06:16:34 np0005475493 conmon[277740]: conmon b1fa6ca03c6872a01e3c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b1fa6ca03c6872a01e3c120f930f7bce6438087f0305012074546fdd8e4d821e.scope/container/memory.events
Oct  8 06:16:34 np0005475493 podman[277704]: 2025-10-08 10:16:34.484881832 +0000 UTC m=+0.497999783 container died b1fa6ca03c6872a01e3c120f930f7bce6438087f0305012074546fdd8e4d821e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_kowalevski, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Oct  8 06:16:34 np0005475493 systemd[1]: var-lib-containers-storage-overlay-90cef5ba6bf13bc347c5130c083faf53540bca497cc827955f4074f835f64584-merged.mount: Deactivated successfully.
Oct  8 06:16:34 np0005475493 podman[277704]: 2025-10-08 10:16:34.527071631 +0000 UTC m=+0.540189582 container remove b1fa6ca03c6872a01e3c120f930f7bce6438087f0305012074546fdd8e4d821e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_kowalevski, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct  8 06:16:34 np0005475493 systemd[1]: libpod-conmon-b1fa6ca03c6872a01e3c120f930f7bce6438087f0305012074546fdd8e4d821e.scope: Deactivated successfully.
Oct  8 06:16:34 np0005475493 nova_compute[262220]: 2025-10-08 10:16:34.705 2 DEBUG nova.compute.manager [req-a1aadc2d-190e-4eb8-8971-e56567d153df req-4eb46238-2d8e-488b-aaef-6342045264a0 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Received event network-vif-plugged-79d28498-fe9d-49dc-ad2c-bde432b239db external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  8 06:16:34 np0005475493 nova_compute[262220]: 2025-10-08 10:16:34.706 2 DEBUG oslo_concurrency.lockutils [req-a1aadc2d-190e-4eb8-8971-e56567d153df req-4eb46238-2d8e-488b-aaef-6342045264a0 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Acquiring lock "ea469a2e-bf09-495c-9b5e-02ad38d32d40-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  8 06:16:34 np0005475493 nova_compute[262220]: 2025-10-08 10:16:34.706 2 DEBUG oslo_concurrency.lockutils [req-a1aadc2d-190e-4eb8-8971-e56567d153df req-4eb46238-2d8e-488b-aaef-6342045264a0 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Lock "ea469a2e-bf09-495c-9b5e-02ad38d32d40-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  8 06:16:34 np0005475493 nova_compute[262220]: 2025-10-08 10:16:34.706 2 DEBUG oslo_concurrency.lockutils [req-a1aadc2d-190e-4eb8-8971-e56567d153df req-4eb46238-2d8e-488b-aaef-6342045264a0 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Lock "ea469a2e-bf09-495c-9b5e-02ad38d32d40-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  8 06:16:34 np0005475493 nova_compute[262220]: 2025-10-08 10:16:34.707 2 DEBUG nova.compute.manager [req-a1aadc2d-190e-4eb8-8971-e56567d153df req-4eb46238-2d8e-488b-aaef-6342045264a0 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] No waiting events found dispatching network-vif-plugged-79d28498-fe9d-49dc-ad2c-bde432b239db pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  8 06:16:34 np0005475493 nova_compute[262220]: 2025-10-08 10:16:34.707 2 WARNING nova.compute.manager [req-a1aadc2d-190e-4eb8-8971-e56567d153df req-4eb46238-2d8e-488b-aaef-6342045264a0 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Received unexpected event network-vif-plugged-79d28498-fe9d-49dc-ad2c-bde432b239db for instance with vm_state active and task_state None.#033[00m
Oct  8 06:16:34 np0005475493 nova_compute[262220]: 2025-10-08 10:16:34.707 2 DEBUG nova.compute.manager [req-a1aadc2d-190e-4eb8-8971-e56567d153df req-4eb46238-2d8e-488b-aaef-6342045264a0 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Received event network-vif-deleted-79d28498-fe9d-49dc-ad2c-bde432b239db external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  8 06:16:34 np0005475493 nova_compute[262220]: 2025-10-08 10:16:34.707 2 INFO nova.compute.manager [req-a1aadc2d-190e-4eb8-8971-e56567d153df req-4eb46238-2d8e-488b-aaef-6342045264a0 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Neutron deleted interface 79d28498-fe9d-49dc-ad2c-bde432b239db; detaching it from the instance and deleting it from the info cache#033[00m
Oct  8 06:16:34 np0005475493 nova_compute[262220]: 2025-10-08 10:16:34.707 2 DEBUG nova.network.neutron [req-a1aadc2d-190e-4eb8-8971-e56567d153df req-4eb46238-2d8e-488b-aaef-6342045264a0 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Updating instance_info_cache with network_info: [{"id": "be4ec274-2a90-48e8-bd51-fd01f3c659da", "address": "fa:16:3e:e6:b0:e0", "network": {"id": "834a886f-bb33-49ed-b47e-ef0308a38e89", "bridge": "br-int", "label": "tempest-network-smoke--1879322246", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.203", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0edb2c85b88b4b168a4b2e8c5ed4a05c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe4ec274-2a", "ovs_interfaceid": "be4ec274-2a90-48e8-bd51-fd01f3c659da", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  8 06:16:34 np0005475493 nova_compute[262220]: 2025-10-08 10:16:34.732 2 DEBUG nova.objects.instance [req-a1aadc2d-190e-4eb8-8971-e56567d153df req-4eb46238-2d8e-488b-aaef-6342045264a0 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Lazy-loading 'system_metadata' on Instance uuid ea469a2e-bf09-495c-9b5e-02ad38d32d40 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  8 06:16:34 np0005475493 nova_compute[262220]: 2025-10-08 10:16:34.760 2 DEBUG nova.objects.instance [req-a1aadc2d-190e-4eb8-8971-e56567d153df req-4eb46238-2d8e-488b-aaef-6342045264a0 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Lazy-loading 'flavor' on Instance uuid ea469a2e-bf09-495c-9b5e-02ad38d32d40 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  8 06:16:34 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:16:34 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_47] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca0009390 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:16:34 np0005475493 nova_compute[262220]: 2025-10-08 10:16:34.789 2 DEBUG nova.virt.libvirt.vif [req-a1aadc2d-190e-4eb8-8971-e56567d153df req-4eb46238-2d8e-488b-aaef-6342045264a0 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-08T10:14:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1473882269',display_name='tempest-TestNetworkBasicOps-server-1473882269',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1473882269',id=6,image_ref='e5994bac-385d-4cfe-962e-386aa0559983',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLR6MyJBOMYp2wYGaL6G2mEbJcrZztqNeLLTecSXpMa+10UdVkrFMZxX+qiOC0ccFZIJleCiHIYc6JXNFg7vRmgJ0JtTU6W+KAYc8u1JxRO51IwGJ30ByO68fx1sTbOqEg==',key_name='tempest-TestNetworkBasicOps-1715641436',keypairs=<?>,launch_index=0,launched_at=2025-10-08T10:14:35Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='0edb2c85b88b4b168a4b2e8c5ed4a05c',ramdisk_id='',reservation_id='r-b2owsx2f',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e5994bac-385d-4cfe-962e-386aa0559983',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-139500885',owner_user_name='tempest-TestNetworkBasicOps-139500885-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-08T10:14:35Z,user_data=None,user_id='d50b19166a7245e390a6e29682191263',uuid=ea469a2e-bf09-495c-9b5e-02ad38d32d40,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "79d28498-fe9d-49dc-ad2c-bde432b239db", "address": "fa:16:3e:40:4d:66", "network": {"id": "0a28a475-c59d-4526-93af-b8af40052e5c", "bridge": "br-int", "label": "tempest-network-smoke--1745814783", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.23", "type": "fixed", "version": 4, "meta": {}, 
"floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0edb2c85b88b4b168a4b2e8c5ed4a05c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79d28498-fe", "ovs_interfaceid": "79d28498-fe9d-49dc-ad2c-bde432b239db", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  8 06:16:34 np0005475493 nova_compute[262220]: 2025-10-08 10:16:34.789 2 DEBUG nova.network.os_vif_util [req-a1aadc2d-190e-4eb8-8971-e56567d153df req-4eb46238-2d8e-488b-aaef-6342045264a0 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Converting VIF {"id": "79d28498-fe9d-49dc-ad2c-bde432b239db", "address": "fa:16:3e:40:4d:66", "network": {"id": "0a28a475-c59d-4526-93af-b8af40052e5c", "bridge": "br-int", "label": "tempest-network-smoke--1745814783", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0edb2c85b88b4b168a4b2e8c5ed4a05c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79d28498-fe", "ovs_interfaceid": "79d28498-fe9d-49dc-ad2c-bde432b239db", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  8 06:16:34 np0005475493 nova_compute[262220]: 2025-10-08 10:16:34.790 2 DEBUG nova.network.os_vif_util [req-a1aadc2d-190e-4eb8-8971-e56567d153df req-4eb46238-2d8e-488b-aaef-6342045264a0 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:40:4d:66,bridge_name='br-int',has_traffic_filtering=True,id=79d28498-fe9d-49dc-ad2c-bde432b239db,network=Network(0a28a475-c59d-4526-93af-b8af40052e5c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap79d28498-fe') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  8 06:16:34 np0005475493 nova_compute[262220]: 2025-10-08 10:16:34.793 2 DEBUG nova.virt.libvirt.guest [req-a1aadc2d-190e-4eb8-8971-e56567d153df req-4eb46238-2d8e-488b-aaef-6342045264a0 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:40:4d:66"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap79d28498-fe"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Oct  8 06:16:34 np0005475493 nova_compute[262220]: 2025-10-08 10:16:34.796 2 DEBUG nova.virt.libvirt.guest [req-a1aadc2d-190e-4eb8-8971-e56567d153df req-4eb46238-2d8e-488b-aaef-6342045264a0 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:40:4d:66"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap79d28498-fe"/></interface>not found in domain: <domain type='kvm' id='2'>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:  <name>instance-00000006</name>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:  <uuid>ea469a2e-bf09-495c-9b5e-02ad38d32d40</uuid>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:  <metadata>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  8 06:16:34 np0005475493 nova_compute[262220]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:  <nova:name>tempest-TestNetworkBasicOps-server-1473882269</nova:name>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:  <nova:creationTime>2025-10-08 10:16:31</nova:creationTime>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:  <nova:flavor name="m1.nano">
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <nova:memory>128</nova:memory>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <nova:disk>1</nova:disk>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <nova:swap>0</nova:swap>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <nova:ephemeral>0</nova:ephemeral>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <nova:vcpus>1</nova:vcpus>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:  </nova:flavor>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:  <nova:owner>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <nova:user uuid="d50b19166a7245e390a6e29682191263">tempest-TestNetworkBasicOps-139500885-project-member</nova:user>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <nova:project uuid="0edb2c85b88b4b168a4b2e8c5ed4a05c">tempest-TestNetworkBasicOps-139500885</nova:project>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:  </nova:owner>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:  <nova:root type="image" uuid="e5994bac-385d-4cfe-962e-386aa0559983"/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:  <nova:ports>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <nova:port uuid="be4ec274-2a90-48e8-bd51-fd01f3c659da">
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    </nova:port>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:  </nova:ports>
Oct  8 06:16:34 np0005475493 nova_compute[262220]: </nova:instance>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:  </metadata>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:  <memory unit='KiB'>131072</memory>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:  <currentMemory unit='KiB'>131072</currentMemory>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:  <vcpu placement='static'>1</vcpu>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:  <resource>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <partition>/machine</partition>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:  </resource>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:  <sysinfo type='smbios'>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <system>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <entry name='manufacturer'>RDO</entry>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <entry name='product'>OpenStack Compute</entry>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <entry name='serial'>ea469a2e-bf09-495c-9b5e-02ad38d32d40</entry>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <entry name='uuid'>ea469a2e-bf09-495c-9b5e-02ad38d32d40</entry>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <entry name='family'>Virtual Machine</entry>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    </system>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:  </sysinfo>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:  <os>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <boot dev='hd'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <smbios mode='sysinfo'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:  </os>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:  <features>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <acpi/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <apic/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <vmcoreinfo state='on'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:  </features>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:  <cpu mode='custom' match='exact' check='full'>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <model fallback='forbid'>EPYC-Rome</model>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <vendor>AMD</vendor>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <feature policy='require' name='x2apic'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <feature policy='require' name='tsc-deadline'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <feature policy='require' name='hypervisor'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <feature policy='require' name='tsc_adjust'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <feature policy='require' name='spec-ctrl'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <feature policy='require' name='stibp'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <feature policy='require' name='arch-capabilities'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <feature policy='require' name='ssbd'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <feature policy='require' name='cmp_legacy'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <feature policy='require' name='overflow-recov'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <feature policy='require' name='succor'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <feature policy='require' name='ibrs'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <feature policy='require' name='amd-ssbd'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <feature policy='require' name='virt-ssbd'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <feature policy='disable' name='lbrv'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <feature policy='disable' name='tsc-scale'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <feature policy='disable' name='vmcb-clean'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <feature policy='disable' name='flushbyasid'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <feature policy='disable' name='pause-filter'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <feature policy='disable' name='pfthreshold'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <feature policy='disable' name='svme-addr-chk'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <feature policy='require' name='lfence-always-serializing'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <feature policy='require' name='rdctl-no'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <feature policy='require' name='mds-no'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <feature policy='require' name='pschange-mc-no'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <feature policy='require' name='gds-no'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <feature policy='require' name='rfds-no'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <feature policy='disable' name='xsaves'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <feature policy='disable' name='svm'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <feature policy='require' name='topoext'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <feature policy='disable' name='npt'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <feature policy='disable' name='nrip-save'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:  </cpu>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:  <clock offset='utc'>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <timer name='pit' tickpolicy='delay'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <timer name='rtc' tickpolicy='catchup'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <timer name='hpet' present='no'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:  </clock>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:  <on_poweroff>destroy</on_poweroff>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:  <on_reboot>restart</on_reboot>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:  <on_crash>destroy</on_crash>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:  <devices>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <disk type='network' device='disk'>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <driver name='qemu' type='raw' cache='none'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <auth username='openstack'>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:        <secret type='ceph' uuid='787292cc-8154-50c4-9e00-e9be3e817149'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      </auth>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <source protocol='rbd' name='vms/ea469a2e-bf09-495c-9b5e-02ad38d32d40_disk' index='2'>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:        <host name='192.168.122.100' port='6789'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:        <host name='192.168.122.102' port='6789'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:        <host name='192.168.122.101' port='6789'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      </source>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <target dev='vda' bus='virtio'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <alias name='virtio-disk0'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    </disk>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <disk type='network' device='cdrom'>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <driver name='qemu' type='raw' cache='none'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <auth username='openstack'>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:        <secret type='ceph' uuid='787292cc-8154-50c4-9e00-e9be3e817149'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      </auth>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <source protocol='rbd' name='vms/ea469a2e-bf09-495c-9b5e-02ad38d32d40_disk.config' index='1'>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:        <host name='192.168.122.100' port='6789'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:        <host name='192.168.122.102' port='6789'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:        <host name='192.168.122.101' port='6789'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      </source>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <target dev='sda' bus='sata'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <readonly/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <alias name='sata0-0-0'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    </disk>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <controller type='pci' index='0' model='pcie-root'>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <alias name='pcie.0'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    </controller>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <controller type='pci' index='1' model='pcie-root-port'>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <model name='pcie-root-port'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <target chassis='1' port='0x10'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <alias name='pci.1'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    </controller>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <controller type='pci' index='2' model='pcie-root-port'>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <model name='pcie-root-port'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <target chassis='2' port='0x11'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <alias name='pci.2'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    </controller>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <controller type='pci' index='3' model='pcie-root-port'>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <model name='pcie-root-port'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <target chassis='3' port='0x12'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <alias name='pci.3'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    </controller>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <controller type='pci' index='4' model='pcie-root-port'>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <model name='pcie-root-port'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <target chassis='4' port='0x13'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <alias name='pci.4'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    </controller>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <controller type='pci' index='5' model='pcie-root-port'>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <model name='pcie-root-port'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <target chassis='5' port='0x14'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <alias name='pci.5'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    </controller>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <controller type='pci' index='6' model='pcie-root-port'>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <model name='pcie-root-port'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <target chassis='6' port='0x15'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <alias name='pci.6'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    </controller>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <controller type='pci' index='7' model='pcie-root-port'>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <model name='pcie-root-port'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <target chassis='7' port='0x16'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <alias name='pci.7'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    </controller>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <controller type='pci' index='8' model='pcie-root-port'>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <model name='pcie-root-port'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <target chassis='8' port='0x17'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <alias name='pci.8'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    </controller>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <controller type='pci' index='9' model='pcie-root-port'>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <model name='pcie-root-port'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <target chassis='9' port='0x18'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <alias name='pci.9'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    </controller>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <controller type='pci' index='10' model='pcie-root-port'>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <model name='pcie-root-port'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <target chassis='10' port='0x19'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <alias name='pci.10'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    </controller>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <controller type='pci' index='11' model='pcie-root-port'>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <model name='pcie-root-port'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <target chassis='11' port='0x1a'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <alias name='pci.11'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    </controller>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <controller type='pci' index='12' model='pcie-root-port'>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <model name='pcie-root-port'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <target chassis='12' port='0x1b'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <alias name='pci.12'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    </controller>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <controller type='pci' index='13' model='pcie-root-port'>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <model name='pcie-root-port'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <target chassis='13' port='0x1c'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <alias name='pci.13'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    </controller>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <controller type='pci' index='14' model='pcie-root-port'>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <model name='pcie-root-port'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <target chassis='14' port='0x1d'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <alias name='pci.14'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    </controller>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <controller type='pci' index='15' model='pcie-root-port'>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <model name='pcie-root-port'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <target chassis='15' port='0x1e'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <alias name='pci.15'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    </controller>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <controller type='pci' index='16' model='pcie-root-port'>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <model name='pcie-root-port'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <target chassis='16' port='0x1f'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <alias name='pci.16'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    </controller>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <controller type='pci' index='17' model='pcie-root-port'>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <model name='pcie-root-port'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <target chassis='17' port='0x20'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <alias name='pci.17'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    </controller>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <controller type='pci' index='18' model='pcie-root-port'>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <model name='pcie-root-port'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <target chassis='18' port='0x21'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <alias name='pci.18'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    </controller>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <controller type='pci' index='19' model='pcie-root-port'>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <model name='pcie-root-port'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <target chassis='19' port='0x22'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <alias name='pci.19'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    </controller>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <controller type='pci' index='20' model='pcie-root-port'>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <model name='pcie-root-port'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <target chassis='20' port='0x23'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <alias name='pci.20'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    </controller>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <controller type='pci' index='21' model='pcie-root-port'>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <model name='pcie-root-port'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <target chassis='21' port='0x24'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <alias name='pci.21'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    </controller>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <controller type='pci' index='22' model='pcie-root-port'>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <model name='pcie-root-port'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <target chassis='22' port='0x25'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <alias name='pci.22'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    </controller>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <controller type='pci' index='23' model='pcie-root-port'>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <model name='pcie-root-port'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <target chassis='23' port='0x26'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <alias name='pci.23'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    </controller>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <controller type='pci' index='24' model='pcie-root-port'>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <model name='pcie-root-port'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <target chassis='24' port='0x27'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <alias name='pci.24'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    </controller>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <controller type='pci' index='25' model='pcie-root-port'>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <model name='pcie-root-port'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <target chassis='25' port='0x28'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <alias name='pci.25'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    </controller>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <model name='pcie-pci-bridge'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <alias name='pci.26'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    </controller>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <controller type='usb' index='0' model='piix3-uhci'>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <alias name='usb'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    </controller>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <controller type='sata' index='0'>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <alias name='ide'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    </controller>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <interface type='ethernet'>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <mac address='fa:16:3e:e6:b0:e0'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <target dev='tapbe4ec274-2a'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <model type='virtio'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <driver name='vhost' rx_queue_size='512'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <mtu size='1442'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <alias name='net0'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    </interface>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <serial type='pty'>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <source path='/dev/pts/0'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <log file='/var/lib/nova/instances/ea469a2e-bf09-495c-9b5e-02ad38d32d40/console.log' append='off'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <target type='isa-serial' port='0'>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:        <model name='isa-serial'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      </target>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <alias name='serial0'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    </serial>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <console type='pty' tty='/dev/pts/0'>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <source path='/dev/pts/0'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <log file='/var/lib/nova/instances/ea469a2e-bf09-495c-9b5e-02ad38d32d40/console.log' append='off'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <target type='serial' port='0'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <alias name='serial0'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    </console>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <input type='tablet' bus='usb'>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <alias name='input0'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <address type='usb' bus='0' port='1'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    </input>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <input type='mouse' bus='ps2'>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <alias name='input1'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    </input>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <input type='keyboard' bus='ps2'>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <alias name='input2'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    </input>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <listen type='address' address='::0'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    </graphics>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <audio id='1' type='none'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <video>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <model type='virtio' heads='1' primary='yes'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <alias name='video0'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    </video>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <watchdog model='itco' action='reset'>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <alias name='watchdog0'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    </watchdog>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <memballoon model='virtio'>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <stats period='10'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <alias name='balloon0'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    </memballoon>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <rng model='virtio'>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <backend model='random'>/dev/urandom</backend>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <alias name='rng0'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    </rng>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:  </devices>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <label>system_u:system_r:svirt_t:s0:c144,c208</label>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c144,c208</imagelabel>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:  </seclabel>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <label>+107:+107</label>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <imagelabel>+107:+107</imagelabel>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:  </seclabel>
Oct  8 06:16:34 np0005475493 nova_compute[262220]: </domain>
Oct  8 06:16:34 np0005475493 nova_compute[262220]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Oct  8 06:16:34 np0005475493 nova_compute[262220]: 2025-10-08 10:16:34.798 2 DEBUG nova.virt.libvirt.guest [req-a1aadc2d-190e-4eb8-8971-e56567d153df req-4eb46238-2d8e-488b-aaef-6342045264a0 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:40:4d:66"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap79d28498-fe"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct  8 06:16:34 np0005475493 nova_compute[262220]: 2025-10-08 10:16:34.801 2 DEBUG nova.virt.libvirt.guest [req-a1aadc2d-190e-4eb8-8971-e56567d153df req-4eb46238-2d8e-488b-aaef-6342045264a0 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:40:4d:66"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap79d28498-fe"/></interface>not found in domain: <domain type='kvm' id='2'>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:  <name>instance-00000006</name>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:  <uuid>ea469a2e-bf09-495c-9b5e-02ad38d32d40</uuid>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:  <metadata>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  8 06:16:34 np0005475493 nova_compute[262220]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:  <nova:name>tempest-TestNetworkBasicOps-server-1473882269</nova:name>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:  <nova:creationTime>2025-10-08 10:16:31</nova:creationTime>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:  <nova:flavor name="m1.nano">
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <nova:memory>128</nova:memory>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <nova:disk>1</nova:disk>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <nova:swap>0</nova:swap>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <nova:ephemeral>0</nova:ephemeral>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <nova:vcpus>1</nova:vcpus>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:  </nova:flavor>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:  <nova:owner>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <nova:user uuid="d50b19166a7245e390a6e29682191263">tempest-TestNetworkBasicOps-139500885-project-member</nova:user>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <nova:project uuid="0edb2c85b88b4b168a4b2e8c5ed4a05c">tempest-TestNetworkBasicOps-139500885</nova:project>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:  </nova:owner>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:  <nova:root type="image" uuid="e5994bac-385d-4cfe-962e-386aa0559983"/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:  <nova:ports>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <nova:port uuid="be4ec274-2a90-48e8-bd51-fd01f3c659da">
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    </nova:port>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:  </nova:ports>
Oct  8 06:16:34 np0005475493 nova_compute[262220]: </nova:instance>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:  </metadata>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:  <memory unit='KiB'>131072</memory>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:  <currentMemory unit='KiB'>131072</currentMemory>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:  <vcpu placement='static'>1</vcpu>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:  <resource>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <partition>/machine</partition>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:  </resource>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:  <sysinfo type='smbios'>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <system>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <entry name='manufacturer'>RDO</entry>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <entry name='product'>OpenStack Compute</entry>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <entry name='serial'>ea469a2e-bf09-495c-9b5e-02ad38d32d40</entry>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <entry name='uuid'>ea469a2e-bf09-495c-9b5e-02ad38d32d40</entry>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <entry name='family'>Virtual Machine</entry>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    </system>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:  </sysinfo>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:  <os>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <boot dev='hd'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <smbios mode='sysinfo'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:  </os>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:  <features>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <acpi/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <apic/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <vmcoreinfo state='on'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:  </features>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:  <cpu mode='custom' match='exact' check='full'>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <model fallback='forbid'>EPYC-Rome</model>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <vendor>AMD</vendor>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <feature policy='require' name='x2apic'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <feature policy='require' name='tsc-deadline'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <feature policy='require' name='hypervisor'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <feature policy='require' name='tsc_adjust'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <feature policy='require' name='spec-ctrl'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <feature policy='require' name='stibp'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <feature policy='require' name='arch-capabilities'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <feature policy='require' name='ssbd'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <feature policy='require' name='cmp_legacy'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <feature policy='require' name='overflow-recov'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <feature policy='require' name='succor'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <feature policy='require' name='ibrs'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <feature policy='require' name='amd-ssbd'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <feature policy='require' name='virt-ssbd'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <feature policy='disable' name='lbrv'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <feature policy='disable' name='tsc-scale'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <feature policy='disable' name='vmcb-clean'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <feature policy='disable' name='flushbyasid'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <feature policy='disable' name='pause-filter'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <feature policy='disable' name='pfthreshold'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <feature policy='disable' name='svme-addr-chk'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <feature policy='require' name='lfence-always-serializing'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <feature policy='require' name='rdctl-no'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <feature policy='require' name='mds-no'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <feature policy='require' name='pschange-mc-no'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <feature policy='require' name='gds-no'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <feature policy='require' name='rfds-no'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <feature policy='disable' name='xsaves'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <feature policy='disable' name='svm'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <feature policy='require' name='topoext'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <feature policy='disable' name='npt'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <feature policy='disable' name='nrip-save'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:  </cpu>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:  <clock offset='utc'>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <timer name='pit' tickpolicy='delay'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <timer name='rtc' tickpolicy='catchup'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <timer name='hpet' present='no'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:  </clock>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:  <on_poweroff>destroy</on_poweroff>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:  <on_reboot>restart</on_reboot>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:  <on_crash>destroy</on_crash>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:  <devices>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <disk type='network' device='disk'>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <driver name='qemu' type='raw' cache='none'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <auth username='openstack'>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:        <secret type='ceph' uuid='787292cc-8154-50c4-9e00-e9be3e817149'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      </auth>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <source protocol='rbd' name='vms/ea469a2e-bf09-495c-9b5e-02ad38d32d40_disk' index='2'>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:        <host name='192.168.122.100' port='6789'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:        <host name='192.168.122.102' port='6789'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:        <host name='192.168.122.101' port='6789'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      </source>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <target dev='vda' bus='virtio'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <alias name='virtio-disk0'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    </disk>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <disk type='network' device='cdrom'>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <driver name='qemu' type='raw' cache='none'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <auth username='openstack'>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:        <secret type='ceph' uuid='787292cc-8154-50c4-9e00-e9be3e817149'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      </auth>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <source protocol='rbd' name='vms/ea469a2e-bf09-495c-9b5e-02ad38d32d40_disk.config' index='1'>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:        <host name='192.168.122.100' port='6789'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:        <host name='192.168.122.102' port='6789'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:        <host name='192.168.122.101' port='6789'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      </source>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <target dev='sda' bus='sata'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <readonly/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <alias name='sata0-0-0'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    </disk>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <controller type='pci' index='0' model='pcie-root'>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <alias name='pcie.0'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    </controller>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <controller type='pci' index='1' model='pcie-root-port'>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <model name='pcie-root-port'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <target chassis='1' port='0x10'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <alias name='pci.1'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    </controller>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <controller type='pci' index='2' model='pcie-root-port'>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <model name='pcie-root-port'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <target chassis='2' port='0x11'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <alias name='pci.2'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    </controller>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <controller type='pci' index='3' model='pcie-root-port'>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <model name='pcie-root-port'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <target chassis='3' port='0x12'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <alias name='pci.3'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    </controller>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <controller type='pci' index='4' model='pcie-root-port'>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <model name='pcie-root-port'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <target chassis='4' port='0x13'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <alias name='pci.4'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    </controller>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <controller type='pci' index='5' model='pcie-root-port'>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <model name='pcie-root-port'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <target chassis='5' port='0x14'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <alias name='pci.5'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    </controller>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <controller type='pci' index='6' model='pcie-root-port'>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <model name='pcie-root-port'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <target chassis='6' port='0x15'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <alias name='pci.6'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    </controller>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <controller type='pci' index='7' model='pcie-root-port'>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <model name='pcie-root-port'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <target chassis='7' port='0x16'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <alias name='pci.7'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    </controller>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <controller type='pci' index='8' model='pcie-root-port'>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <model name='pcie-root-port'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <target chassis='8' port='0x17'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <alias name='pci.8'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    </controller>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <controller type='pci' index='9' model='pcie-root-port'>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <model name='pcie-root-port'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <target chassis='9' port='0x18'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <alias name='pci.9'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    </controller>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <controller type='pci' index='10' model='pcie-root-port'>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <model name='pcie-root-port'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <target chassis='10' port='0x19'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <alias name='pci.10'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    </controller>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <controller type='pci' index='11' model='pcie-root-port'>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <model name='pcie-root-port'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <target chassis='11' port='0x1a'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <alias name='pci.11'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    </controller>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <controller type='pci' index='12' model='pcie-root-port'>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <model name='pcie-root-port'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <target chassis='12' port='0x1b'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <alias name='pci.12'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    </controller>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <controller type='pci' index='13' model='pcie-root-port'>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <model name='pcie-root-port'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <target chassis='13' port='0x1c'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <alias name='pci.13'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    </controller>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <controller type='pci' index='14' model='pcie-root-port'>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <model name='pcie-root-port'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <target chassis='14' port='0x1d'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <alias name='pci.14'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    </controller>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <controller type='pci' index='15' model='pcie-root-port'>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <model name='pcie-root-port'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <target chassis='15' port='0x1e'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <alias name='pci.15'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    </controller>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <controller type='pci' index='16' model='pcie-root-port'>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <model name='pcie-root-port'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <target chassis='16' port='0x1f'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <alias name='pci.16'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    </controller>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <controller type='pci' index='17' model='pcie-root-port'>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <model name='pcie-root-port'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <target chassis='17' port='0x20'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <alias name='pci.17'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    </controller>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <controller type='pci' index='18' model='pcie-root-port'>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <model name='pcie-root-port'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <target chassis='18' port='0x21'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <alias name='pci.18'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    </controller>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <controller type='pci' index='19' model='pcie-root-port'>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <model name='pcie-root-port'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <target chassis='19' port='0x22'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <alias name='pci.19'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    </controller>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <controller type='pci' index='20' model='pcie-root-port'>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <model name='pcie-root-port'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <target chassis='20' port='0x23'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <alias name='pci.20'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    </controller>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <controller type='pci' index='21' model='pcie-root-port'>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <model name='pcie-root-port'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <target chassis='21' port='0x24'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <alias name='pci.21'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    </controller>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <controller type='pci' index='22' model='pcie-root-port'>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <model name='pcie-root-port'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <target chassis='22' port='0x25'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <alias name='pci.22'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    </controller>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <controller type='pci' index='23' model='pcie-root-port'>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <model name='pcie-root-port'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <target chassis='23' port='0x26'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <alias name='pci.23'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    </controller>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <controller type='pci' index='24' model='pcie-root-port'>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <model name='pcie-root-port'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <target chassis='24' port='0x27'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <alias name='pci.24'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    </controller>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <controller type='pci' index='25' model='pcie-root-port'>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <model name='pcie-root-port'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <target chassis='25' port='0x28'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <alias name='pci.25'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    </controller>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <model name='pcie-pci-bridge'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <alias name='pci.26'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    </controller>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <controller type='usb' index='0' model='piix3-uhci'>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <alias name='usb'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    </controller>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <controller type='sata' index='0'>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <alias name='ide'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    </controller>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <interface type='ethernet'>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <mac address='fa:16:3e:e6:b0:e0'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <target dev='tapbe4ec274-2a'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <model type='virtio'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <driver name='vhost' rx_queue_size='512'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <mtu size='1442'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <alias name='net0'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    </interface>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <serial type='pty'>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <source path='/dev/pts/0'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <log file='/var/lib/nova/instances/ea469a2e-bf09-495c-9b5e-02ad38d32d40/console.log' append='off'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <target type='isa-serial' port='0'>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:        <model name='isa-serial'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      </target>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <alias name='serial0'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    </serial>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <console type='pty' tty='/dev/pts/0'>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <source path='/dev/pts/0'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <log file='/var/lib/nova/instances/ea469a2e-bf09-495c-9b5e-02ad38d32d40/console.log' append='off'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <target type='serial' port='0'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <alias name='serial0'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    </console>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <input type='tablet' bus='usb'>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <alias name='input0'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <address type='usb' bus='0' port='1'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    </input>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <input type='mouse' bus='ps2'>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <alias name='input1'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    </input>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <input type='keyboard' bus='ps2'>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <alias name='input2'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    </input>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <listen type='address' address='::0'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    </graphics>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <audio id='1' type='none'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <video>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <model type='virtio' heads='1' primary='yes'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <alias name='video0'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    </video>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <watchdog model='itco' action='reset'>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <alias name='watchdog0'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    </watchdog>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <memballoon model='virtio'>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <stats period='10'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <alias name='balloon0'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    </memballoon>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <rng model='virtio'>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <backend model='random'>/dev/urandom</backend>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <alias name='rng0'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    </rng>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:  </devices>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <label>system_u:system_r:svirt_t:s0:c144,c208</label>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c144,c208</imagelabel>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:  </seclabel>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <label>+107:+107</label>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <imagelabel>+107:+107</imagelabel>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:  </seclabel>
Oct  8 06:16:34 np0005475493 nova_compute[262220]: </domain>
Oct  8 06:16:34 np0005475493 nova_compute[262220]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282#033[00m
Oct  8 06:16:34 np0005475493 nova_compute[262220]: 2025-10-08 10:16:34.803 2 WARNING nova.virt.libvirt.driver [req-a1aadc2d-190e-4eb8-8971-e56567d153df req-4eb46238-2d8e-488b-aaef-6342045264a0 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Detaching interface fa:16:3e:40:4d:66 failed because the device is no longer found on the guest.: nova.exception.DeviceNotFound: Device 'tap79d28498-fe' not found.#033[00m
Oct  8 06:16:34 np0005475493 nova_compute[262220]: 2025-10-08 10:16:34.804 2 DEBUG nova.virt.libvirt.vif [req-a1aadc2d-190e-4eb8-8971-e56567d153df req-4eb46238-2d8e-488b-aaef-6342045264a0 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-08T10:14:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1473882269',display_name='tempest-TestNetworkBasicOps-server-1473882269',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1473882269',id=6,image_ref='e5994bac-385d-4cfe-962e-386aa0559983',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLR6MyJBOMYp2wYGaL6G2mEbJcrZztqNeLLTecSXpMa+10UdVkrFMZxX+qiOC0ccFZIJleCiHIYc6JXNFg7vRmgJ0JtTU6W+KAYc8u1JxRO51IwGJ30ByO68fx1sTbOqEg==',key_name='tempest-TestNetworkBasicOps-1715641436',keypairs=<?>,launch_index=0,launched_at=2025-10-08T10:14:35Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='0edb2c85b88b4b168a4b2e8c5ed4a05c',ramdisk_id='',reservation_id='r-b2owsx2f',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e5994bac-385d-4cfe-962e-386aa0559983',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-139500885',owner_user_name='tempest-TestNetworkBasicOps-139500885-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-08T10:14:35Z,user_data=None,user_id='d50b19166a7245e390a6e29682191263',uuid=ea469a2e-bf09-495c-9b5e-02ad38d32d40,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "79d28498-fe9d-49dc-ad2c-bde432b239db", "address": "fa:16:3e:40:4d:66", "network": {"id": "0a28a475-c59d-4526-93af-b8af40052e5c", "bridge": "br-int", "label": "tempest-network-smoke--1745814783", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.23", "type": "fixed", "version": 4, "meta": {}, 
"floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0edb2c85b88b4b168a4b2e8c5ed4a05c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79d28498-fe", "ovs_interfaceid": "79d28498-fe9d-49dc-ad2c-bde432b239db", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  8 06:16:34 np0005475493 nova_compute[262220]: 2025-10-08 10:16:34.805 2 DEBUG nova.network.os_vif_util [req-a1aadc2d-190e-4eb8-8971-e56567d153df req-4eb46238-2d8e-488b-aaef-6342045264a0 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Converting VIF {"id": "79d28498-fe9d-49dc-ad2c-bde432b239db", "address": "fa:16:3e:40:4d:66", "network": {"id": "0a28a475-c59d-4526-93af-b8af40052e5c", "bridge": "br-int", "label": "tempest-network-smoke--1745814783", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0edb2c85b88b4b168a4b2e8c5ed4a05c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79d28498-fe", "ovs_interfaceid": "79d28498-fe9d-49dc-ad2c-bde432b239db", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  8 06:16:34 np0005475493 nova_compute[262220]: 2025-10-08 10:16:34.806 2 DEBUG nova.network.os_vif_util [req-a1aadc2d-190e-4eb8-8971-e56567d153df req-4eb46238-2d8e-488b-aaef-6342045264a0 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:40:4d:66,bridge_name='br-int',has_traffic_filtering=True,id=79d28498-fe9d-49dc-ad2c-bde432b239db,network=Network(0a28a475-c59d-4526-93af-b8af40052e5c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap79d28498-fe') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  8 06:16:34 np0005475493 nova_compute[262220]: 2025-10-08 10:16:34.806 2 DEBUG os_vif [req-a1aadc2d-190e-4eb8-8971-e56567d153df req-4eb46238-2d8e-488b-aaef-6342045264a0 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:40:4d:66,bridge_name='br-int',has_traffic_filtering=True,id=79d28498-fe9d-49dc-ad2c-bde432b239db,network=Network(0a28a475-c59d-4526-93af-b8af40052e5c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap79d28498-fe') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  8 06:16:34 np0005475493 nova_compute[262220]: 2025-10-08 10:16:34.807 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:16:34 np0005475493 nova_compute[262220]: 2025-10-08 10:16:34.807 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap79d28498-fe, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  8 06:16:34 np0005475493 nova_compute[262220]: 2025-10-08 10:16:34.808 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  8 06:16:34 np0005475493 nova_compute[262220]: 2025-10-08 10:16:34.810 2 INFO os_vif [req-a1aadc2d-190e-4eb8-8971-e56567d153df req-4eb46238-2d8e-488b-aaef-6342045264a0 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:40:4d:66,bridge_name='br-int',has_traffic_filtering=True,id=79d28498-fe9d-49dc-ad2c-bde432b239db,network=Network(0a28a475-c59d-4526-93af-b8af40052e5c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap79d28498-fe')#033[00m
Oct  8 06:16:34 np0005475493 nova_compute[262220]: 2025-10-08 10:16:34.810 2 DEBUG nova.virt.libvirt.guest [req-a1aadc2d-190e-4eb8-8971-e56567d153df req-4eb46238-2d8e-488b-aaef-6342045264a0 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  8 06:16:34 np0005475493 nova_compute[262220]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:  <nova:name>tempest-TestNetworkBasicOps-server-1473882269</nova:name>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:  <nova:creationTime>2025-10-08 10:16:34</nova:creationTime>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:  <nova:flavor name="m1.nano">
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <nova:memory>128</nova:memory>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <nova:disk>1</nova:disk>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <nova:swap>0</nova:swap>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <nova:ephemeral>0</nova:ephemeral>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <nova:vcpus>1</nova:vcpus>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:  </nova:flavor>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:  <nova:owner>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <nova:user uuid="d50b19166a7245e390a6e29682191263">tempest-TestNetworkBasicOps-139500885-project-member</nova:user>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <nova:project uuid="0edb2c85b88b4b168a4b2e8c5ed4a05c">tempest-TestNetworkBasicOps-139500885</nova:project>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:  </nova:owner>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:  <nova:root type="image" uuid="e5994bac-385d-4cfe-962e-386aa0559983"/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:  <nova:ports>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    <nova:port uuid="be4ec274-2a90-48e8-bd51-fd01f3c659da">
Oct  8 06:16:34 np0005475493 nova_compute[262220]:      <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:    </nova:port>
Oct  8 06:16:34 np0005475493 nova_compute[262220]:  </nova:ports>
Oct  8 06:16:34 np0005475493 nova_compute[262220]: </nova:instance>
Oct  8 06:16:34 np0005475493 nova_compute[262220]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Oct  8 06:16:34 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:16:34 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:16:34 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:16:34.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:16:35 np0005475493 podman[277859]: 2025-10-08 10:16:35.080348495 +0000 UTC m=+0.059102975 container create a26ada7dc500c7c321e0b13a970f0141d944b972a1ecac53eaea5b5b2e1f2436 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_jemison, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  8 06:16:35 np0005475493 systemd[1]: Started libpod-conmon-a26ada7dc500c7c321e0b13a970f0141d944b972a1ecac53eaea5b5b2e1f2436.scope.
Oct  8 06:16:35 np0005475493 podman[277859]: 2025-10-08 10:16:35.041628878 +0000 UTC m=+0.020383388 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 06:16:35 np0005475493 systemd[1]: Started libcrun container.
Oct  8 06:16:35 np0005475493 podman[277859]: 2025-10-08 10:16:35.159217896 +0000 UTC m=+0.137972396 container init a26ada7dc500c7c321e0b13a970f0141d944b972a1ecac53eaea5b5b2e1f2436 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_jemison, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  8 06:16:35 np0005475493 podman[277859]: 2025-10-08 10:16:35.166062556 +0000 UTC m=+0.144817036 container start a26ada7dc500c7c321e0b13a970f0141d944b972a1ecac53eaea5b5b2e1f2436 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_jemison, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  8 06:16:35 np0005475493 podman[277859]: 2025-10-08 10:16:35.16928001 +0000 UTC m=+0.148034490 container attach a26ada7dc500c7c321e0b13a970f0141d944b972a1ecac53eaea5b5b2e1f2436 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_jemison, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  8 06:16:35 np0005475493 distracted_jemison[277875]: 167 167
Oct  8 06:16:35 np0005475493 systemd[1]: libpod-a26ada7dc500c7c321e0b13a970f0141d944b972a1ecac53eaea5b5b2e1f2436.scope: Deactivated successfully.
Oct  8 06:16:35 np0005475493 podman[277859]: 2025-10-08 10:16:35.1714586 +0000 UTC m=+0.150213080 container died a26ada7dc500c7c321e0b13a970f0141d944b972a1ecac53eaea5b5b2e1f2436 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_jemison, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  8 06:16:35 np0005475493 systemd[1]: var-lib-containers-storage-overlay-e06d261bdd26ea7582cbc0dd05b494af78f2c33044d74c1b52c3545e9bf621e3-merged.mount: Deactivated successfully.
Oct  8 06:16:35 np0005475493 podman[277859]: 2025-10-08 10:16:35.206756037 +0000 UTC m=+0.185510507 container remove a26ada7dc500c7c321e0b13a970f0141d944b972a1ecac53eaea5b5b2e1f2436 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_jemison, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Oct  8 06:16:35 np0005475493 systemd[1]: libpod-conmon-a26ada7dc500c7c321e0b13a970f0141d944b972a1ecac53eaea5b5b2e1f2436.scope: Deactivated successfully.
Oct  8 06:16:35 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:16:35 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_48] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac940039d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:16:35 np0005475493 podman[277900]: 2025-10-08 10:16:35.400450576 +0000 UTC m=+0.071893656 container create 326d505055daf726e00ca3600d17aefe9ebeb5bc427d6aa4631d203aba271ff3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_lehmann, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Oct  8 06:16:35 np0005475493 podman[277900]: 2025-10-08 10:16:35.353934998 +0000 UTC m=+0.025378088 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 06:16:35 np0005475493 systemd[1]: Started libpod-conmon-326d505055daf726e00ca3600d17aefe9ebeb5bc427d6aa4631d203aba271ff3.scope.
Oct  8 06:16:35 np0005475493 systemd[1]: Started libcrun container.
Oct  8 06:16:35 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12ac354ade5a51f651f3fa6eaa8ac26157769b8221044f30b8aa0c2277f1b49c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  8 06:16:35 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12ac354ade5a51f651f3fa6eaa8ac26157769b8221044f30b8aa0c2277f1b49c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 06:16:35 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12ac354ade5a51f651f3fa6eaa8ac26157769b8221044f30b8aa0c2277f1b49c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 06:16:35 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12ac354ade5a51f651f3fa6eaa8ac26157769b8221044f30b8aa0c2277f1b49c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  8 06:16:35 np0005475493 podman[277900]: 2025-10-08 10:16:35.517833248 +0000 UTC m=+0.189276358 container init 326d505055daf726e00ca3600d17aefe9ebeb5bc427d6aa4631d203aba271ff3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_lehmann, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  8 06:16:35 np0005475493 podman[277900]: 2025-10-08 10:16:35.525888168 +0000 UTC m=+0.197331248 container start 326d505055daf726e00ca3600d17aefe9ebeb5bc427d6aa4631d203aba271ff3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_lehmann, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  8 06:16:35 np0005475493 podman[277900]: 2025-10-08 10:16:35.529062519 +0000 UTC m=+0.200505629 container attach 326d505055daf726e00ca3600d17aefe9ebeb5bc427d6aa4631d203aba271ff3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_lehmann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct  8 06:16:35 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:16:35 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:16:35 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:16:35.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:16:35 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:16:35] "GET /metrics HTTP/1.1" 200 48469 "" "Prometheus/2.51.0"
Oct  8 06:16:35 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:16:35] "GET /metrics HTTP/1.1" 200 48469 "" "Prometheus/2.51.0"
Oct  8 06:16:35 np0005475493 ovn_controller[153187]: 2025-10-08T10:16:35Z|00053|binding|INFO|Releasing lport f613d263-6ad2-4e23-84bc-b066c6b6b34a from this chassis (sb_readonly=0)
Oct  8 06:16:35 np0005475493 nova_compute[262220]: 2025-10-08 10:16:35.833 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:16:35 np0005475493 dreamy_lehmann[277916]: {
Oct  8 06:16:35 np0005475493 dreamy_lehmann[277916]:    "1": [
Oct  8 06:16:35 np0005475493 dreamy_lehmann[277916]:        {
Oct  8 06:16:35 np0005475493 dreamy_lehmann[277916]:            "devices": [
Oct  8 06:16:35 np0005475493 dreamy_lehmann[277916]:                "/dev/loop3"
Oct  8 06:16:35 np0005475493 dreamy_lehmann[277916]:            ],
Oct  8 06:16:35 np0005475493 dreamy_lehmann[277916]:            "lv_name": "ceph_lv0",
Oct  8 06:16:35 np0005475493 dreamy_lehmann[277916]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  8 06:16:35 np0005475493 dreamy_lehmann[277916]:            "lv_size": "21470642176",
Oct  8 06:16:35 np0005475493 dreamy_lehmann[277916]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=787292cc-8154-50c4-9e00-e9be3e817149,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=85fe3e7b-5e0f-4a19-934c-310215b2e933,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct  8 06:16:35 np0005475493 dreamy_lehmann[277916]:            "lv_uuid": "znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj",
Oct  8 06:16:35 np0005475493 dreamy_lehmann[277916]:            "name": "ceph_lv0",
Oct  8 06:16:35 np0005475493 dreamy_lehmann[277916]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  8 06:16:35 np0005475493 dreamy_lehmann[277916]:            "tags": {
Oct  8 06:16:35 np0005475493 dreamy_lehmann[277916]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  8 06:16:35 np0005475493 dreamy_lehmann[277916]:                "ceph.block_uuid": "znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj",
Oct  8 06:16:35 np0005475493 dreamy_lehmann[277916]:                "ceph.cephx_lockbox_secret": "",
Oct  8 06:16:35 np0005475493 dreamy_lehmann[277916]:                "ceph.cluster_fsid": "787292cc-8154-50c4-9e00-e9be3e817149",
Oct  8 06:16:35 np0005475493 dreamy_lehmann[277916]:                "ceph.cluster_name": "ceph",
Oct  8 06:16:35 np0005475493 dreamy_lehmann[277916]:                "ceph.crush_device_class": "",
Oct  8 06:16:35 np0005475493 dreamy_lehmann[277916]:                "ceph.encrypted": "0",
Oct  8 06:16:35 np0005475493 dreamy_lehmann[277916]:                "ceph.osd_fsid": "85fe3e7b-5e0f-4a19-934c-310215b2e933",
Oct  8 06:16:35 np0005475493 dreamy_lehmann[277916]:                "ceph.osd_id": "1",
Oct  8 06:16:35 np0005475493 dreamy_lehmann[277916]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  8 06:16:35 np0005475493 dreamy_lehmann[277916]:                "ceph.type": "block",
Oct  8 06:16:35 np0005475493 dreamy_lehmann[277916]:                "ceph.vdo": "0",
Oct  8 06:16:35 np0005475493 dreamy_lehmann[277916]:                "ceph.with_tpm": "0"
Oct  8 06:16:35 np0005475493 dreamy_lehmann[277916]:            },
Oct  8 06:16:35 np0005475493 dreamy_lehmann[277916]:            "type": "block",
Oct  8 06:16:35 np0005475493 dreamy_lehmann[277916]:            "vg_name": "ceph_vg0"
Oct  8 06:16:35 np0005475493 dreamy_lehmann[277916]:        }
Oct  8 06:16:35 np0005475493 dreamy_lehmann[277916]:    ]
Oct  8 06:16:35 np0005475493 dreamy_lehmann[277916]: }
Oct  8 06:16:35 np0005475493 systemd[1]: libpod-326d505055daf726e00ca3600d17aefe9ebeb5bc427d6aa4631d203aba271ff3.scope: Deactivated successfully.
Oct  8 06:16:35 np0005475493 podman[277900]: 2025-10-08 10:16:35.882408052 +0000 UTC m=+0.553851132 container died 326d505055daf726e00ca3600d17aefe9ebeb5bc427d6aa4631d203aba271ff3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_lehmann, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  8 06:16:35 np0005475493 systemd[1]: var-lib-containers-storage-overlay-12ac354ade5a51f651f3fa6eaa8ac26157769b8221044f30b8aa0c2277f1b49c-merged.mount: Deactivated successfully.
Oct  8 06:16:35 np0005475493 podman[277900]: 2025-10-08 10:16:35.930169441 +0000 UTC m=+0.601612521 container remove 326d505055daf726e00ca3600d17aefe9ebeb5bc427d6aa4631d203aba271ff3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_lehmann, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  8 06:16:35 np0005475493 systemd[1]: libpod-conmon-326d505055daf726e00ca3600d17aefe9ebeb5bc427d6aa4631d203aba271ff3.scope: Deactivated successfully.
Oct  8 06:16:36 np0005475493 nova_compute[262220]: 2025-10-08 10:16:36.033 2 INFO nova.network.neutron [None req-5740ff17-4652-44c3-8c4a-ffb46016cf22 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Port 79d28498-fe9d-49dc-ad2c-bde432b239db from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.#033[00m
Oct  8 06:16:36 np0005475493 nova_compute[262220]: 2025-10-08 10:16:36.034 2 DEBUG nova.network.neutron [None req-5740ff17-4652-44c3-8c4a-ffb46016cf22 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Updating instance_info_cache with network_info: [{"id": "be4ec274-2a90-48e8-bd51-fd01f3c659da", "address": "fa:16:3e:e6:b0:e0", "network": {"id": "834a886f-bb33-49ed-b47e-ef0308a38e89", "bridge": "br-int", "label": "tempest-network-smoke--1879322246", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.203", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0edb2c85b88b4b168a4b2e8c5ed4a05c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe4ec274-2a", "ovs_interfaceid": "be4ec274-2a90-48e8-bd51-fd01f3c659da", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  8 06:16:36 np0005475493 nova_compute[262220]: 2025-10-08 10:16:36.070 2 DEBUG oslo_concurrency.lockutils [None req-5740ff17-4652-44c3-8c4a-ffb46016cf22 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Releasing lock "refresh_cache-ea469a2e-bf09-495c-9b5e-02ad38d32d40" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  8 06:16:36 np0005475493 nova_compute[262220]: 2025-10-08 10:16:36.095 2 DEBUG oslo_concurrency.lockutils [None req-5740ff17-4652-44c3-8c4a-ffb46016cf22 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Lock "interface-ea469a2e-bf09-495c-9b5e-02ad38d32d40-79d28498-fe9d-49dc-ad2c-bde432b239db" "released" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: held 4.820s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  8 06:16:36 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v960: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 2.9 KiB/s wr, 28 op/s
Oct  8 06:16:36 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:16:36 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_45] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac6c0047e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:16:36 np0005475493 podman[278032]: 2025-10-08 10:16:36.530749819 +0000 UTC m=+0.048690460 container create 34fe3732289c69f49498934497ba552dc5fdaa9f81e5b3b24353850118066cb9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_lewin, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct  8 06:16:36 np0005475493 systemd[1]: Started libpod-conmon-34fe3732289c69f49498934497ba552dc5fdaa9f81e5b3b24353850118066cb9.scope.
Oct  8 06:16:36 np0005475493 systemd[1]: Started libcrun container.
Oct  8 06:16:36 np0005475493 podman[278032]: 2025-10-08 10:16:36.508294995 +0000 UTC m=+0.026235656 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 06:16:36 np0005475493 podman[278032]: 2025-10-08 10:16:36.613411461 +0000 UTC m=+0.131352132 container init 34fe3732289c69f49498934497ba552dc5fdaa9f81e5b3b24353850118066cb9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_lewin, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Oct  8 06:16:36 np0005475493 podman[278032]: 2025-10-08 10:16:36.621117229 +0000 UTC m=+0.139057870 container start 34fe3732289c69f49498934497ba552dc5fdaa9f81e5b3b24353850118066cb9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_lewin, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Oct  8 06:16:36 np0005475493 podman[278032]: 2025-10-08 10:16:36.624313032 +0000 UTC m=+0.142253693 container attach 34fe3732289c69f49498934497ba552dc5fdaa9f81e5b3b24353850118066cb9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_lewin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  8 06:16:36 np0005475493 peaceful_lewin[278049]: 167 167
Oct  8 06:16:36 np0005475493 systemd[1]: libpod-34fe3732289c69f49498934497ba552dc5fdaa9f81e5b3b24353850118066cb9.scope: Deactivated successfully.
Oct  8 06:16:36 np0005475493 podman[278032]: 2025-10-08 10:16:36.626449031 +0000 UTC m=+0.144389672 container died 34fe3732289c69f49498934497ba552dc5fdaa9f81e5b3b24353850118066cb9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_lewin, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  8 06:16:36 np0005475493 systemd[1]: var-lib-containers-storage-overlay-15f62404d77d12f04baf89dac6d54f39b48bfa0e8a75b2aabf18b33058699fb5-merged.mount: Deactivated successfully.
Oct  8 06:16:36 np0005475493 podman[278032]: 2025-10-08 10:16:36.668078503 +0000 UTC m=+0.186019144 container remove 34fe3732289c69f49498934497ba552dc5fdaa9f81e5b3b24353850118066cb9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_lewin, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid)
Oct  8 06:16:36 np0005475493 systemd[1]: libpod-conmon-34fe3732289c69f49498934497ba552dc5fdaa9f81e5b3b24353850118066cb9.scope: Deactivated successfully.
Oct  8 06:16:36 np0005475493 nova_compute[262220]: 2025-10-08 10:16:36.693 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:16:36 np0005475493 nova_compute[262220]: 2025-10-08 10:16:36.769 2 DEBUG nova.compute.manager [req-83553102-41f1-4c9b-b2d0-c5b376ba553b req-d1c88407-5c65-4555-90c2-4a1cee97cef0 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Received event network-changed-be4ec274-2a90-48e8-bd51-fd01f3c659da external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  8 06:16:36 np0005475493 nova_compute[262220]: 2025-10-08 10:16:36.769 2 DEBUG nova.compute.manager [req-83553102-41f1-4c9b-b2d0-c5b376ba553b req-d1c88407-5c65-4555-90c2-4a1cee97cef0 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Refreshing instance network info cache due to event network-changed-be4ec274-2a90-48e8-bd51-fd01f3c659da. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  8 06:16:36 np0005475493 nova_compute[262220]: 2025-10-08 10:16:36.769 2 DEBUG oslo_concurrency.lockutils [req-83553102-41f1-4c9b-b2d0-c5b376ba553b req-d1c88407-5c65-4555-90c2-4a1cee97cef0 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Acquiring lock "refresh_cache-ea469a2e-bf09-495c-9b5e-02ad38d32d40" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  8 06:16:36 np0005475493 nova_compute[262220]: 2025-10-08 10:16:36.769 2 DEBUG oslo_concurrency.lockutils [req-83553102-41f1-4c9b-b2d0-c5b376ba553b req-d1c88407-5c65-4555-90c2-4a1cee97cef0 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Acquired lock "refresh_cache-ea469a2e-bf09-495c-9b5e-02ad38d32d40" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  8 06:16:36 np0005475493 nova_compute[262220]: 2025-10-08 10:16:36.769 2 DEBUG nova.network.neutron [req-83553102-41f1-4c9b-b2d0-c5b376ba553b req-d1c88407-5c65-4555-90c2-4a1cee97cef0 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Refreshing network info cache for port be4ec274-2a90-48e8-bd51-fd01f3c659da _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  8 06:16:36 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:16:36 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac780031c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:16:36 np0005475493 podman[278073]: 2025-10-08 10:16:36.836801107 +0000 UTC m=+0.041472817 container create a0409f557dfd61ffda492937d279acfa5bf3c7a21fd612de155a528b3f759a0a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_mclaren, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Oct  8 06:16:36 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:16:36 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:16:36 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:16:36.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:16:36 np0005475493 systemd[1]: Started libpod-conmon-a0409f557dfd61ffda492937d279acfa5bf3c7a21fd612de155a528b3f759a0a.scope.
Oct  8 06:16:36 np0005475493 systemd[1]: Started libcrun container.
Oct  8 06:16:36 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca6d8df09b7ce16543a93f705c57ac8819de140be41e51546844e71bc226ea92/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  8 06:16:36 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca6d8df09b7ce16543a93f705c57ac8819de140be41e51546844e71bc226ea92/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 06:16:36 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca6d8df09b7ce16543a93f705c57ac8819de140be41e51546844e71bc226ea92/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 06:16:36 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca6d8df09b7ce16543a93f705c57ac8819de140be41e51546844e71bc226ea92/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  8 06:16:36 np0005475493 podman[278073]: 2025-10-08 10:16:36.821333439 +0000 UTC m=+0.026005179 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 06:16:37 np0005475493 nova_compute[262220]: 2025-10-08 10:16:37.015 2 DEBUG oslo_concurrency.lockutils [None req-27b10dcf-62fa-454f-b5a2-ce6e4b6c7959 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Acquiring lock "ea469a2e-bf09-495c-9b5e-02ad38d32d40" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  8 06:16:37 np0005475493 nova_compute[262220]: 2025-10-08 10:16:37.015 2 DEBUG oslo_concurrency.lockutils [None req-27b10dcf-62fa-454f-b5a2-ce6e4b6c7959 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Lock "ea469a2e-bf09-495c-9b5e-02ad38d32d40" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  8 06:16:37 np0005475493 nova_compute[262220]: 2025-10-08 10:16:37.015 2 DEBUG oslo_concurrency.lockutils [None req-27b10dcf-62fa-454f-b5a2-ce6e4b6c7959 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Acquiring lock "ea469a2e-bf09-495c-9b5e-02ad38d32d40-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  8 06:16:37 np0005475493 nova_compute[262220]: 2025-10-08 10:16:37.015 2 DEBUG oslo_concurrency.lockutils [None req-27b10dcf-62fa-454f-b5a2-ce6e4b6c7959 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Lock "ea469a2e-bf09-495c-9b5e-02ad38d32d40-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  8 06:16:37 np0005475493 nova_compute[262220]: 2025-10-08 10:16:37.016 2 DEBUG oslo_concurrency.lockutils [None req-27b10dcf-62fa-454f-b5a2-ce6e4b6c7959 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Lock "ea469a2e-bf09-495c-9b5e-02ad38d32d40-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  8 06:16:37 np0005475493 nova_compute[262220]: 2025-10-08 10:16:37.017 2 INFO nova.compute.manager [None req-27b10dcf-62fa-454f-b5a2-ce6e4b6c7959 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Terminating instance#033[00m
Oct  8 06:16:37 np0005475493 nova_compute[262220]: 2025-10-08 10:16:37.017 2 DEBUG nova.compute.manager [None req-27b10dcf-62fa-454f-b5a2-ce6e4b6c7959 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  8 06:16:37 np0005475493 podman[278073]: 2025-10-08 10:16:37.02093101 +0000 UTC m=+0.225602780 container init a0409f557dfd61ffda492937d279acfa5bf3c7a21fd612de155a528b3f759a0a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_mclaren, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True)
Oct  8 06:16:37 np0005475493 podman[278073]: 2025-10-08 10:16:37.02965225 +0000 UTC m=+0.234323960 container start a0409f557dfd61ffda492937d279acfa5bf3c7a21fd612de155a528b3f759a0a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_mclaren, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Oct  8 06:16:37 np0005475493 podman[278073]: 2025-10-08 10:16:37.03275379 +0000 UTC m=+0.237425600 container attach a0409f557dfd61ffda492937d279acfa5bf3c7a21fd612de155a528b3f759a0a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_mclaren, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2)
Oct  8 06:16:37 np0005475493 kernel: tapbe4ec274-2a (unregistering): left promiscuous mode
Oct  8 06:16:37 np0005475493 NetworkManager[44872]: <info>  [1759918597.0751] device (tapbe4ec274-2a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  8 06:16:37 np0005475493 ovn_controller[153187]: 2025-10-08T10:16:37Z|00054|binding|INFO|Releasing lport be4ec274-2a90-48e8-bd51-fd01f3c659da from this chassis (sb_readonly=0)
Oct  8 06:16:37 np0005475493 ovn_controller[153187]: 2025-10-08T10:16:37Z|00055|binding|INFO|Setting lport be4ec274-2a90-48e8-bd51-fd01f3c659da down in Southbound
Oct  8 06:16:37 np0005475493 ovn_controller[153187]: 2025-10-08T10:16:37Z|00056|binding|INFO|Removing iface tapbe4ec274-2a ovn-installed in OVS
Oct  8 06:16:37 np0005475493 nova_compute[262220]: 2025-10-08 10:16:37.091 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:16:37 np0005475493 nova_compute[262220]: 2025-10-08 10:16:37.119 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:16:37 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:16:37.142 163175 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e6:b0:e0 10.100.0.3'], port_security=['fa:16:3e:e6:b0:e0 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'ea469a2e-bf09-495c-9b5e-02ad38d32d40', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-834a886f-bb33-49ed-b47e-ef0308a38e89', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0edb2c85b88b4b168a4b2e8c5ed4a05c', 'neutron:revision_number': '4', 'neutron:security_group_ids': '13817d67-6af8-4060-9f0c-16a7fd8532c0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2eaf1a8f-1880-48d7-9974-4c1e9169efe5, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f191f105a60>], logical_port=be4ec274-2a90-48e8-bd51-fd01f3c659da) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f191f105a60>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  8 06:16:37 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:16:37.143 163175 INFO neutron.agent.ovn.metadata.agent [-] Port be4ec274-2a90-48e8-bd51-fd01f3c659da in datapath 834a886f-bb33-49ed-b47e-ef0308a38e89 unbound from our chassis#033[00m
Oct  8 06:16:37 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:16:37.144 163175 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 834a886f-bb33-49ed-b47e-ef0308a38e89, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  8 06:16:37 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:16:37.145 267781 DEBUG oslo.privsep.daemon [-] privsep: reply[cd2ce43d-b577-40d2-a800-61ed62442c85]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  8 06:16:37 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:16:37.146 163175 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-834a886f-bb33-49ed-b47e-ef0308a38e89 namespace which is not needed anymore#033[00m
Oct  8 06:16:37 np0005475493 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000006.scope: Deactivated successfully.
Oct  8 06:16:37 np0005475493 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000006.scope: Consumed 19.101s CPU time.
Oct  8 06:16:37 np0005475493 systemd-machined[216030]: Machine qemu-2-instance-00000006 terminated.
Oct  8 06:16:37 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:16:37.167Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:16:37 np0005475493 nova_compute[262220]: 2025-10-08 10:16:37.264 2 INFO nova.virt.libvirt.driver [-] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Instance destroyed successfully.#033[00m
Oct  8 06:16:37 np0005475493 nova_compute[262220]: 2025-10-08 10:16:37.266 2 DEBUG nova.objects.instance [None req-27b10dcf-62fa-454f-b5a2-ce6e4b6c7959 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Lazy-loading 'resources' on Instance uuid ea469a2e-bf09-495c-9b5e-02ad38d32d40 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  8 06:16:37 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:16:37 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_47] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca0009390 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:16:37 np0005475493 neutron-haproxy-ovnmeta-834a886f-bb33-49ed-b47e-ef0308a38e89[274443]: [NOTICE]   (274447) : haproxy version is 2.8.14-c23fe91
Oct  8 06:16:37 np0005475493 neutron-haproxy-ovnmeta-834a886f-bb33-49ed-b47e-ef0308a38e89[274443]: [NOTICE]   (274447) : path to executable is /usr/sbin/haproxy
Oct  8 06:16:37 np0005475493 neutron-haproxy-ovnmeta-834a886f-bb33-49ed-b47e-ef0308a38e89[274443]: [WARNING]  (274447) : Exiting Master process...
Oct  8 06:16:37 np0005475493 neutron-haproxy-ovnmeta-834a886f-bb33-49ed-b47e-ef0308a38e89[274443]: [ALERT]    (274447) : Current worker (274449) exited with code 143 (Terminated)
Oct  8 06:16:37 np0005475493 neutron-haproxy-ovnmeta-834a886f-bb33-49ed-b47e-ef0308a38e89[274443]: [WARNING]  (274447) : All workers exited. Exiting... (0)
Oct  8 06:16:37 np0005475493 systemd[1]: libpod-2c85508ebe5bbfbd488053b850f882989cf060afde04acce5a32fa0a4320dce9.scope: Deactivated successfully.
Oct  8 06:16:37 np0005475493 podman[278126]: 2025-10-08 10:16:37.290448881 +0000 UTC m=+0.052516692 container died 2c85508ebe5bbfbd488053b850f882989cf060afde04acce5a32fa0a4320dce9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-834a886f-bb33-49ed-b47e-ef0308a38e89, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251001)
Oct  8 06:16:37 np0005475493 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2c85508ebe5bbfbd488053b850f882989cf060afde04acce5a32fa0a4320dce9-userdata-shm.mount: Deactivated successfully.
Oct  8 06:16:37 np0005475493 systemd[1]: var-lib-containers-storage-overlay-841b76c2441b0eb7f658de0d9799efa6ab00baf820e9b70f7311256c5c904ae8-merged.mount: Deactivated successfully.
Oct  8 06:16:37 np0005475493 podman[278126]: 2025-10-08 10:16:37.328818898 +0000 UTC m=+0.090886699 container cleanup 2c85508ebe5bbfbd488053b850f882989cf060afde04acce5a32fa0a4320dce9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-834a886f-bb33-49ed-b47e-ef0308a38e89, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  8 06:16:37 np0005475493 nova_compute[262220]: 2025-10-08 10:16:37.340 2 DEBUG nova.virt.libvirt.vif [None req-27b10dcf-62fa-454f-b5a2-ce6e4b6c7959 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-08T10:14:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1473882269',display_name='tempest-TestNetworkBasicOps-server-1473882269',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1473882269',id=6,image_ref='e5994bac-385d-4cfe-962e-386aa0559983',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLR6MyJBOMYp2wYGaL6G2mEbJcrZztqNeLLTecSXpMa+10UdVkrFMZxX+qiOC0ccFZIJleCiHIYc6JXNFg7vRmgJ0JtTU6W+KAYc8u1JxRO51IwGJ30ByO68fx1sTbOqEg==',key_name='tempest-TestNetworkBasicOps-1715641436',keypairs=<?>,launch_index=0,launched_at=2025-10-08T10:14:35Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='0edb2c85b88b4b168a4b2e8c5ed4a05c',ramdisk_id='',reservation_id='r-b2owsx2f',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e5994bac-385d-4cfe-962e-386aa0559983',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-139500885',owner_user_name='tempest-TestNetworkBasicOps-139500885-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-08T10:14:35Z,user_data=None,user_id='d50b19166a7245e390a6e29682191263',uuid=ea469a2e-bf09-495c-9b5e-02ad38d32d40,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "be4ec274-2a90-48e8-bd51-fd01f3c659da", "address": "fa:16:3e:e6:b0:e0", "network": {"id": "834a886f-bb33-49ed-b47e-ef0308a38e89", "bridge": "br-int", "label": "tempest-network-smoke--1879322246", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.203", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0edb2c85b88b4b168a4b2e8c5ed4a05c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe4ec274-2a", "ovs_interfaceid": "be4ec274-2a90-48e8-bd51-fd01f3c659da", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  8 06:16:37 np0005475493 systemd[1]: libpod-conmon-2c85508ebe5bbfbd488053b850f882989cf060afde04acce5a32fa0a4320dce9.scope: Deactivated successfully.
Oct  8 06:16:37 np0005475493 nova_compute[262220]: 2025-10-08 10:16:37.342 2 DEBUG nova.network.os_vif_util [None req-27b10dcf-62fa-454f-b5a2-ce6e4b6c7959 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Converting VIF {"id": "be4ec274-2a90-48e8-bd51-fd01f3c659da", "address": "fa:16:3e:e6:b0:e0", "network": {"id": "834a886f-bb33-49ed-b47e-ef0308a38e89", "bridge": "br-int", "label": "tempest-network-smoke--1879322246", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.203", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0edb2c85b88b4b168a4b2e8c5ed4a05c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe4ec274-2a", "ovs_interfaceid": "be4ec274-2a90-48e8-bd51-fd01f3c659da", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  8 06:16:37 np0005475493 nova_compute[262220]: 2025-10-08 10:16:37.343 2 DEBUG nova.network.os_vif_util [None req-27b10dcf-62fa-454f-b5a2-ce6e4b6c7959 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:e6:b0:e0,bridge_name='br-int',has_traffic_filtering=True,id=be4ec274-2a90-48e8-bd51-fd01f3c659da,network=Network(834a886f-bb33-49ed-b47e-ef0308a38e89),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbe4ec274-2a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  8 06:16:37 np0005475493 nova_compute[262220]: 2025-10-08 10:16:37.343 2 DEBUG os_vif [None req-27b10dcf-62fa-454f-b5a2-ce6e4b6c7959 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:e6:b0:e0,bridge_name='br-int',has_traffic_filtering=True,id=be4ec274-2a90-48e8-bd51-fd01f3c659da,network=Network(834a886f-bb33-49ed-b47e-ef0308a38e89),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbe4ec274-2a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  8 06:16:37 np0005475493 nova_compute[262220]: 2025-10-08 10:16:37.345 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:16:37 np0005475493 nova_compute[262220]: 2025-10-08 10:16:37.345 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbe4ec274-2a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  8 06:16:37 np0005475493 nova_compute[262220]: 2025-10-08 10:16:37.347 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:16:37 np0005475493 nova_compute[262220]: 2025-10-08 10:16:37.349 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:16:37 np0005475493 nova_compute[262220]: 2025-10-08 10:16:37.351 2 INFO os_vif [None req-27b10dcf-62fa-454f-b5a2-ce6e4b6c7959 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:e6:b0:e0,bridge_name='br-int',has_traffic_filtering=True,id=be4ec274-2a90-48e8-bd51-fd01f3c659da,network=Network(834a886f-bb33-49ed-b47e-ef0308a38e89),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbe4ec274-2a')#033[00m
Oct  8 06:16:37 np0005475493 podman[278177]: 2025-10-08 10:16:37.402073157 +0000 UTC m=+0.049893608 container remove 2c85508ebe5bbfbd488053b850f882989cf060afde04acce5a32fa0a4320dce9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-834a886f-bb33-49ed-b47e-ef0308a38e89, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3)
Oct  8 06:16:37 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:16:37.410 267781 DEBUG oslo.privsep.daemon [-] privsep: reply[74326e86-6f5d-4c6d-92c2-fa9a3bae5279]: (4, ('Wed Oct  8 10:16:37 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-834a886f-bb33-49ed-b47e-ef0308a38e89 (2c85508ebe5bbfbd488053b850f882989cf060afde04acce5a32fa0a4320dce9)\n2c85508ebe5bbfbd488053b850f882989cf060afde04acce5a32fa0a4320dce9\nWed Oct  8 10:16:37 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-834a886f-bb33-49ed-b47e-ef0308a38e89 (2c85508ebe5bbfbd488053b850f882989cf060afde04acce5a32fa0a4320dce9)\n2c85508ebe5bbfbd488053b850f882989cf060afde04acce5a32fa0a4320dce9\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  8 06:16:37 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:16:37.412 267781 DEBUG oslo.privsep.daemon [-] privsep: reply[ca343fa8-cd8f-4685-8363-3fea3b738625]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  8 06:16:37 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:16:37.413 163175 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap834a886f-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  8 06:16:37 np0005475493 nova_compute[262220]: 2025-10-08 10:16:37.415 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:16:37 np0005475493 kernel: tap834a886f-b0: left promiscuous mode
Oct  8 06:16:37 np0005475493 nova_compute[262220]: 2025-10-08 10:16:37.418 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:16:37 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:16:37.425 267781 DEBUG oslo.privsep.daemon [-] privsep: reply[9c12be10-57de-4b57-83eb-193d3f36ec0a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  8 06:16:37 np0005475493 nova_compute[262220]: 2025-10-08 10:16:37.431 2 DEBUG nova.compute.manager [req-42590b41-e40f-42b7-9348-c288e35a8e89 req-ac73dbe6-1615-4f23-af57-a374f2b0a6ec 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Received event network-vif-unplugged-be4ec274-2a90-48e8-bd51-fd01f3c659da external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  8 06:16:37 np0005475493 nova_compute[262220]: 2025-10-08 10:16:37.433 2 DEBUG oslo_concurrency.lockutils [req-42590b41-e40f-42b7-9348-c288e35a8e89 req-ac73dbe6-1615-4f23-af57-a374f2b0a6ec 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Acquiring lock "ea469a2e-bf09-495c-9b5e-02ad38d32d40-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  8 06:16:37 np0005475493 nova_compute[262220]: 2025-10-08 10:16:37.434 2 DEBUG oslo_concurrency.lockutils [req-42590b41-e40f-42b7-9348-c288e35a8e89 req-ac73dbe6-1615-4f23-af57-a374f2b0a6ec 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Lock "ea469a2e-bf09-495c-9b5e-02ad38d32d40-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  8 06:16:37 np0005475493 nova_compute[262220]: 2025-10-08 10:16:37.434 2 DEBUG oslo_concurrency.lockutils [req-42590b41-e40f-42b7-9348-c288e35a8e89 req-ac73dbe6-1615-4f23-af57-a374f2b0a6ec 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Lock "ea469a2e-bf09-495c-9b5e-02ad38d32d40-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  8 06:16:37 np0005475493 nova_compute[262220]: 2025-10-08 10:16:37.435 2 DEBUG nova.compute.manager [req-42590b41-e40f-42b7-9348-c288e35a8e89 req-ac73dbe6-1615-4f23-af57-a374f2b0a6ec 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] No waiting events found dispatching network-vif-unplugged-be4ec274-2a90-48e8-bd51-fd01f3c659da pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  8 06:16:37 np0005475493 nova_compute[262220]: 2025-10-08 10:16:37.436 2 DEBUG nova.compute.manager [req-42590b41-e40f-42b7-9348-c288e35a8e89 req-ac73dbe6-1615-4f23-af57-a374f2b0a6ec 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Received event network-vif-unplugged-be4ec274-2a90-48e8-bd51-fd01f3c659da for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  8 06:16:37 np0005475493 nova_compute[262220]: 2025-10-08 10:16:37.440 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:16:37 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:16:37.462 267781 DEBUG oslo.privsep.daemon [-] privsep: reply[b6f48bd9-a239-4d2c-b4ba-ce91ab509b3d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  8 06:16:37 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:16:37.463 267781 DEBUG oslo.privsep.daemon [-] privsep: reply[a5b00854-daf8-4c26-9bed-13fcb4cd486d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  8 06:16:37 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:16:37.480 267781 DEBUG oslo.privsep.daemon [-] privsep: reply[5ecbb59a-24a4-4ba6-bae6-f675a577afb0]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 443280, 'reachable_time': 17227, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 278234, 'error': None, 'target': 'ovnmeta-834a886f-bb33-49ed-b47e-ef0308a38e89', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  8 06:16:37 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:16:37.483 163290 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-834a886f-bb33-49ed-b47e-ef0308a38e89 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  8 06:16:37 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:16:37.483 163290 DEBUG oslo.privsep.daemon [-] privsep: reply[e65d050a-14b6-4cdb-baea-23076ec0532b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  8 06:16:37 np0005475493 systemd[1]: run-netns-ovnmeta\x2d834a886f\x2dbb33\x2d49ed\x2db47e\x2def0308a38e89.mount: Deactivated successfully.
Oct  8 06:16:37 np0005475493 lvm[278260]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct  8 06:16:37 np0005475493 lvm[278260]: VG ceph_vg0 finished
Oct  8 06:16:37 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:16:37 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:16:37 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:16:37.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:16:37 np0005475493 keen_mclaren[278090]: {}
Oct  8 06:16:37 np0005475493 systemd[1]: libpod-a0409f557dfd61ffda492937d279acfa5bf3c7a21fd612de155a528b3f759a0a.scope: Deactivated successfully.
Oct  8 06:16:37 np0005475493 systemd[1]: libpod-a0409f557dfd61ffda492937d279acfa5bf3c7a21fd612de155a528b3f759a0a.scope: Consumed 1.206s CPU time.
Oct  8 06:16:37 np0005475493 podman[278073]: 2025-10-08 10:16:37.745574373 +0000 UTC m=+0.950246093 container died a0409f557dfd61ffda492937d279acfa5bf3c7a21fd612de155a528b3f759a0a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_mclaren, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct  8 06:16:37 np0005475493 systemd[1]: var-lib-containers-storage-overlay-ca6d8df09b7ce16543a93f705c57ac8819de140be41e51546844e71bc226ea92-merged.mount: Deactivated successfully.
Oct  8 06:16:37 np0005475493 podman[278073]: 2025-10-08 10:16:37.797199016 +0000 UTC m=+1.001870746 container remove a0409f557dfd61ffda492937d279acfa5bf3c7a21fd612de155a528b3f759a0a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_mclaren, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct  8 06:16:37 np0005475493 systemd[1]: libpod-conmon-a0409f557dfd61ffda492937d279acfa5bf3c7a21fd612de155a528b3f759a0a.scope: Deactivated successfully.
Oct  8 06:16:37 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  8 06:16:37 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:16:37 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  8 06:16:37 np0005475493 nova_compute[262220]: 2025-10-08 10:16:37.862 2 INFO nova.virt.libvirt.driver [None req-27b10dcf-62fa-454f-b5a2-ce6e4b6c7959 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Deleting instance files /var/lib/nova/instances/ea469a2e-bf09-495c-9b5e-02ad38d32d40_del#033[00m
Oct  8 06:16:37 np0005475493 nova_compute[262220]: 2025-10-08 10:16:37.864 2 INFO nova.virt.libvirt.driver [None req-27b10dcf-62fa-454f-b5a2-ce6e4b6c7959 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Deletion of /var/lib/nova/instances/ea469a2e-bf09-495c-9b5e-02ad38d32d40_del complete#033[00m
Oct  8 06:16:37 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:16:37 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:16:37 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 06:16:38 np0005475493 nova_compute[262220]: 2025-10-08 10:16:38.049 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:16:38 np0005475493 nova_compute[262220]: 2025-10-08 10:16:38.103 2 INFO nova.compute.manager [None req-27b10dcf-62fa-454f-b5a2-ce6e4b6c7959 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Took 1.08 seconds to destroy the instance on the hypervisor.#033[00m
Oct  8 06:16:38 np0005475493 nova_compute[262220]: 2025-10-08 10:16:38.103 2 DEBUG oslo.service.loopingcall [None req-27b10dcf-62fa-454f-b5a2-ce6e4b6c7959 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  8 06:16:38 np0005475493 nova_compute[262220]: 2025-10-08 10:16:38.104 2 DEBUG nova.compute.manager [-] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  8 06:16:38 np0005475493 nova_compute[262220]: 2025-10-08 10:16:38.104 2 DEBUG nova.network.neutron [-] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  8 06:16:38 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v961: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 2.9 KiB/s wr, 28 op/s
Oct  8 06:16:38 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:16:38 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_48] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac940039d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:16:38 np0005475493 nova_compute[262220]: 2025-10-08 10:16:38.466 2 DEBUG nova.network.neutron [req-83553102-41f1-4c9b-b2d0-c5b376ba553b req-d1c88407-5c65-4555-90c2-4a1cee97cef0 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Updated VIF entry in instance network info cache for port be4ec274-2a90-48e8-bd51-fd01f3c659da. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  8 06:16:38 np0005475493 nova_compute[262220]: 2025-10-08 10:16:38.466 2 DEBUG nova.network.neutron [req-83553102-41f1-4c9b-b2d0-c5b376ba553b req-d1c88407-5c65-4555-90c2-4a1cee97cef0 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Updating instance_info_cache with network_info: [{"id": "be4ec274-2a90-48e8-bd51-fd01f3c659da", "address": "fa:16:3e:e6:b0:e0", "network": {"id": "834a886f-bb33-49ed-b47e-ef0308a38e89", "bridge": "br-int", "label": "tempest-network-smoke--1879322246", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0edb2c85b88b4b168a4b2e8c5ed4a05c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe4ec274-2a", "ovs_interfaceid": "be4ec274-2a90-48e8-bd51-fd01f3c659da", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  8 06:16:38 np0005475493 nova_compute[262220]: 2025-10-08 10:16:38.610 2 DEBUG oslo_concurrency.lockutils [req-83553102-41f1-4c9b-b2d0-c5b376ba553b req-d1c88407-5c65-4555-90c2-4a1cee97cef0 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Releasing lock "refresh_cache-ea469a2e-bf09-495c-9b5e-02ad38d32d40" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  8 06:16:38 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:16:38 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_45] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac6c004800 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:16:38 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:16:38 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:16:38 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:16:38 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:16:38 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:16:38.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:16:38 np0005475493 nova_compute[262220]: 2025-10-08 10:16:38.969 2 DEBUG nova.network.neutron [-] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  8 06:16:38 np0005475493 nova_compute[262220]: 2025-10-08 10:16:38.972 2 DEBUG nova.compute.manager [req-7167e09e-8b78-4752-8e62-dc358ce87d6b req-0b6eccab-71fe-405f-b78a-a804c8c8e9d8 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Received event network-vif-deleted-be4ec274-2a90-48e8-bd51-fd01f3c659da external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  8 06:16:38 np0005475493 nova_compute[262220]: 2025-10-08 10:16:38.973 2 INFO nova.compute.manager [req-7167e09e-8b78-4752-8e62-dc358ce87d6b req-0b6eccab-71fe-405f-b78a-a804c8c8e9d8 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Neutron deleted interface be4ec274-2a90-48e8-bd51-fd01f3c659da; detaching it from the instance and deleting it from the info cache#033[00m
Oct  8 06:16:38 np0005475493 nova_compute[262220]: 2025-10-08 10:16:38.973 2 DEBUG nova.network.neutron [req-7167e09e-8b78-4752-8e62-dc358ce87d6b req-0b6eccab-71fe-405f-b78a-a804c8c8e9d8 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  8 06:16:39 np0005475493 nova_compute[262220]: 2025-10-08 10:16:39.129 2 INFO nova.compute.manager [-] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Took 1.02 seconds to deallocate network for instance.#033[00m
Oct  8 06:16:39 np0005475493 nova_compute[262220]: 2025-10-08 10:16:39.134 2 DEBUG nova.compute.manager [req-7167e09e-8b78-4752-8e62-dc358ce87d6b req-0b6eccab-71fe-405f-b78a-a804c8c8e9d8 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Detach interface failed, port_id=be4ec274-2a90-48e8-bd51-fd01f3c659da, reason: Instance ea469a2e-bf09-495c-9b5e-02ad38d32d40 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Oct  8 06:16:39 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:16:39 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:16:39 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac780031c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:16:39 np0005475493 nova_compute[262220]: 2025-10-08 10:16:39.278 2 DEBUG oslo_concurrency.lockutils [None req-27b10dcf-62fa-454f-b5a2-ce6e4b6c7959 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  8 06:16:39 np0005475493 nova_compute[262220]: 2025-10-08 10:16:39.278 2 DEBUG oslo_concurrency.lockutils [None req-27b10dcf-62fa-454f-b5a2-ce6e4b6c7959 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  8 06:16:39 np0005475493 nova_compute[262220]: 2025-10-08 10:16:39.325 2 DEBUG oslo_concurrency.processutils [None req-27b10dcf-62fa-454f-b5a2-ce6e4b6c7959 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  8 06:16:39 np0005475493 nova_compute[262220]: 2025-10-08 10:16:39.561 2 DEBUG nova.compute.manager [req-c997eee2-2b9d-4b0b-bd3a-35fc40bff629 req-394a24e4-6831-4fbc-ac37-ac8ddf7993f4 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Received event network-vif-plugged-be4ec274-2a90-48e8-bd51-fd01f3c659da external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  8 06:16:39 np0005475493 nova_compute[262220]: 2025-10-08 10:16:39.562 2 DEBUG oslo_concurrency.lockutils [req-c997eee2-2b9d-4b0b-bd3a-35fc40bff629 req-394a24e4-6831-4fbc-ac37-ac8ddf7993f4 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Acquiring lock "ea469a2e-bf09-495c-9b5e-02ad38d32d40-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  8 06:16:39 np0005475493 nova_compute[262220]: 2025-10-08 10:16:39.562 2 DEBUG oslo_concurrency.lockutils [req-c997eee2-2b9d-4b0b-bd3a-35fc40bff629 req-394a24e4-6831-4fbc-ac37-ac8ddf7993f4 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Lock "ea469a2e-bf09-495c-9b5e-02ad38d32d40-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  8 06:16:39 np0005475493 nova_compute[262220]: 2025-10-08 10:16:39.562 2 DEBUG oslo_concurrency.lockutils [req-c997eee2-2b9d-4b0b-bd3a-35fc40bff629 req-394a24e4-6831-4fbc-ac37-ac8ddf7993f4 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Lock "ea469a2e-bf09-495c-9b5e-02ad38d32d40-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  8 06:16:39 np0005475493 nova_compute[262220]: 2025-10-08 10:16:39.563 2 DEBUG nova.compute.manager [req-c997eee2-2b9d-4b0b-bd3a-35fc40bff629 req-394a24e4-6831-4fbc-ac37-ac8ddf7993f4 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] No waiting events found dispatching network-vif-plugged-be4ec274-2a90-48e8-bd51-fd01f3c659da pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  8 06:16:39 np0005475493 nova_compute[262220]: 2025-10-08 10:16:39.563 2 WARNING nova.compute.manager [req-c997eee2-2b9d-4b0b-bd3a-35fc40bff629 req-394a24e4-6831-4fbc-ac37-ac8ddf7993f4 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Received unexpected event network-vif-plugged-be4ec274-2a90-48e8-bd51-fd01f3c659da for instance with vm_state deleted and task_state None.#033[00m
Oct  8 06:16:39 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:16:39 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:16:39 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:16:39.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:16:39 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct  8 06:16:39 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2575423298' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  8 06:16:39 np0005475493 nova_compute[262220]: 2025-10-08 10:16:39.768 2 DEBUG oslo_concurrency.processutils [None req-27b10dcf-62fa-454f-b5a2-ce6e4b6c7959 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  8 06:16:39 np0005475493 nova_compute[262220]: 2025-10-08 10:16:39.774 2 DEBUG nova.compute.provider_tree [None req-27b10dcf-62fa-454f-b5a2-ce6e4b6c7959 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Inventory has not changed in ProviderTree for provider: 62e4b021-d3ae-43f9-883d-805e2c7d21a2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  8 06:16:39 np0005475493 nova_compute[262220]: 2025-10-08 10:16:39.802 2 DEBUG nova.scheduler.client.report [None req-27b10dcf-62fa-454f-b5a2-ce6e4b6c7959 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Inventory has not changed for provider 62e4b021-d3ae-43f9-883d-805e2c7d21a2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  8 06:16:39 np0005475493 nova_compute[262220]: 2025-10-08 10:16:39.827 2 DEBUG oslo_concurrency.lockutils [None req-27b10dcf-62fa-454f-b5a2-ce6e4b6c7959 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.548s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  8 06:16:39 np0005475493 nova_compute[262220]: 2025-10-08 10:16:39.856 2 INFO nova.scheduler.client.report [None req-27b10dcf-62fa-454f-b5a2-ce6e4b6c7959 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Deleted allocations for instance ea469a2e-bf09-495c-9b5e-02ad38d32d40#033[00m
Oct  8 06:16:40 np0005475493 nova_compute[262220]: 2025-10-08 10:16:40.064 2 DEBUG oslo_concurrency.lockutils [None req-27b10dcf-62fa-454f-b5a2-ce6e4b6c7959 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Lock "ea469a2e-bf09-495c-9b5e-02ad38d32d40" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.049s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  8 06:16:40 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v962: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 4.1 KiB/s wr, 57 op/s
Oct  8 06:16:40 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:16:40 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_47] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca0009390 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:16:40 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:16:40 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_48] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac940039d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:16:40 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:16:40 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 06:16:40 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:16:40.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 06:16:41 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:16:41 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_45] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac6c004820 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:16:41 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:16:41 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:16:41 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:16:41.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:16:42 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v963: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.6 KiB/s wr, 28 op/s
Oct  8 06:16:42 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:16:42 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac780031c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:16:42 np0005475493 nova_compute[262220]: 2025-10-08 10:16:42.381 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:16:42 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:16:42 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_47] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca0009390 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:16:42 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:16:42 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:16:42 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:16:42.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:16:43 np0005475493 nova_compute[262220]: 2025-10-08 10:16:43.051 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:16:43 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:16:43 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_48] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac940039d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:16:43 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:16:43 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:16:43 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:16:43.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:16:44 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:16:44 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v964: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.6 KiB/s wr, 28 op/s
Oct  8 06:16:44 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:16:44 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_45] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac6c004840 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:16:44 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:16:44 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac780031c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:16:44 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:16:44 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:16:44 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:16:44.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:16:44 np0005475493 nova_compute[262220]: 2025-10-08 10:16:44.886 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:16:44 np0005475493 nova_compute[262220]: 2025-10-08 10:16:44.887 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:16:45 np0005475493 nova_compute[262220]: 2025-10-08 10:16:45.211 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:16:45 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:16:45 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faca0009390 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  8 06:16:45 np0005475493 nova_compute[262220]: 2025-10-08 10:16:45.327 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:16:45 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:16:45 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:16:45 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:16:45.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:16:45 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:16:45] "GET /metrics HTTP/1.1" 200 48469 "" "Prometheus/2.51.0"
Oct  8 06:16:45 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:16:45] "GET /metrics HTTP/1.1" 200 48469 "" "Prometheus/2.51.0"
Oct  8 06:16:45 np0005475493 nova_compute[262220]: 2025-10-08 10:16:45.881 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:16:45 np0005475493 nova_compute[262220]: 2025-10-08 10:16:45.946 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:16:45 np0005475493 nova_compute[262220]: 2025-10-08 10:16:45.946 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:16:45 np0005475493 nova_compute[262220]: 2025-10-08 10:16:45.947 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:16:45 np0005475493 podman[278331]: 2025-10-08 10:16:45.956315664 +0000 UTC m=+0.110409528 container health_status 750e81e9592502f2fa35d047c8ae9bd75eb38ed415148db558454f5281397e7f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  8 06:16:45 np0005475493 nova_compute[262220]: 2025-10-08 10:16:45.977 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  8 06:16:45 np0005475493 nova_compute[262220]: 2025-10-08 10:16:45.977 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  8 06:16:45 np0005475493 nova_compute[262220]: 2025-10-08 10:16:45.977 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  8 06:16:45 np0005475493 nova_compute[262220]: 2025-10-08 10:16:45.977 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  8 06:16:45 np0005475493 nova_compute[262220]: 2025-10-08 10:16:45.977 2 DEBUG oslo_concurrency.processutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  8 06:16:46 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v965: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Oct  8 06:16:46 np0005475493 kernel: ganesha.nfsd[277237]: segfault at 50 ip 00007fad51cb432e sp 00007fad1bffe210 error 4 in libntirpc.so.5.8[7fad51c99000+2c000] likely on CPU 5 (core 0, socket 5)
Oct  8 06:16:46 np0005475493 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Oct  8 06:16:46 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[265037]: 08/10/2025 10:16:46 : epoch 68e637d7 : compute-0 : ganesha.nfsd-2[svc_48] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac940039d0 fd 48 proxy ignored for local
Oct  8 06:16:46 np0005475493 systemd[1]: Started Process Core Dump (PID 278379/UID 0).
Oct  8 06:16:46 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct  8 06:16:46 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2115030173' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  8 06:16:46 np0005475493 nova_compute[262220]: 2025-10-08 10:16:46.440 2 DEBUG oslo_concurrency.processutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  8 06:16:46 np0005475493 nova_compute[262220]: 2025-10-08 10:16:46.604 2 WARNING nova.virt.libvirt.driver [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  8 06:16:46 np0005475493 nova_compute[262220]: 2025-10-08 10:16:46.613 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4529MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  8 06:16:46 np0005475493 nova_compute[262220]: 2025-10-08 10:16:46.614 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  8 06:16:46 np0005475493 nova_compute[262220]: 2025-10-08 10:16:46.614 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  8 06:16:46 np0005475493 nova_compute[262220]: 2025-10-08 10:16:46.848 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  8 06:16:46 np0005475493 nova_compute[262220]: 2025-10-08 10:16:46.848 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  8 06:16:46 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:16:46 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:16:46 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:16:46.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:16:46 np0005475493 nova_compute[262220]: 2025-10-08 10:16:46.873 2 DEBUG oslo_concurrency.processutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  8 06:16:47 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:16:47.169Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:16:47 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct  8 06:16:47 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1774158923' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  8 06:16:47 np0005475493 nova_compute[262220]: 2025-10-08 10:16:47.302 2 DEBUG oslo_concurrency.processutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  8 06:16:47 np0005475493 nova_compute[262220]: 2025-10-08 10:16:47.308 2 DEBUG nova.compute.provider_tree [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Inventory has not changed in ProviderTree for provider: 62e4b021-d3ae-43f9-883d-805e2c7d21a2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  8 06:16:47 np0005475493 nova_compute[262220]: 2025-10-08 10:16:47.383 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:16:47 np0005475493 nova_compute[262220]: 2025-10-08 10:16:47.404 2 DEBUG nova.scheduler.client.report [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Inventory has not changed for provider 62e4b021-d3ae-43f9-883d-805e2c7d21a2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  8 06:16:47 np0005475493 nova_compute[262220]: 2025-10-08 10:16:47.550 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  8 06:16:47 np0005475493 nova_compute[262220]: 2025-10-08 10:16:47.551 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.937s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  8 06:16:47 np0005475493 systemd-coredump[278380]: Process 265041 (ganesha.nfsd) of user 0 dumped core.#012#012Stack trace of thread 90:#012#0  0x00007fad51cb432e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)#012ELF object binary architecture: AMD x86-64
Oct  8 06:16:47 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:16:47 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:16:47 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:16:47.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:16:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] Optimize plan auto_2025-10-08_10:16:47
Oct  8 06:16:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  8 06:16:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] do_upmap
Oct  8 06:16:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] pools ['.nfs', 'volumes', '.rgw.root', 'images', 'default.rgw.meta', 'default.rgw.control', '.mgr', 'cephfs.cephfs.data', 'vms', 'backups', 'default.rgw.log', 'cephfs.cephfs.meta']
Oct  8 06:16:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] prepared 0/10 upmap changes
Oct  8 06:16:47 np0005475493 systemd[1]: systemd-coredump@10-278379-0.service: Deactivated successfully.
Oct  8 06:16:47 np0005475493 systemd[1]: systemd-coredump@10-278379-0.service: Consumed 1.073s CPU time.
Oct  8 06:16:47 np0005475493 podman[278411]: 2025-10-08 10:16:47.794424047 +0000 UTC m=+0.041476207 container died ff96eb6387ce0570d856673e2f9d2de072dc140ca0ed9b744a95fbf6029b655c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  8 06:16:47 np0005475493 systemd[1]: var-lib-containers-storage-overlay-7680890908c887a4af3f6279a54cd446656bf9035c0c45bf7374d576d707e16e-merged.mount: Deactivated successfully.
Oct  8 06:16:47 np0005475493 podman[278411]: 2025-10-08 10:16:47.845912046 +0000 UTC m=+0.092964186 container remove ff96eb6387ce0570d856673e2f9d2de072dc140ca0ed9b744a95fbf6029b655c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  8 06:16:47 np0005475493 systemd[1]: ceph-787292cc-8154-50c4-9e00-e9be3e817149@nfs.cephfs.2.0.compute-0.uynkmx.service: Main process exited, code=exited, status=139/n/a
Oct  8 06:16:47 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  8 06:16:47 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  8 06:16:47 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 06:16:47 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 06:16:48 np0005475493 nova_compute[262220]: 2025-10-08 10:16:48.054 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:16:48 np0005475493 systemd[1]: ceph-787292cc-8154-50c4-9e00-e9be3e817149@nfs.cephfs.2.0.compute-0.uynkmx.service: Failed with result 'exit-code'.
Oct  8 06:16:48 np0005475493 systemd[1]: ceph-787292cc-8154-50c4-9e00-e9be3e817149@nfs.cephfs.2.0.compute-0.uynkmx.service: Consumed 2.252s CPU time.
Oct  8 06:16:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] _maybe_adjust
Oct  8 06:16:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:16:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  8 06:16:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:16:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  8 06:16:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:16:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 06:16:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:16:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 06:16:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:16:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Oct  8 06:16:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:16:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  8 06:16:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:16:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 06:16:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:16:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct  8 06:16:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:16:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  8 06:16:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:16:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 06:16:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:16:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  8 06:16:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:16:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  8 06:16:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  8 06:16:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  8 06:16:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  8 06:16:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  8 06:16:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  8 06:16:48 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 06:16:48 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 06:16:48 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 06:16:48 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 06:16:48 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v966: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Oct  8 06:16:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  8 06:16:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  8 06:16:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  8 06:16:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  8 06:16:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  8 06:16:48 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:16:48 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:16:48 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:16:48.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:16:49 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:16:49 np0005475493 ceph-mon[73572]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #60. Immutable memtables: 0.
Oct  8 06:16:49 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:16:49.202163) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  8 06:16:49 np0005475493 ceph-mon[73572]: rocksdb: [db/flush_job.cc:856] [default] [JOB 31] Flushing memtable with next log file: 60
Oct  8 06:16:49 np0005475493 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759918609202249, "job": 31, "event": "flush_started", "num_memtables": 1, "num_entries": 1465, "num_deletes": 257, "total_data_size": 2777328, "memory_usage": 2828192, "flush_reason": "Manual Compaction"}
Oct  8 06:16:49 np0005475493 ceph-mon[73572]: rocksdb: [db/flush_job.cc:885] [default] [JOB 31] Level-0 flush table #61: started
Oct  8 06:16:49 np0005475493 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759918609217431, "cf_name": "default", "job": 31, "event": "table_file_creation", "file_number": 61, "file_size": 2696170, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 26909, "largest_seqno": 28373, "table_properties": {"data_size": 2689326, "index_size": 3915, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1861, "raw_key_size": 14098, "raw_average_key_size": 19, "raw_value_size": 2675668, "raw_average_value_size": 3695, "num_data_blocks": 172, "num_entries": 724, "num_filter_entries": 724, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759918475, "oldest_key_time": 1759918475, "file_creation_time": 1759918609, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fe81d9b-468a-4413-adf1-4e4bd83383d4", "db_session_id": "KN4HYS7VUCE6V85JIQOU", "orig_file_number": 61, "seqno_to_time_mapping": "N/A"}}
Oct  8 06:16:49 np0005475493 ceph-mon[73572]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 31] Flush lasted 15296 microseconds, and 6492 cpu microseconds.
Oct  8 06:16:49 np0005475493 ceph-mon[73572]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  8 06:16:49 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:16:49.217471) [db/flush_job.cc:967] [default] [JOB 31] Level-0 flush table #61: 2696170 bytes OK
Oct  8 06:16:49 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:16:49.217490) [db/memtable_list.cc:519] [default] Level-0 commit table #61 started
Oct  8 06:16:49 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:16:49.218682) [db/memtable_list.cc:722] [default] Level-0 commit table #61: memtable #1 done
Oct  8 06:16:49 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:16:49.218700) EVENT_LOG_v1 {"time_micros": 1759918609218694, "job": 31, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  8 06:16:49 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:16:49.218718) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  8 06:16:49 np0005475493 ceph-mon[73572]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 31] Try to delete WAL files size 2771028, prev total WAL file size 2771028, number of live WAL files 2.
Oct  8 06:16:49 np0005475493 ceph-mon[73572]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000057.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  8 06:16:49 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:16:49.219711) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00353035' seq:72057594037927935, type:22 .. '6C6F676D00373538' seq:0, type:0; will stop at (end)
Oct  8 06:16:49 np0005475493 ceph-mon[73572]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 32] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  8 06:16:49 np0005475493 ceph-mon[73572]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 31 Base level 0, inputs: [61(2632KB)], [59(13MB)]
Oct  8 06:16:49 np0005475493 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759918609219814, "job": 32, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [61], "files_L6": [59], "score": -1, "input_data_size": 17013496, "oldest_snapshot_seqno": -1}
Oct  8 06:16:49 np0005475493 ceph-mon[73572]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 32] Generated table #62: 6008 keys, 16864503 bytes, temperature: kUnknown
Oct  8 06:16:49 np0005475493 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759918609327166, "cf_name": "default", "job": 32, "event": "table_file_creation", "file_number": 62, "file_size": 16864503, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 16821148, "index_size": 27245, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15045, "raw_key_size": 153034, "raw_average_key_size": 25, "raw_value_size": 16709759, "raw_average_value_size": 2781, "num_data_blocks": 1115, "num_entries": 6008, "num_filter_entries": 6008, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759916581, "oldest_key_time": 0, "file_creation_time": 1759918609, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fe81d9b-468a-4413-adf1-4e4bd83383d4", "db_session_id": "KN4HYS7VUCE6V85JIQOU", "orig_file_number": 62, "seqno_to_time_mapping": "N/A"}}
Oct  8 06:16:49 np0005475493 ceph-mon[73572]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  8 06:16:49 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:16:49.327556) [db/compaction/compaction_job.cc:1663] [default] [JOB 32] Compacted 1@0 + 1@6 files to L6 => 16864503 bytes
Oct  8 06:16:49 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:16:49.329329) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 158.4 rd, 157.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.6, 13.7 +0.0 blob) out(16.1 +0.0 blob), read-write-amplify(12.6) write-amplify(6.3) OK, records in: 6540, records dropped: 532 output_compression: NoCompression
Oct  8 06:16:49 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:16:49.329406) EVENT_LOG_v1 {"time_micros": 1759918609329378, "job": 32, "event": "compaction_finished", "compaction_time_micros": 107431, "compaction_time_cpu_micros": 35877, "output_level": 6, "num_output_files": 1, "total_output_size": 16864503, "num_input_records": 6540, "num_output_records": 6008, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  8 06:16:49 np0005475493 ceph-mon[73572]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000061.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  8 06:16:49 np0005475493 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759918609331083, "job": 32, "event": "table_file_deletion", "file_number": 61}
Oct  8 06:16:49 np0005475493 ceph-mon[73572]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000059.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  8 06:16:49 np0005475493 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759918609337582, "job": 32, "event": "table_file_deletion", "file_number": 59}
Oct  8 06:16:49 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:16:49.219567) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  8 06:16:49 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:16:49.337713) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  8 06:16:49 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:16:49.337723) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  8 06:16:49 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:16:49.337727) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  8 06:16:49 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:16:49.337731) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  8 06:16:49 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:16:49.337735) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  8 06:16:49 np0005475493 nova_compute[262220]: 2025-10-08 10:16:49.491 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:16:49 np0005475493 nova_compute[262220]: 2025-10-08 10:16:49.492 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  8 06:16:49 np0005475493 nova_compute[262220]: 2025-10-08 10:16:49.493 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  8 06:16:49 np0005475493 nova_compute[262220]: 2025-10-08 10:16:49.550 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  8 06:16:49 np0005475493 nova_compute[262220]: 2025-10-08 10:16:49.551 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:16:49 np0005475493 nova_compute[262220]: 2025-10-08 10:16:49.551 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:16:49 np0005475493 nova_compute[262220]: 2025-10-08 10:16:49.552 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  8 06:16:49 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:16:49 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:16:49 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:16:49.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:16:49 np0005475493 nova_compute[262220]: 2025-10-08 10:16:49.886 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:16:50 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v967: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Oct  8 06:16:50 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:16:50 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:16:50 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:16:50.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:16:51 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:16:51 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:16:51 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:16:51.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:16:52 np0005475493 nova_compute[262220]: 2025-10-08 10:16:52.261 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759918597.2590716, ea469a2e-bf09-495c-9b5e-02ad38d32d40 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  8 06:16:52 np0005475493 nova_compute[262220]: 2025-10-08 10:16:52.262 2 INFO nova.compute.manager [-] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] VM Stopped (Lifecycle Event)#033[00m
Oct  8 06:16:52 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v968: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Oct  8 06:16:52 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-haproxy-nfs-cephfs-compute-0-cwhopp[96588]: [WARNING] 280/101652 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct  8 06:16:52 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-haproxy-nfs-cephfs-compute-0-cwhopp[96588]: [NOTICE] 280/101652 (4) : haproxy version is 2.3.17-d1c9119
Oct  8 06:16:52 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-haproxy-nfs-cephfs-compute-0-cwhopp[96588]: [NOTICE] 280/101652 (4) : path to executable is /usr/local/sbin/haproxy
Oct  8 06:16:52 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-haproxy-nfs-cephfs-compute-0-cwhopp[96588]: [ALERT] 280/101652 (4) : backend 'backend' has no server available!
Oct  8 06:16:52 np0005475493 nova_compute[262220]: 2025-10-08 10:16:52.420 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:16:52 np0005475493 nova_compute[262220]: 2025-10-08 10:16:52.600 2 DEBUG nova.compute.manager [None req-b33200b9-d89a-4310-9cb7-5ce4eec60b55 - - - - - -] [instance: ea469a2e-bf09-495c-9b5e-02ad38d32d40] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  8 06:16:52 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:16:52 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:16:52 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:16:52.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:16:52 np0005475493 podman[278484]: 2025-10-08 10:16:52.914949928 +0000 UTC m=+0.065495240 container health_status 1ece418ccdf19a8019e8be8e665c938dc3c333817a211973536e2416f095c311 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct  8 06:16:52 np0005475493 podman[278485]: 2025-10-08 10:16:52.93359782 +0000 UTC m=+0.082354765 container health_status 96c17e36c3e57b043a764fc4d64e20610b8683e4881cc1857643eca7aeb37784 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, 
org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Oct  8 06:16:53 np0005475493 nova_compute[262220]: 2025-10-08 10:16:53.056 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:16:53 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:16:53 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:16:53 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:16:53.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:16:54 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:16:54 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v969: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Oct  8 06:16:54 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:16:54 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:16:54 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:16:54.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:16:55 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:16:55 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:16:55 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:16:55.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:16:55 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:16:55] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Oct  8 06:16:55 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:16:55] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Oct  8 06:16:56 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v970: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Oct  8 06:16:56 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:16:56 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:16:56 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:16:56.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:16:57 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:16:57.169Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:16:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:16:57.416 163175 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  8 06:16:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:16:57.416 163175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  8 06:16:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:16:57.417 163175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  8 06:16:57 np0005475493 nova_compute[262220]: 2025-10-08 10:16:57.421 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:16:57 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:16:57 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:16:57 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:16:57.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:16:58 np0005475493 nova_compute[262220]: 2025-10-08 10:16:58.057 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:16:58 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v971: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Oct  8 06:16:58 np0005475493 systemd[1]: ceph-787292cc-8154-50c4-9e00-e9be3e817149@nfs.cephfs.2.0.compute-0.uynkmx.service: Scheduled restart job, restart counter is at 11.
Oct  8 06:16:58 np0005475493 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.uynkmx for 787292cc-8154-50c4-9e00-e9be3e817149.
Oct  8 06:16:58 np0005475493 systemd[1]: ceph-787292cc-8154-50c4-9e00-e9be3e817149@nfs.cephfs.2.0.compute-0.uynkmx.service: Consumed 2.252s CPU time.
Oct  8 06:16:58 np0005475493 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.uynkmx for 787292cc-8154-50c4-9e00-e9be3e817149...
Oct  8 06:16:58 np0005475493 podman[278576]: 2025-10-08 10:16:58.570844719 +0000 UTC m=+0.051484660 container create 90486abb955ec1d9472e9211269572dd99696faaed865d52f07cc20a187b4c4b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  8 06:16:58 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f71d56eda20e561c36aebfa14cd6c6b082450f31285702bdb269ddf373f4272/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Oct  8 06:16:58 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f71d56eda20e561c36aebfa14cd6c6b082450f31285702bdb269ddf373f4272/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Oct  8 06:16:58 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f71d56eda20e561c36aebfa14cd6c6b082450f31285702bdb269ddf373f4272/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 06:16:58 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f71d56eda20e561c36aebfa14cd6c6b082450f31285702bdb269ddf373f4272/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.uynkmx-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Oct  8 06:16:58 np0005475493 podman[278576]: 2025-10-08 10:16:58.54480998 +0000 UTC m=+0.025449951 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 06:16:58 np0005475493 podman[278576]: 2025-10-08 10:16:58.656180768 +0000 UTC m=+0.136820729 container init 90486abb955ec1d9472e9211269572dd99696faaed865d52f07cc20a187b4c4b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct  8 06:16:58 np0005475493 podman[278576]: 2025-10-08 10:16:58.661536061 +0000 UTC m=+0.142176002 container start 90486abb955ec1d9472e9211269572dd99696faaed865d52f07cc20a187b4c4b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct  8 06:16:58 np0005475493 bash[278576]: 90486abb955ec1d9472e9211269572dd99696faaed865d52f07cc20a187b4c4b
Oct  8 06:16:58 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:16:58 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Oct  8 06:16:58 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:16:58 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Oct  8 06:16:58 np0005475493 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.uynkmx for 787292cc-8154-50c4-9e00-e9be3e817149.
Oct  8 06:16:58 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:16:58 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Oct  8 06:16:58 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:16:58 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Oct  8 06:16:58 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:16:58 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Oct  8 06:16:58 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:16:58 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Oct  8 06:16:58 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:16:58 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Oct  8 06:16:58 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:16:58 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  8 06:16:58 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:16:58 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 06:16:58 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:16:58.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 06:16:59 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:16:59 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:16:59 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:16:59 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:16:59.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:17:00 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v972: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 85 B/s wr, 1 op/s
Oct  8 06:17:00 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:17:00 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:17:00 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:17:00.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:17:01 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:17:01 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:17:01 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:17:01.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:17:02 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v973: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 85 B/s wr, 1 op/s
Oct  8 06:17:02 np0005475493 nova_compute[262220]: 2025-10-08 10:17:02.423 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:17:02 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  8 06:17:02 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  8 06:17:02 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:17:02 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:17:02 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:17:02.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:17:03 np0005475493 nova_compute[262220]: 2025-10-08 10:17:03.060 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:17:03 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:17:03 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:17:03 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:17:03.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:17:04 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:17:04 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v974: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 170 B/s wr, 1 op/s
Oct  8 06:17:04 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:17:04 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 06:17:04 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:17:04.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 06:17:04 np0005475493 podman[278640]: 2025-10-08 10:17:04.902192777 +0000 UTC m=+0.060847341 container health_status 2ac58bf9d2c7421f24478941b0f23af12cc30298ac84467db16bf52ecb1157c3 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=iscsid, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct  8 06:17:04 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:17:04 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  8 06:17:04 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:17:04 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 06:17:04 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:17:04 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  8 06:17:05 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:17:05] "GET /metrics HTTP/1.1" 200 48461 "" "Prometheus/2.51.0"
Oct  8 06:17:05 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:17:05] "GET /metrics HTTP/1.1" 200 48461 "" "Prometheus/2.51.0"
Oct  8 06:17:05 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:17:05 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 06:17:05 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:17:05.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 06:17:06 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v975: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 170 B/s wr, 1 op/s
Oct  8 06:17:06 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:17:06 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:17:06 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:17:06.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:17:07 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:17:07.170Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct  8 06:17:07 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:17:07.170Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:17:07 np0005475493 nova_compute[262220]: 2025-10-08 10:17:07.423 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:17:07 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:17:07 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 06:17:07 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:17:07.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 06:17:08 np0005475493 nova_compute[262220]: 2025-10-08 10:17:08.063 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:17:08 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v976: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 170 B/s wr, 1 op/s
Oct  8 06:17:08 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:17:08 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:17:08 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:17:08.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:17:09 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:17:08 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  8 06:17:09 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:17:08 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  8 06:17:09 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:17:08 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 06:17:09 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:17:09 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  8 06:17:09 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:17:09 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:17:09 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 06:17:09 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:17:09.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 06:17:10 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v977: 353 pgs: 353 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 170 B/s wr, 2 op/s
Oct  8 06:17:10 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:17:10 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:17:10 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:17:10.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:17:11 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:17:11 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:17:11 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:17:11.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:17:12 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v978: 353 pgs: 353 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 85 B/s wr, 1 op/s
Oct  8 06:17:12 np0005475493 nova_compute[262220]: 2025-10-08 10:17:12.424 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:17:12 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:17:12 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 06:17:12 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:17:12.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 06:17:13 np0005475493 nova_compute[262220]: 2025-10-08 10:17:13.071 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:17:13 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:17:13 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:17:13 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:17:13.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:17:14 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:17:13 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  8 06:17:14 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:17:13 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  8 06:17:14 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:17:13 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 06:17:14 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:17:14 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  8 06:17:14 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:17:14 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v979: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Oct  8 06:17:14 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:17:14 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 06:17:14 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:17:14.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 06:17:15 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:17:15] "GET /metrics HTTP/1.1" 200 48461 "" "Prometheus/2.51.0"
Oct  8 06:17:15 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:17:15] "GET /metrics HTTP/1.1" 200 48461 "" "Prometheus/2.51.0"
Oct  8 06:17:15 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:17:15 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:17:15 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:17:15.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:17:16 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v980: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Oct  8 06:17:16 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:17:16 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:17:16 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:17:16.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:17:16 np0005475493 podman[278697]: 2025-10-08 10:17:16.929890409 +0000 UTC m=+0.092964386 container health_status 750e81e9592502f2fa35d047c8ae9bd75eb38ed415148db558454f5281397e7f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Oct  8 06:17:17 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:17:17.170Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:17:17 np0005475493 nova_compute[262220]: 2025-10-08 10:17:17.426 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:17:17 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:17:17 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:17:17 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:17:17.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:17:17 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  8 06:17:17 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  8 06:17:17 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 06:17:17 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 06:17:18 np0005475493 nova_compute[262220]: 2025-10-08 10:17:18.075 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:17:18 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 06:17:18 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 06:17:18 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 06:17:18 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 06:17:18 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v981: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Oct  8 06:17:18 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:17:18 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:17:18 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:17:18.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:17:19 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:17:18 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  8 06:17:19 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:17:18 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  8 06:17:19 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:17:18 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 06:17:19 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:17:19 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  8 06:17:19 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:17:19 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:17:19 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:17:19 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:17:19.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:17:20 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v982: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 33 op/s
Oct  8 06:17:20 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Oct  8 06:17:20 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/733252192' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  8 06:17:20 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Oct  8 06:17:20 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/733252192' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  8 06:17:20 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:17:20 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 06:17:20 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:17:20.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 06:17:21 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:17:21 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:17:21 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:17:21.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:17:22 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v983: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 33 op/s
Oct  8 06:17:22 np0005475493 nova_compute[262220]: 2025-10-08 10:17:22.428 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:17:22 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:17:22 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:17:22 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:17:22.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:17:23 np0005475493 nova_compute[262220]: 2025-10-08 10:17:23.078 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:17:23 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:17:23 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:17:23 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:17:23.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:17:23 np0005475493 podman[278730]: 2025-10-08 10:17:23.899737937 +0000 UTC m=+0.060734938 container health_status 1ece418ccdf19a8019e8be8e665c938dc3c333817a211973536e2416f095c311 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct  8 06:17:23 np0005475493 podman[278731]: 2025-10-08 10:17:23.900007795 +0000 UTC m=+0.055594872 container health_status 96c17e36c3e57b043a764fc4d64e20610b8683e4881cc1857643eca7aeb37784 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3)
Oct  8 06:17:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:17:23 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  8 06:17:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:17:23 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  8 06:17:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:17:23 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 06:17:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:17:24 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  8 06:17:24 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:17:24 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v984: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 102 op/s
Oct  8 06:17:24 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:17:24 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:17:24 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:17:24.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:17:25 np0005475493 ovn_controller[153187]: 2025-10-08T10:17:25Z|00057|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Oct  8 06:17:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:17:25] "GET /metrics HTTP/1.1" 200 48480 "" "Prometheus/2.51.0"
Oct  8 06:17:25 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:17:25] "GET /metrics HTTP/1.1" 200 48480 "" "Prometheus/2.51.0"
Oct  8 06:17:25 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:17:25 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:17:25 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:17:25.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:17:26 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v985: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Oct  8 06:17:26 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:17:26 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.003000098s ======
Oct  8 06:17:26 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:17:26.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000098s
Oct  8 06:17:27 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:17:27.172Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:17:27 np0005475493 nova_compute[262220]: 2025-10-08 10:17:27.429 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:17:27 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:17:27 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:17:27 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:17:27.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:17:28 np0005475493 nova_compute[262220]: 2025-10-08 10:17:28.079 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:17:28 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v986: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Oct  8 06:17:28 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:17:28 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:17:28 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:17:28.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:17:29 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:17:28 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  8 06:17:29 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:17:28 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  8 06:17:29 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:17:28 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 06:17:29 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:17:29 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  8 06:17:29 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:17:29 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:17:29 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:17:29 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:17:29.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:17:30 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v987: 353 pgs: 353 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 101 op/s
Oct  8 06:17:30 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:17:30.780 163175 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=11, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '2a:d6:ca', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'b2:8b:1e:40:84:a3'}, ipsec=False) old=SB_Global(nb_cfg=10) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  8 06:17:30 np0005475493 nova_compute[262220]: 2025-10-08 10:17:30.780 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:17:30 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:17:30.781 163175 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  8 06:17:30 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:17:30 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:17:30 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:17:30.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:17:31 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:17:31 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:17:31 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:17:31.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:17:32 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v988: 353 pgs: 353 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.2 KiB/s wr, 96 op/s
Oct  8 06:17:32 np0005475493 nova_compute[262220]: 2025-10-08 10:17:32.430 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:17:32 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  8 06:17:32 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  8 06:17:32 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:17:32 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:17:32 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:17:32.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:17:33 np0005475493 nova_compute[262220]: 2025-10-08 10:17:33.080 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:17:33 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:17:33 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:17:33 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:17:33.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:17:34 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:17:33 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  8 06:17:34 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:17:33 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  8 06:17:34 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:17:33 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 06:17:34 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:17:34 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  8 06:17:34 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:17:34 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v989: 353 pgs: 353 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.2 KiB/s wr, 96 op/s
Oct  8 06:17:34 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:17:34.783 163175 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=26869918-b723-425c-a2e1-0d697f3d0fec, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '11'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  8 06:17:34 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:17:34 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:17:34 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:17:34.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:17:35 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:17:35] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Oct  8 06:17:35 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:17:35] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Oct  8 06:17:35 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:17:35 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:17:35 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:17:35.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:17:35 np0005475493 podman[278806]: 2025-10-08 10:17:35.891859002 +0000 UTC m=+0.057635027 container health_status 2ac58bf9d2c7421f24478941b0f23af12cc30298ac84467db16bf52ecb1157c3 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Oct  8 06:17:36 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v990: 353 pgs: 353 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct  8 06:17:36 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:17:36 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:17:36 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:17:36.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:17:37 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:17:37.173Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct  8 06:17:37 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:17:37.173Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct  8 06:17:37 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:17:37.173Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct  8 06:17:37 np0005475493 nova_compute[262220]: 2025-10-08 10:17:37.432 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:17:37 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:17:37 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:17:37 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:17:37.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:17:38 np0005475493 nova_compute[262220]: 2025-10-08 10:17:38.082 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:17:38 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v991: 353 pgs: 353 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct  8 06:17:38 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:17:38 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:17:38 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:17:38.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:17:38 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  8 06:17:38 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  8 06:17:38 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct  8 06:17:38 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  8 06:17:38 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct  8 06:17:38 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:17:38 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct  8 06:17:38 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  8 06:17:39 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:17:38 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  8 06:17:39 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:17:39 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  8 06:17:39 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:17:39 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 06:17:39 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:17:39 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  8 06:17:39 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:17:39 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct  8 06:17:39 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  8 06:17:39 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct  8 06:17:39 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  8 06:17:39 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  8 06:17:39 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  8 06:17:39 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:17:39 np0005475493 podman[279008]: 2025-10-08 10:17:39.522766529 +0000 UTC m=+0.029519171 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 06:17:39 np0005475493 podman[279008]: 2025-10-08 10:17:39.650272937 +0000 UTC m=+0.157025559 container create de16f3c78ea15a6c7100d47f3bc65eb39d11d0aab58fe6e803d8b0c9676de68f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_elbakyan, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  8 06:17:39 np0005475493 systemd[1]: Started libpod-conmon-de16f3c78ea15a6c7100d47f3bc65eb39d11d0aab58fe6e803d8b0c9676de68f.scope.
Oct  8 06:17:39 np0005475493 systemd[1]: Started libcrun container.
Oct  8 06:17:39 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:17:39 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:17:39 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:17:39.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:17:39 np0005475493 podman[279008]: 2025-10-08 10:17:39.837955222 +0000 UTC m=+0.344707874 container init de16f3c78ea15a6c7100d47f3bc65eb39d11d0aab58fe6e803d8b0c9676de68f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_elbakyan, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct  8 06:17:39 np0005475493 podman[279008]: 2025-10-08 10:17:39.845149705 +0000 UTC m=+0.351902327 container start de16f3c78ea15a6c7100d47f3bc65eb39d11d0aab58fe6e803d8b0c9676de68f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_elbakyan, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  8 06:17:39 np0005475493 systemd[1]: libpod-de16f3c78ea15a6c7100d47f3bc65eb39d11d0aab58fe6e803d8b0c9676de68f.scope: Deactivated successfully.
Oct  8 06:17:39 np0005475493 cool_elbakyan[279024]: 167 167
Oct  8 06:17:39 np0005475493 conmon[279024]: conmon de16f3c78ea15a6c7100 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-de16f3c78ea15a6c7100d47f3bc65eb39d11d0aab58fe6e803d8b0c9676de68f.scope/container/memory.events
Oct  8 06:17:39 np0005475493 podman[279008]: 2025-10-08 10:17:39.908487175 +0000 UTC m=+0.415239797 container attach de16f3c78ea15a6c7100d47f3bc65eb39d11d0aab58fe6e803d8b0c9676de68f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_elbakyan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct  8 06:17:39 np0005475493 podman[279008]: 2025-10-08 10:17:39.911416819 +0000 UTC m=+0.418169441 container died de16f3c78ea15a6c7100d47f3bc65eb39d11d0aab58fe6e803d8b0c9676de68f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_elbakyan, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  8 06:17:40 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:17:40 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:17:40 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  8 06:17:40 np0005475493 systemd[1]: var-lib-containers-storage-overlay-cd4f7fcb379d97d6a5d82d342c3337ff457226c593bad8aaecaf76e5c30f4e5a-merged.mount: Deactivated successfully.
Oct  8 06:17:40 np0005475493 podman[279008]: 2025-10-08 10:17:40.186512411 +0000 UTC m=+0.693265043 container remove de16f3c78ea15a6c7100d47f3bc65eb39d11d0aab58fe6e803d8b0c9676de68f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_elbakyan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  8 06:17:40 np0005475493 systemd[1]: libpod-conmon-de16f3c78ea15a6c7100d47f3bc65eb39d11d0aab58fe6e803d8b0c9676de68f.scope: Deactivated successfully.
Oct  8 06:17:40 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v992: 353 pgs: 353 active+clean; 82 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.3 MiB/s wr, 42 op/s
Oct  8 06:17:40 np0005475493 podman[279050]: 2025-10-08 10:17:40.329342192 +0000 UTC m=+0.026099752 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 06:17:40 np0005475493 podman[279050]: 2025-10-08 10:17:40.427627368 +0000 UTC m=+0.124384898 container create ea89eea5734e94bdd3153cb0de33c2002b50ea27201ca6847a73ec7ff6f302a6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_ganguly, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct  8 06:17:40 np0005475493 systemd[1]: Started libpod-conmon-ea89eea5734e94bdd3153cb0de33c2002b50ea27201ca6847a73ec7ff6f302a6.scope.
Oct  8 06:17:40 np0005475493 systemd[1]: Started libcrun container.
Oct  8 06:17:40 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f51425f82106070af1b8a68285ec075e36fbc48eb2c537642f021f0a0b24981/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  8 06:17:40 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f51425f82106070af1b8a68285ec075e36fbc48eb2c537642f021f0a0b24981/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 06:17:40 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f51425f82106070af1b8a68285ec075e36fbc48eb2c537642f021f0a0b24981/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 06:17:40 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f51425f82106070af1b8a68285ec075e36fbc48eb2c537642f021f0a0b24981/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  8 06:17:40 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f51425f82106070af1b8a68285ec075e36fbc48eb2c537642f021f0a0b24981/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  8 06:17:40 np0005475493 podman[279050]: 2025-10-08 10:17:40.535488423 +0000 UTC m=+0.232245953 container init ea89eea5734e94bdd3153cb0de33c2002b50ea27201ca6847a73ec7ff6f302a6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_ganguly, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct  8 06:17:40 np0005475493 podman[279050]: 2025-10-08 10:17:40.543463129 +0000 UTC m=+0.240220659 container start ea89eea5734e94bdd3153cb0de33c2002b50ea27201ca6847a73ec7ff6f302a6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_ganguly, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Oct  8 06:17:40 np0005475493 podman[279050]: 2025-10-08 10:17:40.562568406 +0000 UTC m=+0.259325936 container attach ea89eea5734e94bdd3153cb0de33c2002b50ea27201ca6847a73ec7ff6f302a6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_ganguly, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  8 06:17:40 np0005475493 recursing_ganguly[279067]: --> passed data devices: 0 physical, 1 LVM
Oct  8 06:17:40 np0005475493 recursing_ganguly[279067]: --> All data devices are unavailable
Oct  8 06:17:40 np0005475493 systemd[1]: libpod-ea89eea5734e94bdd3153cb0de33c2002b50ea27201ca6847a73ec7ff6f302a6.scope: Deactivated successfully.
Oct  8 06:17:40 np0005475493 podman[279050]: 2025-10-08 10:17:40.894432376 +0000 UTC m=+0.591189906 container died ea89eea5734e94bdd3153cb0de33c2002b50ea27201ca6847a73ec7ff6f302a6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_ganguly, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Oct  8 06:17:40 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:17:40 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:17:40 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:17:40.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:17:40 np0005475493 systemd[1]: var-lib-containers-storage-overlay-5f51425f82106070af1b8a68285ec075e36fbc48eb2c537642f021f0a0b24981-merged.mount: Deactivated successfully.
Oct  8 06:17:41 np0005475493 podman[279050]: 2025-10-08 10:17:41.034669954 +0000 UTC m=+0.731427484 container remove ea89eea5734e94bdd3153cb0de33c2002b50ea27201ca6847a73ec7ff6f302a6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_ganguly, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  8 06:17:41 np0005475493 systemd[1]: libpod-conmon-ea89eea5734e94bdd3153cb0de33c2002b50ea27201ca6847a73ec7ff6f302a6.scope: Deactivated successfully.
Oct  8 06:17:41 np0005475493 podman[279185]: 2025-10-08 10:17:41.637242075 +0000 UTC m=+0.075872615 container create df97a4060bd16281427ab3f2b537ff7a2bcc6bedd4cc2fa8c86e04e0d0403e58 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_ritchie, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct  8 06:17:41 np0005475493 podman[279185]: 2025-10-08 10:17:41.586945825 +0000 UTC m=+0.025576405 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 06:17:41 np0005475493 systemd[1]: Started libpod-conmon-df97a4060bd16281427ab3f2b537ff7a2bcc6bedd4cc2fa8c86e04e0d0403e58.scope.
Oct  8 06:17:41 np0005475493 systemd[1]: Started libcrun container.
Oct  8 06:17:41 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:17:41 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:17:41 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:17:41.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:17:41 np0005475493 podman[279185]: 2025-10-08 10:17:41.853739349 +0000 UTC m=+0.292369919 container init df97a4060bd16281427ab3f2b537ff7a2bcc6bedd4cc2fa8c86e04e0d0403e58 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_ritchie, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  8 06:17:41 np0005475493 podman[279185]: 2025-10-08 10:17:41.867676588 +0000 UTC m=+0.306307128 container start df97a4060bd16281427ab3f2b537ff7a2bcc6bedd4cc2fa8c86e04e0d0403e58 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_ritchie, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  8 06:17:41 np0005475493 inspiring_ritchie[279203]: 167 167
Oct  8 06:17:41 np0005475493 systemd[1]: libpod-df97a4060bd16281427ab3f2b537ff7a2bcc6bedd4cc2fa8c86e04e0d0403e58.scope: Deactivated successfully.
Oct  8 06:17:41 np0005475493 conmon[279203]: conmon df97a4060bd16281427a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-df97a4060bd16281427ab3f2b537ff7a2bcc6bedd4cc2fa8c86e04e0d0403e58.scope/container/memory.events
Oct  8 06:17:41 np0005475493 podman[279185]: 2025-10-08 10:17:41.87922093 +0000 UTC m=+0.317851500 container attach df97a4060bd16281427ab3f2b537ff7a2bcc6bedd4cc2fa8c86e04e0d0403e58 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_ritchie, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Oct  8 06:17:41 np0005475493 podman[279185]: 2025-10-08 10:17:41.880566054 +0000 UTC m=+0.319196594 container died df97a4060bd16281427ab3f2b537ff7a2bcc6bedd4cc2fa8c86e04e0d0403e58 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_ritchie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Oct  8 06:17:42 np0005475493 systemd[1]: var-lib-containers-storage-overlay-f1c29ffbd1250596e9da410bdfb5815131dc55cdb2048a318527a6a22d8b5aa5-merged.mount: Deactivated successfully.
Oct  8 06:17:42 np0005475493 podman[279185]: 2025-10-08 10:17:42.191457768 +0000 UTC m=+0.630088308 container remove df97a4060bd16281427ab3f2b537ff7a2bcc6bedd4cc2fa8c86e04e0d0403e58 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_ritchie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Oct  8 06:17:42 np0005475493 systemd[1]: libpod-conmon-df97a4060bd16281427ab3f2b537ff7a2bcc6bedd4cc2fa8c86e04e0d0403e58.scope: Deactivated successfully.
Oct  8 06:17:42 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v993: 353 pgs: 353 active+clean; 82 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 8.5 KiB/s rd, 1.3 MiB/s wr, 15 op/s
Oct  8 06:17:42 np0005475493 podman[279227]: 2025-10-08 10:17:42.406482715 +0000 UTC m=+0.084532724 container create 01909e937007be4d214389753978e70ce1f1d29d668ff02aaa4b881a6641ec87 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_brahmagupta, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  8 06:17:42 np0005475493 nova_compute[262220]: 2025-10-08 10:17:42.433 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:17:42 np0005475493 podman[279227]: 2025-10-08 10:17:42.351172974 +0000 UTC m=+0.029223003 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 06:17:42 np0005475493 systemd[1]: Started libpod-conmon-01909e937007be4d214389753978e70ce1f1d29d668ff02aaa4b881a6641ec87.scope.
Oct  8 06:17:42 np0005475493 systemd[1]: Started libcrun container.
Oct  8 06:17:42 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef6f9993a854070d45b27220b013b82bc5fffa457855230ef44159692780eb43/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  8 06:17:42 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef6f9993a854070d45b27220b013b82bc5fffa457855230ef44159692780eb43/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 06:17:42 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef6f9993a854070d45b27220b013b82bc5fffa457855230ef44159692780eb43/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 06:17:42 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef6f9993a854070d45b27220b013b82bc5fffa457855230ef44159692780eb43/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  8 06:17:42 np0005475493 podman[279227]: 2025-10-08 10:17:42.554523674 +0000 UTC m=+0.232573703 container init 01909e937007be4d214389753978e70ce1f1d29d668ff02aaa4b881a6641ec87 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_brahmagupta, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Oct  8 06:17:42 np0005475493 podman[279227]: 2025-10-08 10:17:42.561194229 +0000 UTC m=+0.239244238 container start 01909e937007be4d214389753978e70ce1f1d29d668ff02aaa4b881a6641ec87 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_brahmagupta, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct  8 06:17:42 np0005475493 podman[279227]: 2025-10-08 10:17:42.61740651 +0000 UTC m=+0.295456549 container attach 01909e937007be4d214389753978e70ce1f1d29d668ff02aaa4b881a6641ec87 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_brahmagupta, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct  8 06:17:42 np0005475493 sharp_brahmagupta[279244]: {
Oct  8 06:17:42 np0005475493 sharp_brahmagupta[279244]:    "1": [
Oct  8 06:17:42 np0005475493 sharp_brahmagupta[279244]:        {
Oct  8 06:17:42 np0005475493 sharp_brahmagupta[279244]:            "devices": [
Oct  8 06:17:42 np0005475493 sharp_brahmagupta[279244]:                "/dev/loop3"
Oct  8 06:17:42 np0005475493 sharp_brahmagupta[279244]:            ],
Oct  8 06:17:42 np0005475493 sharp_brahmagupta[279244]:            "lv_name": "ceph_lv0",
Oct  8 06:17:42 np0005475493 sharp_brahmagupta[279244]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  8 06:17:42 np0005475493 sharp_brahmagupta[279244]:            "lv_size": "21470642176",
Oct  8 06:17:42 np0005475493 sharp_brahmagupta[279244]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=787292cc-8154-50c4-9e00-e9be3e817149,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=85fe3e7b-5e0f-4a19-934c-310215b2e933,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct  8 06:17:42 np0005475493 sharp_brahmagupta[279244]:            "lv_uuid": "znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj",
Oct  8 06:17:42 np0005475493 sharp_brahmagupta[279244]:            "name": "ceph_lv0",
Oct  8 06:17:42 np0005475493 sharp_brahmagupta[279244]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  8 06:17:42 np0005475493 sharp_brahmagupta[279244]:            "tags": {
Oct  8 06:17:42 np0005475493 sharp_brahmagupta[279244]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  8 06:17:42 np0005475493 sharp_brahmagupta[279244]:                "ceph.block_uuid": "znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj",
Oct  8 06:17:42 np0005475493 sharp_brahmagupta[279244]:                "ceph.cephx_lockbox_secret": "",
Oct  8 06:17:42 np0005475493 sharp_brahmagupta[279244]:                "ceph.cluster_fsid": "787292cc-8154-50c4-9e00-e9be3e817149",
Oct  8 06:17:42 np0005475493 sharp_brahmagupta[279244]:                "ceph.cluster_name": "ceph",
Oct  8 06:17:42 np0005475493 sharp_brahmagupta[279244]:                "ceph.crush_device_class": "",
Oct  8 06:17:42 np0005475493 sharp_brahmagupta[279244]:                "ceph.encrypted": "0",
Oct  8 06:17:42 np0005475493 sharp_brahmagupta[279244]:                "ceph.osd_fsid": "85fe3e7b-5e0f-4a19-934c-310215b2e933",
Oct  8 06:17:42 np0005475493 sharp_brahmagupta[279244]:                "ceph.osd_id": "1",
Oct  8 06:17:42 np0005475493 sharp_brahmagupta[279244]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  8 06:17:42 np0005475493 sharp_brahmagupta[279244]:                "ceph.type": "block",
Oct  8 06:17:42 np0005475493 sharp_brahmagupta[279244]:                "ceph.vdo": "0",
Oct  8 06:17:42 np0005475493 sharp_brahmagupta[279244]:                "ceph.with_tpm": "0"
Oct  8 06:17:42 np0005475493 sharp_brahmagupta[279244]:            },
Oct  8 06:17:42 np0005475493 sharp_brahmagupta[279244]:            "type": "block",
Oct  8 06:17:42 np0005475493 sharp_brahmagupta[279244]:            "vg_name": "ceph_vg0"
Oct  8 06:17:42 np0005475493 sharp_brahmagupta[279244]:        }
Oct  8 06:17:42 np0005475493 sharp_brahmagupta[279244]:    ]
Oct  8 06:17:42 np0005475493 sharp_brahmagupta[279244]: }
Oct  8 06:17:42 np0005475493 systemd[1]: libpod-01909e937007be4d214389753978e70ce1f1d29d668ff02aaa4b881a6641ec87.scope: Deactivated successfully.
Oct  8 06:17:42 np0005475493 podman[279253]: 2025-10-08 10:17:42.904357443 +0000 UTC m=+0.026519074 container died 01909e937007be4d214389753978e70ce1f1d29d668ff02aaa4b881a6641ec87 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_brahmagupta, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  8 06:17:42 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:17:42 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:17:42 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:17:42.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:17:42 np0005475493 systemd[1]: var-lib-containers-storage-overlay-ef6f9993a854070d45b27220b013b82bc5fffa457855230ef44159692780eb43-merged.mount: Deactivated successfully.
Oct  8 06:17:43 np0005475493 podman[279253]: 2025-10-08 10:17:43.06442777 +0000 UTC m=+0.186589371 container remove 01909e937007be4d214389753978e70ce1f1d29d668ff02aaa4b881a6641ec87 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_brahmagupta, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Oct  8 06:17:43 np0005475493 systemd[1]: libpod-conmon-01909e937007be4d214389753978e70ce1f1d29d668ff02aaa4b881a6641ec87.scope: Deactivated successfully.
Oct  8 06:17:43 np0005475493 nova_compute[262220]: 2025-10-08 10:17:43.084 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:17:43 np0005475493 podman[279360]: 2025-10-08 10:17:43.673722068 +0000 UTC m=+0.057391329 container create bd4bd911b8fc43caf636c9704021d70dce67511b1bc7f320adf16afc923ccc76 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_pasteur, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  8 06:17:43 np0005475493 systemd[1]: Started libpod-conmon-bd4bd911b8fc43caf636c9704021d70dce67511b1bc7f320adf16afc923ccc76.scope.
Oct  8 06:17:43 np0005475493 podman[279360]: 2025-10-08 10:17:43.641327505 +0000 UTC m=+0.024996796 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 06:17:43 np0005475493 systemd[1]: Started libcrun container.
Oct  8 06:17:43 np0005475493 podman[279360]: 2025-10-08 10:17:43.780216978 +0000 UTC m=+0.163886259 container init bd4bd911b8fc43caf636c9704021d70dce67511b1bc7f320adf16afc923ccc76 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_pasteur, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  8 06:17:43 np0005475493 podman[279360]: 2025-10-08 10:17:43.792232236 +0000 UTC m=+0.175901497 container start bd4bd911b8fc43caf636c9704021d70dce67511b1bc7f320adf16afc923ccc76 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_pasteur, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct  8 06:17:43 np0005475493 clever_pasteur[279376]: 167 167
Oct  8 06:17:43 np0005475493 systemd[1]: libpod-bd4bd911b8fc43caf636c9704021d70dce67511b1bc7f320adf16afc923ccc76.scope: Deactivated successfully.
Oct  8 06:17:43 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:17:43 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:17:43 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:17:43.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:17:43 np0005475493 podman[279360]: 2025-10-08 10:17:43.821396446 +0000 UTC m=+0.205065747 container attach bd4bd911b8fc43caf636c9704021d70dce67511b1bc7f320adf16afc923ccc76 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_pasteur, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  8 06:17:43 np0005475493 podman[279360]: 2025-10-08 10:17:43.822587134 +0000 UTC m=+0.206256395 container died bd4bd911b8fc43caf636c9704021d70dce67511b1bc7f320adf16afc923ccc76 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_pasteur, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True)
Oct  8 06:17:43 np0005475493 systemd[1]: var-lib-containers-storage-overlay-88aacc33048058288e60e6c19667588cbc1d125a1384724b7e38fc639b12fb67-merged.mount: Deactivated successfully.
Oct  8 06:17:43 np0005475493 podman[279360]: 2025-10-08 10:17:43.945120421 +0000 UTC m=+0.328789692 container remove bd4bd911b8fc43caf636c9704021d70dce67511b1bc7f320adf16afc923ccc76 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_pasteur, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid)
Oct  8 06:17:43 np0005475493 systemd[1]: libpod-conmon-bd4bd911b8fc43caf636c9704021d70dce67511b1bc7f320adf16afc923ccc76.scope: Deactivated successfully.
Oct  8 06:17:44 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:17:43 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  8 06:17:44 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:17:44 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  8 06:17:44 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:17:44 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 06:17:44 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:17:44 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  8 06:17:44 np0005475493 podman[279399]: 2025-10-08 10:17:44.122224746 +0000 UTC m=+0.045003251 container create da398a7c7c6f37ac8dca9d8f85e2151edc4257bc43a24f32948cb9e3142f2f51 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_cray, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Oct  8 06:17:44 np0005475493 systemd[1]: Started libpod-conmon-da398a7c7c6f37ac8dca9d8f85e2151edc4257bc43a24f32948cb9e3142f2f51.scope.
Oct  8 06:17:44 np0005475493 podman[279399]: 2025-10-08 10:17:44.101792668 +0000 UTC m=+0.024571193 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 06:17:44 np0005475493 systemd[1]: Started libcrun container.
Oct  8 06:17:44 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ccf772530f6e6caa80c59be9d1339e8f1b905444c997dc7750fa6506603200d3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  8 06:17:44 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ccf772530f6e6caa80c59be9d1339e8f1b905444c997dc7750fa6506603200d3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 06:17:44 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ccf772530f6e6caa80c59be9d1339e8f1b905444c997dc7750fa6506603200d3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 06:17:44 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ccf772530f6e6caa80c59be9d1339e8f1b905444c997dc7750fa6506603200d3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  8 06:17:44 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:17:44 np0005475493 podman[279399]: 2025-10-08 10:17:44.22135606 +0000 UTC m=+0.144134585 container init da398a7c7c6f37ac8dca9d8f85e2151edc4257bc43a24f32948cb9e3142f2f51 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_cray, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct  8 06:17:44 np0005475493 podman[279399]: 2025-10-08 10:17:44.228258492 +0000 UTC m=+0.151036997 container start da398a7c7c6f37ac8dca9d8f85e2151edc4257bc43a24f32948cb9e3142f2f51 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_cray, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct  8 06:17:44 np0005475493 podman[279399]: 2025-10-08 10:17:44.232019443 +0000 UTC m=+0.154797958 container attach da398a7c7c6f37ac8dca9d8f85e2151edc4257bc43a24f32948cb9e3142f2f51 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_cray, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct  8 06:17:44 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v994: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 33 op/s
Oct  8 06:17:44 np0005475493 nova_compute[262220]: 2025-10-08 10:17:44.886 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:17:44 np0005475493 lvm[279490]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct  8 06:17:44 np0005475493 lvm[279490]: VG ceph_vg0 finished
Oct  8 06:17:44 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:17:44 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:17:44 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:17:44.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:17:44 np0005475493 boring_cray[279416]: {}
Oct  8 06:17:44 np0005475493 systemd[1]: libpod-da398a7c7c6f37ac8dca9d8f85e2151edc4257bc43a24f32948cb9e3142f2f51.scope: Deactivated successfully.
Oct  8 06:17:44 np0005475493 podman[279399]: 2025-10-08 10:17:44.969181641 +0000 UTC m=+0.891960146 container died da398a7c7c6f37ac8dca9d8f85e2151edc4257bc43a24f32948cb9e3142f2f51 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_cray, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct  8 06:17:44 np0005475493 systemd[1]: libpod-da398a7c7c6f37ac8dca9d8f85e2151edc4257bc43a24f32948cb9e3142f2f51.scope: Consumed 1.201s CPU time.
Oct  8 06:17:45 np0005475493 systemd[1]: var-lib-containers-storage-overlay-ccf772530f6e6caa80c59be9d1339e8f1b905444c997dc7750fa6506603200d3-merged.mount: Deactivated successfully.
Oct  8 06:17:45 np0005475493 podman[279399]: 2025-10-08 10:17:45.021613219 +0000 UTC m=+0.944391714 container remove da398a7c7c6f37ac8dca9d8f85e2151edc4257bc43a24f32948cb9e3142f2f51 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_cray, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct  8 06:17:45 np0005475493 systemd[1]: libpod-conmon-da398a7c7c6f37ac8dca9d8f85e2151edc4257bc43a24f32948cb9e3142f2f51.scope: Deactivated successfully.
Oct  8 06:17:45 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  8 06:17:45 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:17:45 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  8 06:17:45 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:17:45 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:17:45 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:17:45 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:17:45] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Oct  8 06:17:45 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:17:45] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Oct  8 06:17:45 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:17:45 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:17:45 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:17:45.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:17:45 np0005475493 nova_compute[262220]: 2025-10-08 10:17:45.888 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:17:46 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v995: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Oct  8 06:17:46 np0005475493 nova_compute[262220]: 2025-10-08 10:17:46.882 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:17:46 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:17:46 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:17:46 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:17:46.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:17:47 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:17:47.174Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:17:47 np0005475493 nova_compute[262220]: 2025-10-08 10:17:47.436 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:17:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] Optimize plan auto_2025-10-08_10:17:47
Oct  8 06:17:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  8 06:17:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] do_upmap
Oct  8 06:17:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] pools ['default.rgw.meta', 'vms', 'backups', '.nfs', '.mgr', 'volumes', 'default.rgw.log', 'images', 'cephfs.cephfs.data', 'default.rgw.control', '.rgw.root', 'cephfs.cephfs.meta']
Oct  8 06:17:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] prepared 0/10 upmap changes
Oct  8 06:17:47 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:17:47 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:17:47 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:17:47.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:17:47 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  8 06:17:47 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  8 06:17:47 np0005475493 nova_compute[262220]: 2025-10-08 10:17:47.886 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:17:47 np0005475493 nova_compute[262220]: 2025-10-08 10:17:47.886 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:17:47 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 06:17:47 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 06:17:47 np0005475493 podman[279535]: 2025-10-08 10:17:47.92690404 +0000 UTC m=+0.082494798 container health_status 750e81e9592502f2fa35d047c8ae9bd75eb38ed415148db558454f5281397e7f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_controller)
Oct  8 06:17:47 np0005475493 nova_compute[262220]: 2025-10-08 10:17:47.988 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  8 06:17:47 np0005475493 nova_compute[262220]: 2025-10-08 10:17:47.988 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  8 06:17:47 np0005475493 nova_compute[262220]: 2025-10-08 10:17:47.989 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  8 06:17:47 np0005475493 nova_compute[262220]: 2025-10-08 10:17:47.990 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  8 06:17:47 np0005475493 nova_compute[262220]: 2025-10-08 10:17:47.990 2 DEBUG oslo_concurrency.processutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  8 06:17:48 np0005475493 nova_compute[262220]: 2025-10-08 10:17:48.084 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:17:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] _maybe_adjust
Oct  8 06:17:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:17:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  8 06:17:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:17:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00034841348814872695 of space, bias 1.0, pg target 0.10452404644461809 quantized to 32 (current 32)
Oct  8 06:17:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:17:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 06:17:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:17:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 06:17:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:17:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Oct  8 06:17:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:17:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  8 06:17:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:17:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 06:17:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:17:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct  8 06:17:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:17:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  8 06:17:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:17:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 06:17:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:17:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  8 06:17:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:17:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  8 06:17:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  8 06:17:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  8 06:17:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  8 06:17:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  8 06:17:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  8 06:17:48 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 06:17:48 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 06:17:48 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 06:17:48 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 06:17:48 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v996: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Oct  8 06:17:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  8 06:17:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  8 06:17:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  8 06:17:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  8 06:17:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  8 06:17:48 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct  8 06:17:48 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/705253964' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  8 06:17:48 np0005475493 nova_compute[262220]: 2025-10-08 10:17:48.482 2 DEBUG oslo_concurrency.processutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  8 06:17:48 np0005475493 nova_compute[262220]: 2025-10-08 10:17:48.692 2 WARNING nova.virt.libvirt.driver [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  8 06:17:48 np0005475493 nova_compute[262220]: 2025-10-08 10:17:48.693 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4525MB free_disk=59.96738052368164GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  8 06:17:48 np0005475493 nova_compute[262220]: 2025-10-08 10:17:48.694 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  8 06:17:48 np0005475493 nova_compute[262220]: 2025-10-08 10:17:48.694 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  8 06:17:48 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:17:48 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.002000066s ======
Oct  8 06:17:48 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:17:48.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000066s
Oct  8 06:17:49 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:17:48 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  8 06:17:49 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:17:48 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  8 06:17:49 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:17:48 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 06:17:49 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:17:49 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  8 06:17:49 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:17:49 np0005475493 nova_compute[262220]: 2025-10-08 10:17:49.564 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  8 06:17:49 np0005475493 nova_compute[262220]: 2025-10-08 10:17:49.565 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  8 06:17:49 np0005475493 nova_compute[262220]: 2025-10-08 10:17:49.584 2 DEBUG oslo_concurrency.processutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  8 06:17:49 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:17:49 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:17:49 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:17:49.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:17:49 np0005475493 ceph-mgr[73869]: [devicehealth INFO root] Check health
Oct  8 06:17:50 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct  8 06:17:50 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/494990636' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  8 06:17:50 np0005475493 nova_compute[262220]: 2025-10-08 10:17:50.041 2 DEBUG oslo_concurrency.processutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  8 06:17:50 np0005475493 nova_compute[262220]: 2025-10-08 10:17:50.046 2 DEBUG nova.compute.provider_tree [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Inventory has not changed in ProviderTree for provider: 62e4b021-d3ae-43f9-883d-805e2c7d21a2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  8 06:17:50 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v997: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 102 op/s
Oct  8 06:17:50 np0005475493 nova_compute[262220]: 2025-10-08 10:17:50.298 2 DEBUG nova.scheduler.client.report [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Inventory has not changed for provider 62e4b021-d3ae-43f9-883d-805e2c7d21a2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  8 06:17:50 np0005475493 nova_compute[262220]: 2025-10-08 10:17:50.299 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  8 06:17:50 np0005475493 nova_compute[262220]: 2025-10-08 10:17:50.300 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.605s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  8 06:17:50 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:17:50 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:17:50 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:17:50.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:17:51 np0005475493 nova_compute[262220]: 2025-10-08 10:17:51.300 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:17:51 np0005475493 nova_compute[262220]: 2025-10-08 10:17:51.300 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  8 06:17:51 np0005475493 nova_compute[262220]: 2025-10-08 10:17:51.301 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  8 06:17:51 np0005475493 nova_compute[262220]: 2025-10-08 10:17:51.348 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  8 06:17:51 np0005475493 nova_compute[262220]: 2025-10-08 10:17:51.349 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:17:51 np0005475493 nova_compute[262220]: 2025-10-08 10:17:51.349 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:17:51 np0005475493 nova_compute[262220]: 2025-10-08 10:17:51.349 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  8 06:17:51 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:17:51 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:17:51 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:17:51.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:17:51 np0005475493 nova_compute[262220]: 2025-10-08 10:17:51.887 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:17:52 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v998: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 485 KiB/s wr, 87 op/s
Oct  8 06:17:52 np0005475493 nova_compute[262220]: 2025-10-08 10:17:52.437 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:17:52 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:17:52 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:17:52 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:17:52.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:17:53 np0005475493 nova_compute[262220]: 2025-10-08 10:17:53.086 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:17:53 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:17:53 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:17:53 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:17:53.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:17:54 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:17:53 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  8 06:17:54 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:17:53 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  8 06:17:54 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:17:53 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 06:17:54 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:17:54 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  8 06:17:54 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:17:54 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v999: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 486 KiB/s wr, 112 op/s
Oct  8 06:17:54 np0005475493 podman[279636]: 2025-10-08 10:17:54.897554851 +0000 UTC m=+0.056199482 container health_status 1ece418ccdf19a8019e8be8e665c938dc3c333817a211973536e2416f095c311 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  8 06:17:54 np0005475493 podman[279637]: 2025-10-08 10:17:54.899475653 +0000 UTC m=+0.056506811 container health_status 96c17e36c3e57b043a764fc4d64e20610b8683e4881cc1857643eca7aeb37784 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct  8 06:17:54 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:17:54 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:17:54 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:17:54.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:17:55 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:17:55] "GET /metrics HTTP/1.1" 200 48478 "" "Prometheus/2.51.0"
Oct  8 06:17:55 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:17:55] "GET /metrics HTTP/1.1" 200 48478 "" "Prometheus/2.51.0"
Oct  8 06:17:55 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:17:55 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:17:55 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:17:55.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:17:56 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1000: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 94 op/s
Oct  8 06:17:56 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:17:56 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:17:56 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:17:56.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:17:57 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:17:57.175Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:17:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:17:57.416 163175 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  8 06:17:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:17:57.417 163175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  8 06:17:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:17:57.417 163175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  8 06:17:57 np0005475493 nova_compute[262220]: 2025-10-08 10:17:57.439 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:17:57 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:17:57 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 06:17:57 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:17:57.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 06:17:58 np0005475493 nova_compute[262220]: 2025-10-08 10:17:58.088 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:17:58 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1001: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 94 op/s
Oct  8 06:17:58 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:17:58 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:17:58 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:17:58.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:17:59 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:17:58 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  8 06:17:59 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:17:58 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  8 06:17:59 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:17:58 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 06:17:59 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:17:59 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  8 06:17:59 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:17:59 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:17:59 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:17:59 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:17:59.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:18:00 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1002: 353 pgs: 353 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 96 op/s
Oct  8 06:18:00 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:18:00 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:18:00 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:18:00.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:18:01 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:18:01 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:18:01 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:18:01.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:18:02 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1003: 353 pgs: 353 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct  8 06:18:02 np0005475493 nova_compute[262220]: 2025-10-08 10:18:02.441 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  8 06:18:02 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  8 06:18:02 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  8 06:18:02 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:18:02 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:18:02 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:18:02.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:18:03 np0005475493 nova_compute[262220]: 2025-10-08 10:18:03.091 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  8 06:18:03 np0005475493 ceph-mon[73572]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #63. Immutable memtables: 0.
Oct  8 06:18:03 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:18:03.446583) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  8 06:18:03 np0005475493 ceph-mon[73572]: rocksdb: [db/flush_job.cc:856] [default] [JOB 33] Flushing memtable with next log file: 63
Oct  8 06:18:03 np0005475493 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759918683446662, "job": 33, "event": "flush_started", "num_memtables": 1, "num_entries": 1164, "num_deletes": 501, "total_data_size": 1412293, "memory_usage": 1445744, "flush_reason": "Manual Compaction"}
Oct  8 06:18:03 np0005475493 ceph-mon[73572]: rocksdb: [db/flush_job.cc:885] [default] [JOB 33] Level-0 flush table #64: started
Oct  8 06:18:03 np0005475493 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759918683516808, "cf_name": "default", "job": 33, "event": "table_file_creation", "file_number": 64, "file_size": 1006345, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 28374, "largest_seqno": 29537, "table_properties": {"data_size": 1001883, "index_size": 1538, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1861, "raw_key_size": 14512, "raw_average_key_size": 19, "raw_value_size": 990453, "raw_average_value_size": 1336, "num_data_blocks": 67, "num_entries": 741, "num_filter_entries": 741, "num_deletions": 501, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759918609, "oldest_key_time": 1759918609, "file_creation_time": 1759918683, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fe81d9b-468a-4413-adf1-4e4bd83383d4", "db_session_id": "KN4HYS7VUCE6V85JIQOU", "orig_file_number": 64, "seqno_to_time_mapping": "N/A"}}
Oct  8 06:18:03 np0005475493 ceph-mon[73572]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 33] Flush lasted 70275 microseconds, and 3709 cpu microseconds.
Oct  8 06:18:03 np0005475493 ceph-mon[73572]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  8 06:18:03 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:18:03.516863) [db/flush_job.cc:967] [default] [JOB 33] Level-0 flush table #64: 1006345 bytes OK
Oct  8 06:18:03 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:18:03.516891) [db/memtable_list.cc:519] [default] Level-0 commit table #64 started
Oct  8 06:18:03 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:18:03.752006) [db/memtable_list.cc:722] [default] Level-0 commit table #64: memtable #1 done
Oct  8 06:18:03 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:18:03.752098) EVENT_LOG_v1 {"time_micros": 1759918683752089, "job": 33, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  8 06:18:03 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:18:03.752122) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  8 06:18:03 np0005475493 ceph-mon[73572]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 33] Try to delete WAL files size 1405960, prev total WAL file size 1405960, number of live WAL files 2.
Oct  8 06:18:03 np0005475493 ceph-mon[73572]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000060.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  8 06:18:03 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:18:03.753767) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032323539' seq:72057594037927935, type:22 .. '7061786F730032353131' seq:0, type:0; will stop at (end)
Oct  8 06:18:03 np0005475493 ceph-mon[73572]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 34] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  8 06:18:03 np0005475493 ceph-mon[73572]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 33 Base level 0, inputs: [64(982KB)], [62(16MB)]
Oct  8 06:18:03 np0005475493 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759918683753885, "job": 34, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [64], "files_L6": [62], "score": -1, "input_data_size": 17870848, "oldest_snapshot_seqno": -1}
Oct  8 06:18:03 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:18:03 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:18:03 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:18:03.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:18:03 np0005475493 ceph-mon[73572]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 34] Generated table #65: 5756 keys, 12129461 bytes, temperature: kUnknown
Oct  8 06:18:03 np0005475493 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759918683947878, "cf_name": "default", "job": 34, "event": "table_file_creation", "file_number": 65, "file_size": 12129461, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12093108, "index_size": 20883, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14405, "raw_key_size": 148954, "raw_average_key_size": 25, "raw_value_size": 11991340, "raw_average_value_size": 2083, "num_data_blocks": 834, "num_entries": 5756, "num_filter_entries": 5756, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759916581, "oldest_key_time": 0, "file_creation_time": 1759918683, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fe81d9b-468a-4413-adf1-4e4bd83383d4", "db_session_id": "KN4HYS7VUCE6V85JIQOU", "orig_file_number": 65, "seqno_to_time_mapping": "N/A"}}
Oct  8 06:18:03 np0005475493 ceph-mon[73572]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  8 06:18:04 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:18:03 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  8 06:18:04 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:18:03 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  8 06:18:04 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:18:03 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 06:18:04 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:18:04 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  8 06:18:04 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:18:03.948155) [db/compaction/compaction_job.cc:1663] [default] [JOB 34] Compacted 1@0 + 1@6 files to L6 => 12129461 bytes
Oct  8 06:18:04 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:18:04.100612) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 92.1 rd, 62.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.0, 16.1 +0.0 blob) out(11.6 +0.0 blob), read-write-amplify(29.8) write-amplify(12.1) OK, records in: 6749, records dropped: 993 output_compression: NoCompression
Oct  8 06:18:04 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:18:04.100671) EVENT_LOG_v1 {"time_micros": 1759918684100650, "job": 34, "event": "compaction_finished", "compaction_time_micros": 194062, "compaction_time_cpu_micros": 50409, "output_level": 6, "num_output_files": 1, "total_output_size": 12129461, "num_input_records": 6749, "num_output_records": 5756, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  8 06:18:04 np0005475493 ceph-mon[73572]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000064.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  8 06:18:04 np0005475493 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759918684101337, "job": 34, "event": "table_file_deletion", "file_number": 64}
Oct  8 06:18:04 np0005475493 ceph-mon[73572]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000062.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  8 06:18:04 np0005475493 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759918684106218, "job": 34, "event": "table_file_deletion", "file_number": 62}
Oct  8 06:18:04 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:18:03.753529) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  8 06:18:04 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:18:04.106334) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  8 06:18:04 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:18:04.106340) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  8 06:18:04 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:18:04.106342) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  8 06:18:04 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:18:04.106343) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  8 06:18:04 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:18:04.106345) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  8 06:18:04 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:18:04 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1004: 353 pgs: 353 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct  8 06:18:04 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:18:04 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 06:18:04 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:18:04.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 06:18:05 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:18:05] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Oct  8 06:18:05 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:18:05] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Oct  8 06:18:05 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:18:05 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:18:05 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:18:05.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:18:06 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1005: 353 pgs: 353 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 170 B/s wr, 2 op/s
Oct  8 06:18:06 np0005475493 podman[279685]: 2025-10-08 10:18:06.920088905 +0000 UTC m=+0.068837359 container health_status 2ac58bf9d2c7421f24478941b0f23af12cc30298ac84467db16bf52ecb1157c3 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct  8 06:18:06 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:18:06 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:18:06 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:18:06.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:18:07 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:18:07.176Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:18:07 np0005475493 nova_compute[262220]: 2025-10-08 10:18:07.443 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  8 06:18:07 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:18:07 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:18:07 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:18:07.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:18:08 np0005475493 nova_compute[262220]: 2025-10-08 10:18:08.127 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  8 06:18:08 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1006: 353 pgs: 353 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 170 B/s wr, 2 op/s
Oct  8 06:18:08 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:18:08 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:18:08 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:18:08.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:18:09 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:18:08 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  8 06:18:09 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:18:08 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  8 06:18:09 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:18:08 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 06:18:09 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:18:09 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  8 06:18:09 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:18:09 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[103738]: logger=plugins.update.checker t=2025-10-08T10:18:09.566221338Z level=info msg="Update check succeeded" duration=53.637388ms
Oct  8 06:18:09 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[103738]: logger=grafana.update.checker t=2025-10-08T10:18:09.629643791Z level=info msg="Update check succeeded" duration=117.01454ms
Oct  8 06:18:09 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[103738]: logger=cleanup t=2025-10-08T10:18:09.685566033Z level=info msg="Completed cleanup jobs" duration=250.462248ms
Oct  8 06:18:09 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:18:09 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:18:09 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:18:09.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:18:10 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1007: 353 pgs: 353 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 170 B/s wr, 2 op/s
Oct  8 06:18:10 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:18:10 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:18:10 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:18:10.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:18:11 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:18:11 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:18:11 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:18:11.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:18:12 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1008: 353 pgs: 353 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  8 06:18:12 np0005475493 nova_compute[262220]: 2025-10-08 10:18:12.444 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  8 06:18:12 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:18:12 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:18:12 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:18:12.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:18:13 np0005475493 nova_compute[262220]: 2025-10-08 10:18:13.172 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  8 06:18:13 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:18:13 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:18:13 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:18:13.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:18:14 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:18:13 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  8 06:18:14 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:18:13 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  8 06:18:14 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:18:13 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 06:18:14 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:18:14 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  8 06:18:14 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:18:14 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1009: 353 pgs: 353 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  8 06:18:14 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:18:14 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:18:14 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:18:14.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:18:15 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:18:15] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Oct  8 06:18:15 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:18:15] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Oct  8 06:18:15 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:18:15 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:18:15 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:18:15.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:18:16 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1010: 353 pgs: 353 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  8 06:18:16 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:18:16 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 06:18:16 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:18:16.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 06:18:17 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:18:17.176Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:18:17 np0005475493 nova_compute[262220]: 2025-10-08 10:18:17.445 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  8 06:18:17 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  8 06:18:17 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  8 06:18:17 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:18:17 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:18:17 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:18:17.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:18:17 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 06:18:17 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 06:18:18 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 06:18:18 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 06:18:18 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 06:18:18 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 06:18:18 np0005475493 nova_compute[262220]: 2025-10-08 10:18:18.175 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:18:18 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1011: 353 pgs: 353 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  8 06:18:18 np0005475493 podman[279742]: 2025-10-08 10:18:18.925073327 +0000 UTC m=+0.084985576 container health_status 750e81e9592502f2fa35d047c8ae9bd75eb38ed415148db558454f5281397e7f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct  8 06:18:18 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:18:18 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:18:18 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:18:18.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:18:19 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:18:18 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  8 06:18:19 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:18:18 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  8 06:18:19 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:18:18 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 06:18:19 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:18:19 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  8 06:18:19 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:18:19 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:18:19 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:18:19 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:18:19.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:18:20 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1012: 353 pgs: 353 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  8 06:18:20 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:18:20 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:18:20 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:18:20.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:18:21 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:18:21 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:18:21 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:18:21.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:18:22 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1013: 353 pgs: 353 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  8 06:18:22 np0005475493 nova_compute[262220]: 2025-10-08 10:18:22.447 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:18:22 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:18:22 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:18:22 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:18:22.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:18:23 np0005475493 nova_compute[262220]: 2025-10-08 10:18:23.178 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:18:23 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:18:23 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:18:23 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:18:23.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:18:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:18:23 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  8 06:18:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:18:23 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  8 06:18:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:18:23 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 06:18:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:18:24 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  8 06:18:24 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:18:24 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1014: 353 pgs: 353 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  8 06:18:24 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:18:24 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:18:24 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:18:24.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:18:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:18:25] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Oct  8 06:18:25 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:18:25] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Oct  8 06:18:25 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:18:25 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:18:25 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:18:25.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:18:25 np0005475493 podman[279775]: 2025-10-08 10:18:25.898692351 +0000 UTC m=+0.059730480 container health_status 1ece418ccdf19a8019e8be8e665c938dc3c333817a211973536e2416f095c311 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct  8 06:18:25 np0005475493 podman[279776]: 2025-10-08 10:18:25.898209855 +0000 UTC m=+0.053653344 container health_status 96c17e36c3e57b043a764fc4d64e20610b8683e4881cc1857643eca7aeb37784 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct  8 06:18:26 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1015: 353 pgs: 353 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  8 06:18:26 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:18:26 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 06:18:26 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:18:26.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 06:18:27 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:18:27.178Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct  8 06:18:27 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:18:27.178Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct  8 06:18:27 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:18:27.178Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:18:27 np0005475493 nova_compute[262220]: 2025-10-08 10:18:27.448 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:18:27 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:18:27 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:18:27 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:18:27.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:18:28 np0005475493 nova_compute[262220]: 2025-10-08 10:18:28.179 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:18:28 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1016: 353 pgs: 353 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  8 06:18:28 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:18:28 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:18:28 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:18:28.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:18:29 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:18:28 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  8 06:18:29 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:18:28 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  8 06:18:29 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:18:28 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 06:18:29 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:18:29 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  8 06:18:29 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:18:29 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:18:29 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:18:29 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:18:29.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:18:30 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1017: 353 pgs: 353 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  8 06:18:30 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:18:30 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:18:30 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:18:30.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:18:31 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:18:31 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:18:31 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:18:31.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:18:32 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1018: 353 pgs: 353 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  8 06:18:32 np0005475493 nova_compute[262220]: 2025-10-08 10:18:32.450 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:18:32 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  8 06:18:32 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  8 06:18:32 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:18:32 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:18:32 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:18:32.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:18:33 np0005475493 nova_compute[262220]: 2025-10-08 10:18:33.182 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:18:33 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:18:33 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:18:33 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:18:33.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:18:34 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:18:33 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  8 06:18:34 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:18:33 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  8 06:18:34 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:18:33 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 06:18:34 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:18:34 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  8 06:18:34 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:18:34 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1019: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Oct  8 06:18:34 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:18:34 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:18:34 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:18:34.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:18:35 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:18:35] "GET /metrics HTTP/1.1" 200 48453 "" "Prometheus/2.51.0"
Oct  8 06:18:35 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:18:35] "GET /metrics HTTP/1.1" 200 48453 "" "Prometheus/2.51.0"
Oct  8 06:18:35 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:18:35 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 06:18:35 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:18:35.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 06:18:36 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1020: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Oct  8 06:18:36 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:18:36 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:18:36 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:18:36.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:18:37 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:18:37.179Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:18:37 np0005475493 nova_compute[262220]: 2025-10-08 10:18:37.451 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:18:37 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:18:37 np0005475493 podman[279851]: 2025-10-08 10:18:37.902104129 +0000 UTC m=+0.058092296 container health_status 2ac58bf9d2c7421f24478941b0f23af12cc30298ac84467db16bf52ecb1157c3 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, tcib_managed=true, config_id=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  8 06:18:37 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:18:37 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:18:37.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:18:38 np0005475493 nova_compute[262220]: 2025-10-08 10:18:38.183 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:18:38 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1021: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Oct  8 06:18:38 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:18:38 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:18:38 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:18:38.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:18:39 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:18:38 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  8 06:18:39 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:18:38 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  8 06:18:39 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:18:38 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 06:18:39 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:18:39 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  8 06:18:39 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:18:39 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:18:39 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 06:18:39 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:18:39.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 06:18:40 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1022: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 33 op/s
Oct  8 06:18:40 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:18:40 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:18:40 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:18:40.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:18:41 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:18:41 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:18:41 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:18:41.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:18:42 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1023: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 33 op/s
Oct  8 06:18:42 np0005475493 nova_compute[262220]: 2025-10-08 10:18:42.453 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:18:42 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:18:42 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:18:42 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:18:42.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:18:43 np0005475493 nova_compute[262220]: 2025-10-08 10:18:43.185 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:18:43 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:18:43 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:18:43 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:18:43.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:18:44 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:18:43 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  8 06:18:44 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:18:43 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  8 06:18:44 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:18:43 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 06:18:44 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:18:44 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  8 06:18:44 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:18:44 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1024: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 102 op/s
Oct  8 06:18:44 np0005475493 nova_compute[262220]: 2025-10-08 10:18:44.886 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:18:44 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:18:44 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:18:44 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:18:44.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:18:45 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:18:45.283 163175 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=12, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '2a:d6:ca', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'b2:8b:1e:40:84:a3'}, ipsec=False) old=SB_Global(nb_cfg=11) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  8 06:18:45 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:18:45.283 163175 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  8 06:18:45 np0005475493 nova_compute[262220]: 2025-10-08 10:18:45.284 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:18:45 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:18:45] "GET /metrics HTTP/1.1" 200 48453 "" "Prometheus/2.51.0"
Oct  8 06:18:45 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:18:45] "GET /metrics HTTP/1.1" 200 48453 "" "Prometheus/2.51.0"
Oct  8 06:18:45 np0005475493 nova_compute[262220]: 2025-10-08 10:18:45.882 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:18:45 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:18:45 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:18:45 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:18:45.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:18:46 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1025: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Oct  8 06:18:46 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  8 06:18:46 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:18:46 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  8 06:18:46 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:18:46 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Oct  8 06:18:46 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct  8 06:18:46 np0005475493 nova_compute[262220]: 2025-10-08 10:18:46.887 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:18:46 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:18:46 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.002000065s ======
Oct  8 06:18:46 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:18:46.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000065s
Oct  8 06:18:47 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:18:47.180Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:18:47 np0005475493 nova_compute[262220]: 2025-10-08 10:18:47.455 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:18:47 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct  8 06:18:47 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:18:47 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct  8 06:18:47 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:18:47 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:18:47 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:18:47 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct  8 06:18:47 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:18:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] Optimize plan auto_2025-10-08_10:18:47
Oct  8 06:18:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  8 06:18:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] do_upmap
Oct  8 06:18:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] pools ['.rgw.root', 'default.rgw.control', '.nfs', '.mgr', 'default.rgw.log', 'vms', 'cephfs.cephfs.meta', 'volumes', 'cephfs.cephfs.data', 'images', 'default.rgw.meta', 'backups']
Oct  8 06:18:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] prepared 0/10 upmap changes
Oct  8 06:18:47 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  8 06:18:47 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  8 06:18:47 np0005475493 nova_compute[262220]: 2025-10-08 10:18:47.884 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:18:47 np0005475493 nova_compute[262220]: 2025-10-08 10:18:47.886 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:18:47 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:18:47 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:18:47 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:18:47.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:18:47 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 06:18:47 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 06:18:47 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct  8 06:18:47 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:18:47 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct  8 06:18:48 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:18:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] _maybe_adjust
Oct  8 06:18:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:18:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  8 06:18:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:18:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00034841348814872695 of space, bias 1.0, pg target 0.10452404644461809 quantized to 32 (current 32)
Oct  8 06:18:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:18:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 06:18:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:18:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 06:18:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:18:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Oct  8 06:18:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:18:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  8 06:18:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:18:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 06:18:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:18:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct  8 06:18:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:18:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  8 06:18:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:18:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 06:18:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:18:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  8 06:18:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:18:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  8 06:18:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  8 06:18:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  8 06:18:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  8 06:18:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  8 06:18:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  8 06:18:48 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 06:18:48 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 06:18:48 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 06:18:48 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 06:18:48 np0005475493 nova_compute[262220]: 2025-10-08 10:18:48.221 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:18:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  8 06:18:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  8 06:18:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  8 06:18:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  8 06:18:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  8 06:18:48 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1026: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Oct  8 06:18:48 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:18:48 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:18:48 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:18:48 np0005475493 nova_compute[262220]: 2025-10-08 10:18:48.886 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:18:48 np0005475493 nova_compute[262220]: 2025-10-08 10:18:48.887 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  8 06:18:48 np0005475493 nova_compute[262220]: 2025-10-08 10:18:48.887 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  8 06:18:48 np0005475493 nova_compute[262220]: 2025-10-08 10:18:48.926 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  8 06:18:48 np0005475493 nova_compute[262220]: 2025-10-08 10:18:48.927 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:18:48 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct  8 06:18:48 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:18:48 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:18:48 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:18:48.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:18:49 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:18:48 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  8 06:18:49 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:18:48 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  8 06:18:49 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:18:48 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 06:18:49 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:18:49 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  8 06:18:49 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:18:49 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct  8 06:18:49 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:18:49 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0)
Oct  8 06:18:49 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Oct  8 06:18:49 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct  8 06:18:49 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:18:49 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct  8 06:18:49 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:18:49 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:18:49 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0)
Oct  8 06:18:49 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct  8 06:18:49 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  8 06:18:49 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  8 06:18:49 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct  8 06:18:49 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  8 06:18:49 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1027: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 14 KiB/s wr, 81 op/s
Oct  8 06:18:49 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct  8 06:18:49 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:18:49 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct  8 06:18:49 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:18:49 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct  8 06:18:49 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  8 06:18:49 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct  8 06:18:49 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  8 06:18:49 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  8 06:18:49 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  8 06:18:49 np0005475493 podman[280086]: 2025-10-08 10:18:49.620278738 +0000 UTC m=+0.118812077 container health_status 750e81e9592502f2fa35d047c8ae9bd75eb38ed415148db558454f5281397e7f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct  8 06:18:49 np0005475493 nova_compute[262220]: 2025-10-08 10:18:49.887 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:18:49 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:18:49 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 06:18:49 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:18:49.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 06:18:49 np0005475493 nova_compute[262220]: 2025-10-08 10:18:49.935 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  8 06:18:49 np0005475493 nova_compute[262220]: 2025-10-08 10:18:49.936 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  8 06:18:49 np0005475493 nova_compute[262220]: 2025-10-08 10:18:49.936 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  8 06:18:49 np0005475493 nova_compute[262220]: 2025-10-08 10:18:49.936 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  8 06:18:49 np0005475493 nova_compute[262220]: 2025-10-08 10:18:49.937 2 DEBUG oslo_concurrency.processutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  8 06:18:50 np0005475493 podman[280179]: 2025-10-08 10:18:49.94797346 +0000 UTC m=+0.024696258 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 06:18:50 np0005475493 podman[280179]: 2025-10-08 10:18:50.065171624 +0000 UTC m=+0.141894402 container create e66af8cdc9bb4a3a7ec529f8e4553dd90b7b7de519bdac859af6bd183c2c2045 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_boyd, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  8 06:18:50 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:18:50 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:18:50 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Oct  8 06:18:50 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:18:50 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:18:50 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct  8 06:18:50 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  8 06:18:50 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:18:50 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:18:50 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  8 06:18:50 np0005475493 systemd[1]: Started libpod-conmon-e66af8cdc9bb4a3a7ec529f8e4553dd90b7b7de519bdac859af6bd183c2c2045.scope.
Oct  8 06:18:50 np0005475493 systemd[1]: Started libcrun container.
Oct  8 06:18:50 np0005475493 ceph-mon[73572]: log_channel(cluster) log [WRN] : Health check failed: 1 failed cephadm daemon(s) (CEPHADM_FAILED_DAEMON)
Oct  8 06:18:50 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct  8 06:18:50 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/582462880' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  8 06:18:50 np0005475493 podman[280179]: 2025-10-08 10:18:50.45154445 +0000 UTC m=+0.528267248 container init e66af8cdc9bb4a3a7ec529f8e4553dd90b7b7de519bdac859af6bd183c2c2045 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_boyd, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  8 06:18:50 np0005475493 nova_compute[262220]: 2025-10-08 10:18:50.455 2 DEBUG oslo_concurrency.processutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.518s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  8 06:18:50 np0005475493 podman[280179]: 2025-10-08 10:18:50.459955132 +0000 UTC m=+0.536677910 container start e66af8cdc9bb4a3a7ec529f8e4553dd90b7b7de519bdac859af6bd183c2c2045 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_boyd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Oct  8 06:18:50 np0005475493 systemd[1]: libpod-e66af8cdc9bb4a3a7ec529f8e4553dd90b7b7de519bdac859af6bd183c2c2045.scope: Deactivated successfully.
Oct  8 06:18:50 np0005475493 peaceful_boyd[280216]: 167 167
Oct  8 06:18:50 np0005475493 conmon[280216]: conmon e66af8cdc9bb4a3a7ec5 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e66af8cdc9bb4a3a7ec529f8e4553dd90b7b7de519bdac859af6bd183c2c2045.scope/container/memory.events
Oct  8 06:18:50 np0005475493 podman[280179]: 2025-10-08 10:18:50.612092925 +0000 UTC m=+0.688815713 container attach e66af8cdc9bb4a3a7ec529f8e4553dd90b7b7de519bdac859af6bd183c2c2045 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_boyd, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Oct  8 06:18:50 np0005475493 podman[280179]: 2025-10-08 10:18:50.612739716 +0000 UTC m=+0.689462494 container died e66af8cdc9bb4a3a7ec529f8e4553dd90b7b7de519bdac859af6bd183c2c2045 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_boyd, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Oct  8 06:18:50 np0005475493 nova_compute[262220]: 2025-10-08 10:18:50.636 2 WARNING nova.virt.libvirt.driver [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct  8 06:18:50 np0005475493 nova_compute[262220]: 2025-10-08 10:18:50.639 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4558MB free_disk=59.96738052368164GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct  8 06:18:50 np0005475493 nova_compute[262220]: 2025-10-08 10:18:50.639 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  8 06:18:50 np0005475493 nova_compute[262220]: 2025-10-08 10:18:50.639 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  8 06:18:50 np0005475493 nova_compute[262220]: 2025-10-08 10:18:50.703 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct  8 06:18:50 np0005475493 nova_compute[262220]: 2025-10-08 10:18:50.704 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct  8 06:18:50 np0005475493 nova_compute[262220]: 2025-10-08 10:18:50.722 2 DEBUG oslo_concurrency.processutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  8 06:18:50 np0005475493 systemd[1]: var-lib-containers-storage-overlay-f10d2577c33aaf17490b22058472f07eb4194ac0c193a7561f0b20fbebb4a537-merged.mount: Deactivated successfully.
Oct  8 06:18:50 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:18:50 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:18:50 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:18:50.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:18:51 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct  8 06:18:51 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3983260049' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  8 06:18:51 np0005475493 podman[280179]: 2025-10-08 10:18:51.261517195 +0000 UTC m=+1.338239973 container remove e66af8cdc9bb4a3a7ec529f8e4553dd90b7b7de519bdac859af6bd183c2c2045 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_boyd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  8 06:18:51 np0005475493 nova_compute[262220]: 2025-10-08 10:18:51.266 2 DEBUG oslo_concurrency.processutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.544s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  8 06:18:51 np0005475493 systemd[1]: libpod-conmon-e66af8cdc9bb4a3a7ec529f8e4553dd90b7b7de519bdac859af6bd183c2c2045.scope: Deactivated successfully.
Oct  8 06:18:51 np0005475493 nova_compute[262220]: 2025-10-08 10:18:51.277 2 DEBUG nova.compute.provider_tree [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Inventory has not changed in ProviderTree for provider: 62e4b021-d3ae-43f9-883d-805e2c7d21a2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  8 06:18:51 np0005475493 ceph-mon[73572]: Health check failed: 1 failed cephadm daemon(s) (CEPHADM_FAILED_DAEMON)
Oct  8 06:18:51 np0005475493 nova_compute[262220]: 2025-10-08 10:18:51.299 2 DEBUG nova.scheduler.client.report [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Inventory has not changed for provider 62e4b021-d3ae-43f9-883d-805e2c7d21a2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  8 06:18:51 np0005475493 nova_compute[262220]: 2025-10-08 10:18:51.301 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct  8 06:18:51 np0005475493 nova_compute[262220]: 2025-10-08 10:18:51.301 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.662s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  8 06:18:51 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1028: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 75 op/s
Oct  8 06:18:51 np0005475493 podman[280265]: 2025-10-08 10:18:51.514835736 +0000 UTC m=+0.117595238 container create 083778ccbe5b15ebf5a99309cdd92b692ada136b6070363edbda563b1770f0b0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_beaver, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  8 06:18:51 np0005475493 podman[280265]: 2025-10-08 10:18:51.423453944 +0000 UTC m=+0.026213476 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 06:18:51 np0005475493 systemd[1]: Started libpod-conmon-083778ccbe5b15ebf5a99309cdd92b692ada136b6070363edbda563b1770f0b0.scope.
Oct  8 06:18:51 np0005475493 systemd[1]: Started libcrun container.
Oct  8 06:18:51 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04a78981cf25d09b84d399a17eb3a832bd9612ca4f6cb8bc659f5f4b8705549a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  8 06:18:51 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04a78981cf25d09b84d399a17eb3a832bd9612ca4f6cb8bc659f5f4b8705549a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 06:18:51 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04a78981cf25d09b84d399a17eb3a832bd9612ca4f6cb8bc659f5f4b8705549a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 06:18:51 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04a78981cf25d09b84d399a17eb3a832bd9612ca4f6cb8bc659f5f4b8705549a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  8 06:18:51 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04a78981cf25d09b84d399a17eb3a832bd9612ca4f6cb8bc659f5f4b8705549a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  8 06:18:51 np0005475493 podman[280265]: 2025-10-08 10:18:51.772774355 +0000 UTC m=+0.375533887 container init 083778ccbe5b15ebf5a99309cdd92b692ada136b6070363edbda563b1770f0b0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_beaver, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  8 06:18:51 np0005475493 podman[280265]: 2025-10-08 10:18:51.782095035 +0000 UTC m=+0.384854537 container start 083778ccbe5b15ebf5a99309cdd92b692ada136b6070363edbda563b1770f0b0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_beaver, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS)
Oct  8 06:18:51 np0005475493 podman[280265]: 2025-10-08 10:18:51.811183065 +0000 UTC m=+0.413942587 container attach 083778ccbe5b15ebf5a99309cdd92b692ada136b6070363edbda563b1770f0b0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_beaver, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Oct  8 06:18:51 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:18:51 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:18:51 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:18:51.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:18:52 np0005475493 optimistic_beaver[280282]: --> passed data devices: 0 physical, 1 LVM
Oct  8 06:18:52 np0005475493 optimistic_beaver[280282]: --> All data devices are unavailable
Oct  8 06:18:52 np0005475493 systemd[1]: libpod-083778ccbe5b15ebf5a99309cdd92b692ada136b6070363edbda563b1770f0b0.scope: Deactivated successfully.
Oct  8 06:18:52 np0005475493 podman[280265]: 2025-10-08 10:18:52.165801236 +0000 UTC m=+0.768560758 container died 083778ccbe5b15ebf5a99309cdd92b692ada136b6070363edbda563b1770f0b0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_beaver, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  8 06:18:52 np0005475493 systemd[1]: var-lib-containers-storage-overlay-04a78981cf25d09b84d399a17eb3a832bd9612ca4f6cb8bc659f5f4b8705549a-merged.mount: Deactivated successfully.
Oct  8 06:18:52 np0005475493 nova_compute[262220]: 2025-10-08 10:18:52.301 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  8 06:18:52 np0005475493 nova_compute[262220]: 2025-10-08 10:18:52.303 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct  8 06:18:52 np0005475493 podman[280265]: 2025-10-08 10:18:52.398882192 +0000 UTC m=+1.001641694 container remove 083778ccbe5b15ebf5a99309cdd92b692ada136b6070363edbda563b1770f0b0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_beaver, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  8 06:18:52 np0005475493 systemd[1]: libpod-conmon-083778ccbe5b15ebf5a99309cdd92b692ada136b6070363edbda563b1770f0b0.scope: Deactivated successfully.
Oct  8 06:18:52 np0005475493 nova_compute[262220]: 2025-10-08 10:18:52.456 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  8 06:18:53 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:18:53 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:18:53 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:18:52.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:18:53 np0005475493 podman[280400]: 2025-10-08 10:18:52.988197231 +0000 UTC m=+0.026477835 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 06:18:53 np0005475493 nova_compute[262220]: 2025-10-08 10:18:53.223 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  8 06:18:53 np0005475493 podman[280400]: 2025-10-08 10:18:53.324689297 +0000 UTC m=+0.362969881 container create b77294d51df80c52afb1e5d3fb24369e192ef7436af6be61f3f9efdd74529a55 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_morse, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  8 06:18:53 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1029: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 75 op/s
Oct  8 06:18:53 np0005475493 systemd[1]: Started libpod-conmon-b77294d51df80c52afb1e5d3fb24369e192ef7436af6be61f3f9efdd74529a55.scope.
Oct  8 06:18:53 np0005475493 systemd[1]: Started libcrun container.
Oct  8 06:18:53 np0005475493 podman[280400]: 2025-10-08 10:18:53.579987761 +0000 UTC m=+0.618268375 container init b77294d51df80c52afb1e5d3fb24369e192ef7436af6be61f3f9efdd74529a55 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_morse, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  8 06:18:53 np0005475493 podman[280400]: 2025-10-08 10:18:53.587555695 +0000 UTC m=+0.625836279 container start b77294d51df80c52afb1e5d3fb24369e192ef7436af6be61f3f9efdd74529a55 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_morse, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Oct  8 06:18:53 np0005475493 zealous_morse[280417]: 167 167
Oct  8 06:18:53 np0005475493 systemd[1]: libpod-b77294d51df80c52afb1e5d3fb24369e192ef7436af6be61f3f9efdd74529a55.scope: Deactivated successfully.
Oct  8 06:18:53 np0005475493 podman[280400]: 2025-10-08 10:18:53.594751728 +0000 UTC m=+0.633032332 container attach b77294d51df80c52afb1e5d3fb24369e192ef7436af6be61f3f9efdd74529a55 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_morse, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct  8 06:18:53 np0005475493 podman[280400]: 2025-10-08 10:18:53.595609015 +0000 UTC m=+0.633889599 container died b77294d51df80c52afb1e5d3fb24369e192ef7436af6be61f3f9efdd74529a55 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_morse, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Oct  8 06:18:53 np0005475493 systemd[1]: var-lib-containers-storage-overlay-0b4ee78217bacdce93a5b98c659bbc000db6e7584692c858a88ef759dd807d80-merged.mount: Deactivated successfully.
Oct  8 06:18:53 np0005475493 podman[280400]: 2025-10-08 10:18:53.696340388 +0000 UTC m=+0.734620972 container remove b77294d51df80c52afb1e5d3fb24369e192ef7436af6be61f3f9efdd74529a55 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_morse, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  8 06:18:53 np0005475493 systemd[1]: libpod-conmon-b77294d51df80c52afb1e5d3fb24369e192ef7436af6be61f3f9efdd74529a55.scope: Deactivated successfully.
Oct  8 06:18:53 np0005475493 podman[280441]: 2025-10-08 10:18:53.864221779 +0000 UTC m=+0.042312567 container create f85f57a07af7a7b1ecde2e7048c6394d8559cd10bb0199565c604225bcb2edab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_spence, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct  8 06:18:53 np0005475493 nova_compute[262220]: 2025-10-08 10:18:53.887 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:18:53 np0005475493 systemd[1]: Started libpod-conmon-f85f57a07af7a7b1ecde2e7048c6394d8559cd10bb0199565c604225bcb2edab.scope.
Oct  8 06:18:53 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:18:53 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:18:53 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:18:53.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:18:53 np0005475493 podman[280441]: 2025-10-08 10:18:53.845311828 +0000 UTC m=+0.023402636 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 06:18:53 np0005475493 systemd[1]: Started libcrun container.
Oct  8 06:18:53 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6cf6fcc220f013fb3ae29c618c5ef1a851a9739ec84df7c4300ae7927fcb3949/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  8 06:18:53 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6cf6fcc220f013fb3ae29c618c5ef1a851a9739ec84df7c4300ae7927fcb3949/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 06:18:53 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6cf6fcc220f013fb3ae29c618c5ef1a851a9739ec84df7c4300ae7927fcb3949/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 06:18:53 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6cf6fcc220f013fb3ae29c618c5ef1a851a9739ec84df7c4300ae7927fcb3949/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  8 06:18:53 np0005475493 podman[280441]: 2025-10-08 10:18:53.970732328 +0000 UTC m=+0.148823126 container init f85f57a07af7a7b1ecde2e7048c6394d8559cd10bb0199565c604225bcb2edab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_spence, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Oct  8 06:18:53 np0005475493 podman[280441]: 2025-10-08 10:18:53.978178519 +0000 UTC m=+0.156269297 container start f85f57a07af7a7b1ecde2e7048c6394d8559cd10bb0199565c604225bcb2edab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_spence, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct  8 06:18:53 np0005475493 podman[280441]: 2025-10-08 10:18:53.982223939 +0000 UTC m=+0.160314747 container attach f85f57a07af7a7b1ecde2e7048c6394d8559cd10bb0199565c604225bcb2edab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_spence, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  8 06:18:54 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:18:53 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  8 06:18:54 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:18:53 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  8 06:18:54 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:18:53 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 06:18:54 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:18:54 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  8 06:18:54 np0005475493 peaceful_spence[280458]: {
Oct  8 06:18:54 np0005475493 peaceful_spence[280458]:    "1": [
Oct  8 06:18:54 np0005475493 peaceful_spence[280458]:        {
Oct  8 06:18:54 np0005475493 peaceful_spence[280458]:            "devices": [
Oct  8 06:18:54 np0005475493 peaceful_spence[280458]:                "/dev/loop3"
Oct  8 06:18:54 np0005475493 peaceful_spence[280458]:            ],
Oct  8 06:18:54 np0005475493 peaceful_spence[280458]:            "lv_name": "ceph_lv0",
Oct  8 06:18:54 np0005475493 peaceful_spence[280458]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  8 06:18:54 np0005475493 peaceful_spence[280458]:            "lv_size": "21470642176",
Oct  8 06:18:54 np0005475493 peaceful_spence[280458]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=787292cc-8154-50c4-9e00-e9be3e817149,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=85fe3e7b-5e0f-4a19-934c-310215b2e933,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct  8 06:18:54 np0005475493 peaceful_spence[280458]:            "lv_uuid": "znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj",
Oct  8 06:18:54 np0005475493 peaceful_spence[280458]:            "name": "ceph_lv0",
Oct  8 06:18:54 np0005475493 peaceful_spence[280458]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  8 06:18:54 np0005475493 peaceful_spence[280458]:            "tags": {
Oct  8 06:18:54 np0005475493 peaceful_spence[280458]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  8 06:18:54 np0005475493 peaceful_spence[280458]:                "ceph.block_uuid": "znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj",
Oct  8 06:18:54 np0005475493 peaceful_spence[280458]:                "ceph.cephx_lockbox_secret": "",
Oct  8 06:18:54 np0005475493 peaceful_spence[280458]:                "ceph.cluster_fsid": "787292cc-8154-50c4-9e00-e9be3e817149",
Oct  8 06:18:54 np0005475493 peaceful_spence[280458]:                "ceph.cluster_name": "ceph",
Oct  8 06:18:54 np0005475493 peaceful_spence[280458]:                "ceph.crush_device_class": "",
Oct  8 06:18:54 np0005475493 peaceful_spence[280458]:                "ceph.encrypted": "0",
Oct  8 06:18:54 np0005475493 peaceful_spence[280458]:                "ceph.osd_fsid": "85fe3e7b-5e0f-4a19-934c-310215b2e933",
Oct  8 06:18:54 np0005475493 peaceful_spence[280458]:                "ceph.osd_id": "1",
Oct  8 06:18:54 np0005475493 peaceful_spence[280458]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  8 06:18:54 np0005475493 peaceful_spence[280458]:                "ceph.type": "block",
Oct  8 06:18:54 np0005475493 peaceful_spence[280458]:                "ceph.vdo": "0",
Oct  8 06:18:54 np0005475493 peaceful_spence[280458]:                "ceph.with_tpm": "0"
Oct  8 06:18:54 np0005475493 peaceful_spence[280458]:            },
Oct  8 06:18:54 np0005475493 peaceful_spence[280458]:            "type": "block",
Oct  8 06:18:54 np0005475493 peaceful_spence[280458]:            "vg_name": "ceph_vg0"
Oct  8 06:18:54 np0005475493 peaceful_spence[280458]:        }
Oct  8 06:18:54 np0005475493 peaceful_spence[280458]:    ]
Oct  8 06:18:54 np0005475493 peaceful_spence[280458]: }
Oct  8 06:18:54 np0005475493 systemd[1]: libpod-f85f57a07af7a7b1ecde2e7048c6394d8559cd10bb0199565c604225bcb2edab.scope: Deactivated successfully.
Oct  8 06:18:54 np0005475493 podman[280441]: 2025-10-08 10:18:54.276414169 +0000 UTC m=+0.454504967 container died f85f57a07af7a7b1ecde2e7048c6394d8559cd10bb0199565c604225bcb2edab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_spence, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  8 06:18:54 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:18:54 np0005475493 systemd[1]: var-lib-containers-storage-overlay-6cf6fcc220f013fb3ae29c618c5ef1a851a9739ec84df7c4300ae7927fcb3949-merged.mount: Deactivated successfully.
Oct  8 06:18:54 np0005475493 podman[280441]: 2025-10-08 10:18:54.338797233 +0000 UTC m=+0.516888011 container remove f85f57a07af7a7b1ecde2e7048c6394d8559cd10bb0199565c604225bcb2edab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_spence, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  8 06:18:54 np0005475493 systemd[1]: libpod-conmon-f85f57a07af7a7b1ecde2e7048c6394d8559cd10bb0199565c604225bcb2edab.scope: Deactivated successfully.
Oct  8 06:18:54 np0005475493 podman[280573]: 2025-10-08 10:18:54.920421214 +0000 UTC m=+0.036956044 container create 98a36cf99b62017bfd10d393678b9b69d8892b18ca76cc077ba8eadeead9a5da (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_wescoff, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  8 06:18:54 np0005475493 systemd[1]: Started libpod-conmon-98a36cf99b62017bfd10d393678b9b69d8892b18ca76cc077ba8eadeead9a5da.scope.
Oct  8 06:18:54 np0005475493 systemd[1]: Started libcrun container.
Oct  8 06:18:55 np0005475493 podman[280573]: 2025-10-08 10:18:54.905553345 +0000 UTC m=+0.022088195 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 06:18:55 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:18:55 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 06:18:55 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:18:55.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 06:18:55 np0005475493 podman[280573]: 2025-10-08 10:18:55.005121909 +0000 UTC m=+0.121656799 container init 98a36cf99b62017bfd10d393678b9b69d8892b18ca76cc077ba8eadeead9a5da (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_wescoff, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Oct  8 06:18:55 np0005475493 podman[280573]: 2025-10-08 10:18:55.016191777 +0000 UTC m=+0.132726597 container start 98a36cf99b62017bfd10d393678b9b69d8892b18ca76cc077ba8eadeead9a5da (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_wescoff, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  8 06:18:55 np0005475493 podman[280573]: 2025-10-08 10:18:55.019962639 +0000 UTC m=+0.136497579 container attach 98a36cf99b62017bfd10d393678b9b69d8892b18ca76cc077ba8eadeead9a5da (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_wescoff, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  8 06:18:55 np0005475493 romantic_wescoff[280589]: 167 167
Oct  8 06:18:55 np0005475493 systemd[1]: libpod-98a36cf99b62017bfd10d393678b9b69d8892b18ca76cc077ba8eadeead9a5da.scope: Deactivated successfully.
Oct  8 06:18:55 np0005475493 podman[280573]: 2025-10-08 10:18:55.022596204 +0000 UTC m=+0.139131044 container died 98a36cf99b62017bfd10d393678b9b69d8892b18ca76cc077ba8eadeead9a5da (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_wescoff, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Oct  8 06:18:55 np0005475493 systemd[1]: var-lib-containers-storage-overlay-3cb75991f0aa03e0cb433af04209c40d6304384a53b59439bfc624eb4fe2506b-merged.mount: Deactivated successfully.
Oct  8 06:18:55 np0005475493 podman[280573]: 2025-10-08 10:18:55.068427443 +0000 UTC m=+0.184962273 container remove 98a36cf99b62017bfd10d393678b9b69d8892b18ca76cc077ba8eadeead9a5da (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_wescoff, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Oct  8 06:18:55 np0005475493 systemd[1]: libpod-conmon-98a36cf99b62017bfd10d393678b9b69d8892b18ca76cc077ba8eadeead9a5da.scope: Deactivated successfully.
Oct  8 06:18:55 np0005475493 podman[280614]: 2025-10-08 10:18:55.232008296 +0000 UTC m=+0.045545472 container create f5ebc85e516d4ea8e99874c20940ccf82cdbcc8f0a92239c8cc729edd24ede2f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_goldwasser, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  8 06:18:55 np0005475493 systemd[1]: Started libpod-conmon-f5ebc85e516d4ea8e99874c20940ccf82cdbcc8f0a92239c8cc729edd24ede2f.scope.
Oct  8 06:18:55 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:18:55.285 163175 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=26869918-b723-425c-a2e1-0d697f3d0fec, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '12'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  8 06:18:55 np0005475493 systemd[1]: Started libcrun container.
Oct  8 06:18:55 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99ab2ca7e1c0ecb88920b5a825ced396aa6ca3db9eced323c6a05c325eeccd41/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  8 06:18:55 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99ab2ca7e1c0ecb88920b5a825ced396aa6ca3db9eced323c6a05c325eeccd41/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 06:18:55 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99ab2ca7e1c0ecb88920b5a825ced396aa6ca3db9eced323c6a05c325eeccd41/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 06:18:55 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99ab2ca7e1c0ecb88920b5a825ced396aa6ca3db9eced323c6a05c325eeccd41/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  8 06:18:55 np0005475493 podman[280614]: 2025-10-08 10:18:55.213280071 +0000 UTC m=+0.026817267 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 06:18:55 np0005475493 podman[280614]: 2025-10-08 10:18:55.309844979 +0000 UTC m=+0.123382175 container init f5ebc85e516d4ea8e99874c20940ccf82cdbcc8f0a92239c8cc729edd24ede2f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_goldwasser, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  8 06:18:55 np0005475493 podman[280614]: 2025-10-08 10:18:55.318187548 +0000 UTC m=+0.131724724 container start f5ebc85e516d4ea8e99874c20940ccf82cdbcc8f0a92239c8cc729edd24ede2f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_goldwasser, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct  8 06:18:55 np0005475493 podman[280614]: 2025-10-08 10:18:55.321347771 +0000 UTC m=+0.134884957 container attach f5ebc85e516d4ea8e99874c20940ccf82cdbcc8f0a92239c8cc729edd24ede2f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_goldwasser, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  8 06:18:55 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1030: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 380 KiB/s rd, 2.3 MiB/s wr, 66 op/s
Oct  8 06:18:55 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:18:55] "GET /metrics HTTP/1.1" 200 48470 "" "Prometheus/2.51.0"
Oct  8 06:18:55 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:18:55] "GET /metrics HTTP/1.1" 200 48470 "" "Prometheus/2.51.0"
Oct  8 06:18:55 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:18:55 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:18:55 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:18:55.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:18:55 np0005475493 lvm[280720]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct  8 06:18:55 np0005475493 lvm[280720]: VG ceph_vg0 finished
Oct  8 06:18:56 np0005475493 podman[280706]: 2025-10-08 10:18:56.011432313 +0000 UTC m=+0.064815673 container health_status 96c17e36c3e57b043a764fc4d64e20610b8683e4881cc1857643eca7aeb37784 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, managed_by=edpm_ansible)
Oct  8 06:18:56 np0005475493 podman[280705]: 2025-10-08 10:18:56.011761464 +0000 UTC m=+0.065876318 container health_status 1ece418ccdf19a8019e8be8e665c938dc3c333817a211973536e2416f095c311 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_id=multipathd)
Oct  8 06:18:56 np0005475493 blissful_goldwasser[280631]: {}
Oct  8 06:18:56 np0005475493 systemd[1]: libpod-f5ebc85e516d4ea8e99874c20940ccf82cdbcc8f0a92239c8cc729edd24ede2f.scope: Deactivated successfully.
Oct  8 06:18:56 np0005475493 systemd[1]: libpod-f5ebc85e516d4ea8e99874c20940ccf82cdbcc8f0a92239c8cc729edd24ede2f.scope: Consumed 1.175s CPU time.
Oct  8 06:18:56 np0005475493 podman[280614]: 2025-10-08 10:18:56.055617271 +0000 UTC m=+0.869154467 container died f5ebc85e516d4ea8e99874c20940ccf82cdbcc8f0a92239c8cc729edd24ede2f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_goldwasser, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Oct  8 06:18:56 np0005475493 systemd[1]: var-lib-containers-storage-overlay-99ab2ca7e1c0ecb88920b5a825ced396aa6ca3db9eced323c6a05c325eeccd41-merged.mount: Deactivated successfully.
Oct  8 06:18:56 np0005475493 podman[280614]: 2025-10-08 10:18:56.112407694 +0000 UTC m=+0.925944900 container remove f5ebc85e516d4ea8e99874c20940ccf82cdbcc8f0a92239c8cc729edd24ede2f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_goldwasser, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  8 06:18:56 np0005475493 systemd[1]: libpod-conmon-f5ebc85e516d4ea8e99874c20940ccf82cdbcc8f0a92239c8cc729edd24ede2f.scope: Deactivated successfully.
Oct  8 06:18:56 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  8 06:18:56 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:18:56 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  8 06:18:56 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:18:57 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:18:57 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:18:57 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:18:57.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:18:57 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:18:57.181Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:18:57 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:18:57 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:18:57 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1031: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 380 KiB/s rd, 2.3 MiB/s wr, 66 op/s
Oct  8 06:18:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:18:57.417 163175 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  8 06:18:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:18:57.417 163175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  8 06:18:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:18:57.417 163175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  8 06:18:57 np0005475493 nova_compute[262220]: 2025-10-08 10:18:57.459 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:18:57 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:18:57 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:18:57 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:18:57.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:18:58 np0005475493 nova_compute[262220]: 2025-10-08 10:18:58.226 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:18:59 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:18:58 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  8 06:18:59 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:18:58 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  8 06:18:59 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:18:58 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 06:18:59 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:18:59 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  8 06:18:59 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:18:59 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:18:59 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:18:59.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:18:59 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:18:59 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1032: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 423 KiB/s rd, 2.3 MiB/s wr, 74 op/s
Oct  8 06:18:59 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:18:59 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:18:59 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:18:59.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:19:01 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:19:01 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:19:01 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:19:01.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:19:01 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1033: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 388 KiB/s rd, 2.1 MiB/s wr, 67 op/s
Oct  8 06:19:01 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:19:01 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:19:01 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:19:01.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:19:02 np0005475493 nova_compute[262220]: 2025-10-08 10:19:02.463 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:19:02 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/prometheus/health_history}] v 0)
Oct  8 06:19:02 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:19:02 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  8 06:19:02 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  8 06:19:03 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:19:03 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:19:03 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:19:03.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:19:03 np0005475493 nova_compute[262220]: 2025-10-08 10:19:03.271 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:19:03 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1034: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 388 KiB/s rd, 2.1 MiB/s wr, 67 op/s
Oct  8 06:19:03 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:19:03 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:19:03 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:19:03 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:19:03.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:19:04 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:19:03 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  8 06:19:04 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:19:03 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  8 06:19:04 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:19:03 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 06:19:04 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:19:04 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  8 06:19:04 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:19:05 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:19:05 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 06:19:05 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:19:05.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 06:19:05 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1035: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 388 KiB/s rd, 2.1 MiB/s wr, 68 op/s
Oct  8 06:19:05 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:19:05] "GET /metrics HTTP/1.1" 200 48475 "" "Prometheus/2.51.0"
Oct  8 06:19:05 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:19:05] "GET /metrics HTTP/1.1" 200 48475 "" "Prometheus/2.51.0"
Oct  8 06:19:05 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:19:05 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:19:05 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:19:05.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:19:07 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:19:07 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:19:07 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:19:07.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:19:07 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:19:07.183Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct  8 06:19:07 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:19:07.183Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct  8 06:19:07 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:19:07.183Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct  8 06:19:07 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1036: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 40 KiB/s rd, 22 KiB/s wr, 7 op/s
Oct  8 06:19:07 np0005475493 nova_compute[262220]: 2025-10-08 10:19:07.467 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:19:07 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:19:07 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:19:07 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:19:07.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:19:08 np0005475493 nova_compute[262220]: 2025-10-08 10:19:08.273 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:19:08 np0005475493 podman[280800]: 2025-10-08 10:19:08.931804572 +0000 UTC m=+0.084805439 container health_status 2ac58bf9d2c7421f24478941b0f23af12cc30298ac84467db16bf52ecb1157c3 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team)
Oct  8 06:19:09 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:19:08 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  8 06:19:09 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:19:08 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  8 06:19:09 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:19:08 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 06:19:09 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:19:09 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  8 06:19:09 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:19:09 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:19:09 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:19:09.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:19:09 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:19:09 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1037: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 60 KiB/s rd, 23 KiB/s wr, 35 op/s
Oct  8 06:19:09 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:19:09 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:19:09 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:19:09.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:19:11 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:19:11 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:19:11 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:19:11.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:19:11 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1038: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 14 KiB/s wr, 29 op/s
Oct  8 06:19:11 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:19:11 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 06:19:11 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:19:11.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 06:19:12 np0005475493 nova_compute[262220]: 2025-10-08 10:19:12.512 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:19:13 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:19:13 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 06:19:13 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:19:13.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 06:19:13 np0005475493 nova_compute[262220]: 2025-10-08 10:19:13.276 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:19:13 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1039: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 14 KiB/s wr, 29 op/s
Oct  8 06:19:13 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:19:13 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:19:13 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:19:13.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:19:14 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:19:13 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  8 06:19:14 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:19:13 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  8 06:19:14 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:19:13 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 06:19:14 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:19:14 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  8 06:19:14 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:19:15 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:19:15 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:19:15 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:19:15.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:19:15 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1040: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 14 KiB/s wr, 29 op/s
Oct  8 06:19:15 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:19:15] "GET /metrics HTTP/1.1" 200 48475 "" "Prometheus/2.51.0"
Oct  8 06:19:15 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:19:15] "GET /metrics HTTP/1.1" 200 48475 "" "Prometheus/2.51.0"
Oct  8 06:19:15 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:19:15 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:19:15 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:19:15.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:19:17 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:19:17 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 06:19:17 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:19:17.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 06:19:17 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:19:17.185Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:19:17 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1041: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Oct  8 06:19:17 np0005475493 nova_compute[262220]: 2025-10-08 10:19:17.514 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:19:17 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  8 06:19:17 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  8 06:19:17 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 06:19:17 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 06:19:17 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:19:17 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 06:19:17 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:19:17.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 06:19:18 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 06:19:18 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 06:19:18 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 06:19:18 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 06:19:18 np0005475493 nova_compute[262220]: 2025-10-08 10:19:18.324 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:19:19 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:19:18 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  8 06:19:19 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:19:18 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  8 06:19:19 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:19:18 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 06:19:19 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:19:19 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  8 06:19:19 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:19:19 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:19:19 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:19:19.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:19:19 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:19:19 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1042: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 29 op/s
Oct  8 06:19:19 np0005475493 podman[280857]: 2025-10-08 10:19:19.927179572 +0000 UTC m=+0.084561192 container health_status 750e81e9592502f2fa35d047c8ae9bd75eb38ed415148db558454f5281397e7f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  8 06:19:19 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:19:19 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 06:19:19 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:19:19.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 06:19:21 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:19:21 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:19:21 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:19:21.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:19:21 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1043: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  8 06:19:21 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:19:21 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:19:21 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:19:21.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:19:22 np0005475493 nova_compute[262220]: 2025-10-08 10:19:22.515 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:19:23 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:19:23 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:19:23 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:19:23.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:19:23 np0005475493 nova_compute[262220]: 2025-10-08 10:19:23.326 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:19:23 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1044: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  8 06:19:23 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:19:23 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 06:19:23 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:19:23.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 06:19:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:19:23 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  8 06:19:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:19:23 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  8 06:19:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:19:23 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 06:19:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:19:24 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  8 06:19:24 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:19:25 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:19:25 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:19:25 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:19:25.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:19:25 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1045: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  8 06:19:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:19:25] "GET /metrics HTTP/1.1" 200 48460 "" "Prometheus/2.51.0"
Oct  8 06:19:25 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:19:25] "GET /metrics HTTP/1.1" 200 48460 "" "Prometheus/2.51.0"
Oct  8 06:19:25 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:19:25 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:19:25 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:19:25.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:19:26 np0005475493 nova_compute[262220]: 2025-10-08 10:19:26.852 2 DEBUG oslo_concurrency.lockutils [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Acquiring lock "7d19d2c6-6de1-4096-99e4-24b4265b9c09" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  8 06:19:26 np0005475493 nova_compute[262220]: 2025-10-08 10:19:26.852 2 DEBUG oslo_concurrency.lockutils [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Lock "7d19d2c6-6de1-4096-99e4-24b4265b9c09" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  8 06:19:26 np0005475493 podman[280891]: 2025-10-08 10:19:26.908970131 +0000 UTC m=+0.058511011 container health_status 96c17e36c3e57b043a764fc4d64e20610b8683e4881cc1857643eca7aeb37784 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Oct  8 06:19:26 np0005475493 nova_compute[262220]: 2025-10-08 10:19:26.910 2 DEBUG nova.compute.manager [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  8 06:19:26 np0005475493 podman[280890]: 2025-10-08 10:19:26.939966001 +0000 UTC m=+0.089538602 container health_status 1ece418ccdf19a8019e8be8e665c938dc3c333817a211973536e2416f095c311 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=multipathd)
Oct  8 06:19:27 np0005475493 nova_compute[262220]: 2025-10-08 10:19:27.034 2 DEBUG oslo_concurrency.lockutils [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  8 06:19:27 np0005475493 nova_compute[262220]: 2025-10-08 10:19:27.035 2 DEBUG oslo_concurrency.lockutils [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  8 06:19:27 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:19:27 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:19:27 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:19:27.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:19:27 np0005475493 nova_compute[262220]: 2025-10-08 10:19:27.041 2 DEBUG nova.virt.hardware [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  8 06:19:27 np0005475493 nova_compute[262220]: 2025-10-08 10:19:27.042 2 INFO nova.compute.claims [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  8 06:19:27 np0005475493 nova_compute[262220]: 2025-10-08 10:19:27.172 2 DEBUG oslo_concurrency.processutils [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  8 06:19:27 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:19:27.185Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:19:27 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1046: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  8 06:19:27 np0005475493 nova_compute[262220]: 2025-10-08 10:19:27.518 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:19:27 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct  8 06:19:27 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3324795878' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  8 06:19:27 np0005475493 nova_compute[262220]: 2025-10-08 10:19:27.645 2 DEBUG oslo_concurrency.processutils [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  8 06:19:27 np0005475493 nova_compute[262220]: 2025-10-08 10:19:27.651 2 DEBUG nova.compute.provider_tree [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Inventory has not changed in ProviderTree for provider: 62e4b021-d3ae-43f9-883d-805e2c7d21a2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  8 06:19:27 np0005475493 nova_compute[262220]: 2025-10-08 10:19:27.670 2 DEBUG nova.scheduler.client.report [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Inventory has not changed for provider 62e4b021-d3ae-43f9-883d-805e2c7d21a2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  8 06:19:27 np0005475493 nova_compute[262220]: 2025-10-08 10:19:27.706 2 DEBUG oslo_concurrency.lockutils [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.671s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  8 06:19:27 np0005475493 nova_compute[262220]: 2025-10-08 10:19:27.707 2 DEBUG nova.compute.manager [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  8 06:19:27 np0005475493 nova_compute[262220]: 2025-10-08 10:19:27.752 2 DEBUG nova.compute.manager [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  8 06:19:27 np0005475493 nova_compute[262220]: 2025-10-08 10:19:27.753 2 DEBUG nova.network.neutron [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  8 06:19:27 np0005475493 nova_compute[262220]: 2025-10-08 10:19:27.770 2 INFO nova.virt.libvirt.driver [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  8 06:19:27 np0005475493 nova_compute[262220]: 2025-10-08 10:19:27.789 2 DEBUG nova.compute.manager [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  8 06:19:27 np0005475493 nova_compute[262220]: 2025-10-08 10:19:27.881 2 DEBUG nova.compute.manager [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  8 06:19:27 np0005475493 nova_compute[262220]: 2025-10-08 10:19:27.883 2 DEBUG nova.virt.libvirt.driver [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  8 06:19:27 np0005475493 nova_compute[262220]: 2025-10-08 10:19:27.884 2 INFO nova.virt.libvirt.driver [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] Creating image(s)#033[00m
Oct  8 06:19:27 np0005475493 nova_compute[262220]: 2025-10-08 10:19:27.924 2 DEBUG nova.storage.rbd_utils [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] rbd image 7d19d2c6-6de1-4096-99e4-24b4265b9c09_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  8 06:19:27 np0005475493 nova_compute[262220]: 2025-10-08 10:19:27.960 2 DEBUG nova.storage.rbd_utils [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] rbd image 7d19d2c6-6de1-4096-99e4-24b4265b9c09_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  8 06:19:27 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:19:27 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:19:27 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:19:27.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:19:27 np0005475493 nova_compute[262220]: 2025-10-08 10:19:27.993 2 DEBUG nova.storage.rbd_utils [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] rbd image 7d19d2c6-6de1-4096-99e4-24b4265b9c09_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  8 06:19:27 np0005475493 nova_compute[262220]: 2025-10-08 10:19:27.996 2 DEBUG oslo_concurrency.processutils [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/3cde70359534d4758cf71011630bd1fb14a90c92 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  8 06:19:28 np0005475493 nova_compute[262220]: 2025-10-08 10:19:28.053 2 DEBUG oslo_concurrency.processutils [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/3cde70359534d4758cf71011630bd1fb14a90c92 --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  8 06:19:28 np0005475493 nova_compute[262220]: 2025-10-08 10:19:28.054 2 DEBUG oslo_concurrency.lockutils [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Acquiring lock "3cde70359534d4758cf71011630bd1fb14a90c92" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  8 06:19:28 np0005475493 nova_compute[262220]: 2025-10-08 10:19:28.055 2 DEBUG oslo_concurrency.lockutils [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Lock "3cde70359534d4758cf71011630bd1fb14a90c92" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  8 06:19:28 np0005475493 nova_compute[262220]: 2025-10-08 10:19:28.056 2 DEBUG oslo_concurrency.lockutils [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Lock "3cde70359534d4758cf71011630bd1fb14a90c92" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  8 06:19:28 np0005475493 nova_compute[262220]: 2025-10-08 10:19:28.086 2 DEBUG nova.storage.rbd_utils [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] rbd image 7d19d2c6-6de1-4096-99e4-24b4265b9c09_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  8 06:19:28 np0005475493 nova_compute[262220]: 2025-10-08 10:19:28.089 2 DEBUG oslo_concurrency.processutils [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/3cde70359534d4758cf71011630bd1fb14a90c92 7d19d2c6-6de1-4096-99e4-24b4265b9c09_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  8 06:19:28 np0005475493 nova_compute[262220]: 2025-10-08 10:19:28.328 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:19:28 np0005475493 nova_compute[262220]: 2025-10-08 10:19:28.350 2 DEBUG oslo_concurrency.processutils [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/3cde70359534d4758cf71011630bd1fb14a90c92 7d19d2c6-6de1-4096-99e4-24b4265b9c09_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.261s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  8 06:19:28 np0005475493 nova_compute[262220]: 2025-10-08 10:19:28.430 2 DEBUG nova.storage.rbd_utils [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] resizing rbd image 7d19d2c6-6de1-4096-99e4-24b4265b9c09_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  8 06:19:28 np0005475493 nova_compute[262220]: 2025-10-08 10:19:28.535 2 DEBUG nova.policy [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'd50b19166a7245e390a6e29682191263', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '0edb2c85b88b4b168a4b2e8c5ed4a05c', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  8 06:19:28 np0005475493 nova_compute[262220]: 2025-10-08 10:19:28.541 2 DEBUG nova.objects.instance [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Lazy-loading 'migration_context' on Instance uuid 7d19d2c6-6de1-4096-99e4-24b4265b9c09 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  8 06:19:28 np0005475493 nova_compute[262220]: 2025-10-08 10:19:28.646 2 DEBUG nova.virt.libvirt.driver [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  8 06:19:28 np0005475493 nova_compute[262220]: 2025-10-08 10:19:28.646 2 DEBUG nova.virt.libvirt.driver [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] Ensure instance console log exists: /var/lib/nova/instances/7d19d2c6-6de1-4096-99e4-24b4265b9c09/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  8 06:19:28 np0005475493 nova_compute[262220]: 2025-10-08 10:19:28.647 2 DEBUG oslo_concurrency.lockutils [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  8 06:19:28 np0005475493 nova_compute[262220]: 2025-10-08 10:19:28.647 2 DEBUG oslo_concurrency.lockutils [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  8 06:19:28 np0005475493 nova_compute[262220]: 2025-10-08 10:19:28.647 2 DEBUG oslo_concurrency.lockutils [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  8 06:19:29 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:19:28 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  8 06:19:29 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:19:28 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  8 06:19:29 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:19:28 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 06:19:29 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:19:29 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  8 06:19:29 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:19:29 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:19:29 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:19:29.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:19:29 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:19:29 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1047: 353 pgs: 353 active+clean; 88 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Oct  8 06:19:29 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:19:29 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 06:19:29 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:19:29.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 06:19:30 np0005475493 nova_compute[262220]: 2025-10-08 10:19:30.724 2 DEBUG nova.network.neutron [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] Successfully created port: 29abf06b-1e1a-46cb-9cc1-7fa777795883 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  8 06:19:31 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:19:31 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:19:31 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:19:31.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:19:31 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1048: 353 pgs: 353 active+clean; 88 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Oct  8 06:19:31 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:19:31 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:19:31 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:19:31.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:19:32 np0005475493 nova_compute[262220]: 2025-10-08 10:19:32.511 2 DEBUG nova.network.neutron [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] Successfully updated port: 29abf06b-1e1a-46cb-9cc1-7fa777795883 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  8 06:19:32 np0005475493 nova_compute[262220]: 2025-10-08 10:19:32.522 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:19:32 np0005475493 nova_compute[262220]: 2025-10-08 10:19:32.531 2 DEBUG oslo_concurrency.lockutils [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Acquiring lock "refresh_cache-7d19d2c6-6de1-4096-99e4-24b4265b9c09" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  8 06:19:32 np0005475493 nova_compute[262220]: 2025-10-08 10:19:32.531 2 DEBUG oslo_concurrency.lockutils [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Acquired lock "refresh_cache-7d19d2c6-6de1-4096-99e4-24b4265b9c09" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  8 06:19:32 np0005475493 nova_compute[262220]: 2025-10-08 10:19:32.531 2 DEBUG nova.network.neutron [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  8 06:19:32 np0005475493 nova_compute[262220]: 2025-10-08 10:19:32.667 2 DEBUG nova.compute.manager [req-4e2d0368-0e8a-489f-aa2c-93dd4a7774e4 req-7dc0b697-ecc2-4c17-976e-63662e534cb4 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] Received event network-changed-29abf06b-1e1a-46cb-9cc1-7fa777795883 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  8 06:19:32 np0005475493 nova_compute[262220]: 2025-10-08 10:19:32.667 2 DEBUG nova.compute.manager [req-4e2d0368-0e8a-489f-aa2c-93dd4a7774e4 req-7dc0b697-ecc2-4c17-976e-63662e534cb4 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] Refreshing instance network info cache due to event network-changed-29abf06b-1e1a-46cb-9cc1-7fa777795883. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  8 06:19:32 np0005475493 nova_compute[262220]: 2025-10-08 10:19:32.668 2 DEBUG oslo_concurrency.lockutils [req-4e2d0368-0e8a-489f-aa2c-93dd4a7774e4 req-7dc0b697-ecc2-4c17-976e-63662e534cb4 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Acquiring lock "refresh_cache-7d19d2c6-6de1-4096-99e4-24b4265b9c09" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  8 06:19:32 np0005475493 nova_compute[262220]: 2025-10-08 10:19:32.781 2 DEBUG nova.network.neutron [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  8 06:19:32 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  8 06:19:32 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  8 06:19:33 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:19:33 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:19:33 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:19:33.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:19:33 np0005475493 nova_compute[262220]: 2025-10-08 10:19:33.331 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:19:33 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1049: 353 pgs: 353 active+clean; 88 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Oct  8 06:19:33 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:19:33 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:19:33 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:19:33.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:19:34 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:19:33 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  8 06:19:34 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:19:33 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  8 06:19:34 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:19:33 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 06:19:34 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:19:34 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  8 06:19:34 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:19:34 np0005475493 nova_compute[262220]: 2025-10-08 10:19:34.485 2 DEBUG nova.network.neutron [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] Updating instance_info_cache with network_info: [{"id": "29abf06b-1e1a-46cb-9cc1-7fa777795883", "address": "fa:16:3e:00:0d:2d", "network": {"id": "c18c7476-aaa8-4977-81b5-fb17e88446e2", "bridge": "br-int", "label": "tempest-network-smoke--1578716951", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0edb2c85b88b4b168a4b2e8c5ed4a05c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29abf06b-1e", "ovs_interfaceid": "29abf06b-1e1a-46cb-9cc1-7fa777795883", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  8 06:19:34 np0005475493 nova_compute[262220]: 2025-10-08 10:19:34.507 2 DEBUG oslo_concurrency.lockutils [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Releasing lock "refresh_cache-7d19d2c6-6de1-4096-99e4-24b4265b9c09" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  8 06:19:34 np0005475493 nova_compute[262220]: 2025-10-08 10:19:34.507 2 DEBUG nova.compute.manager [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] Instance network_info: |[{"id": "29abf06b-1e1a-46cb-9cc1-7fa777795883", "address": "fa:16:3e:00:0d:2d", "network": {"id": "c18c7476-aaa8-4977-81b5-fb17e88446e2", "bridge": "br-int", "label": "tempest-network-smoke--1578716951", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0edb2c85b88b4b168a4b2e8c5ed4a05c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29abf06b-1e", "ovs_interfaceid": "29abf06b-1e1a-46cb-9cc1-7fa777795883", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  8 06:19:34 np0005475493 nova_compute[262220]: 2025-10-08 10:19:34.508 2 DEBUG oslo_concurrency.lockutils [req-4e2d0368-0e8a-489f-aa2c-93dd4a7774e4 req-7dc0b697-ecc2-4c17-976e-63662e534cb4 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Acquired lock "refresh_cache-7d19d2c6-6de1-4096-99e4-24b4265b9c09" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  8 06:19:34 np0005475493 nova_compute[262220]: 2025-10-08 10:19:34.508 2 DEBUG nova.network.neutron [req-4e2d0368-0e8a-489f-aa2c-93dd4a7774e4 req-7dc0b697-ecc2-4c17-976e-63662e534cb4 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] Refreshing network info cache for port 29abf06b-1e1a-46cb-9cc1-7fa777795883 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  8 06:19:34 np0005475493 nova_compute[262220]: 2025-10-08 10:19:34.511 2 DEBUG nova.virt.libvirt.driver [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] Start _get_guest_xml network_info=[{"id": "29abf06b-1e1a-46cb-9cc1-7fa777795883", "address": "fa:16:3e:00:0d:2d", "network": {"id": "c18c7476-aaa8-4977-81b5-fb17e88446e2", "bridge": "br-int", "label": "tempest-network-smoke--1578716951", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0edb2c85b88b4b168a4b2e8c5ed4a05c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29abf06b-1e", "ovs_interfaceid": "29abf06b-1e1a-46cb-9cc1-7fa777795883", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-08T10:08:49Z,direct_url=<?>,disk_format='qcow2',id=e5994bac-385d-4cfe-962e-386aa0559983,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='9bebada0871a4efa9df99c6beff34c13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-08T10:08:53Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'encryption_secret_uuid': None, 'encryption_format': None, 'device_name': '/dev/vda', 'disk_bus': 'virtio', 'boot_index': 0, 'encrypted': False, 'encryption_options': None, 'device_type': 'disk', 'size': 0, 'image_id': 'e5994bac-385d-4cfe-962e-386aa0559983'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  8 06:19:34 np0005475493 nova_compute[262220]: 2025-10-08 10:19:34.516 2 WARNING nova.virt.libvirt.driver [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  8 06:19:34 np0005475493 nova_compute[262220]: 2025-10-08 10:19:34.521 2 DEBUG nova.virt.libvirt.host [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  8 06:19:34 np0005475493 nova_compute[262220]: 2025-10-08 10:19:34.522 2 DEBUG nova.virt.libvirt.host [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  8 06:19:34 np0005475493 nova_compute[262220]: 2025-10-08 10:19:34.525 2 DEBUG nova.virt.libvirt.host [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  8 06:19:34 np0005475493 nova_compute[262220]: 2025-10-08 10:19:34.525 2 DEBUG nova.virt.libvirt.host [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  8 06:19:34 np0005475493 nova_compute[262220]: 2025-10-08 10:19:34.526 2 DEBUG nova.virt.libvirt.driver [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  8 06:19:34 np0005475493 nova_compute[262220]: 2025-10-08 10:19:34.526 2 DEBUG nova.virt.hardware [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-08T10:08:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='461f98d6-ae65-4f86-8ae2-cc3cfaea2a46',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-08T10:08:49Z,direct_url=<?>,disk_format='qcow2',id=e5994bac-385d-4cfe-962e-386aa0559983,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='9bebada0871a4efa9df99c6beff34c13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-08T10:08:53Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  8 06:19:34 np0005475493 nova_compute[262220]: 2025-10-08 10:19:34.526 2 DEBUG nova.virt.hardware [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  8 06:19:34 np0005475493 nova_compute[262220]: 2025-10-08 10:19:34.527 2 DEBUG nova.virt.hardware [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  8 06:19:34 np0005475493 nova_compute[262220]: 2025-10-08 10:19:34.527 2 DEBUG nova.virt.hardware [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  8 06:19:34 np0005475493 nova_compute[262220]: 2025-10-08 10:19:34.527 2 DEBUG nova.virt.hardware [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  8 06:19:34 np0005475493 nova_compute[262220]: 2025-10-08 10:19:34.527 2 DEBUG nova.virt.hardware [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  8 06:19:34 np0005475493 nova_compute[262220]: 2025-10-08 10:19:34.527 2 DEBUG nova.virt.hardware [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  8 06:19:34 np0005475493 nova_compute[262220]: 2025-10-08 10:19:34.528 2 DEBUG nova.virt.hardware [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  8 06:19:34 np0005475493 nova_compute[262220]: 2025-10-08 10:19:34.528 2 DEBUG nova.virt.hardware [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  8 06:19:34 np0005475493 nova_compute[262220]: 2025-10-08 10:19:34.528 2 DEBUG nova.virt.hardware [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  8 06:19:34 np0005475493 nova_compute[262220]: 2025-10-08 10:19:34.528 2 DEBUG nova.virt.hardware [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  8 06:19:34 np0005475493 nova_compute[262220]: 2025-10-08 10:19:34.531 2 DEBUG oslo_concurrency.processutils [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  8 06:19:34 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Oct  8 06:19:34 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/494934549' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  8 06:19:34 np0005475493 nova_compute[262220]: 2025-10-08 10:19:34.976 2 DEBUG oslo_concurrency.processutils [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  8 06:19:35 np0005475493 nova_compute[262220]: 2025-10-08 10:19:35.006 2 DEBUG nova.storage.rbd_utils [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] rbd image 7d19d2c6-6de1-4096-99e4-24b4265b9c09_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  8 06:19:35 np0005475493 nova_compute[262220]: 2025-10-08 10:19:35.012 2 DEBUG oslo_concurrency.processutils [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  8 06:19:35 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:19:35 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:19:35 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:19:35.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:19:35 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1050: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Oct  8 06:19:35 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Oct  8 06:19:35 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4057436752' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  8 06:19:35 np0005475493 nova_compute[262220]: 2025-10-08 10:19:35.584 2 DEBUG oslo_concurrency.processutils [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.571s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  8 06:19:35 np0005475493 nova_compute[262220]: 2025-10-08 10:19:35.586 2 DEBUG nova.virt.libvirt.vif [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-08T10:19:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1442491120',display_name='tempest-TestNetworkBasicOps-server-1442491120',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1442491120',id=11,image_ref='e5994bac-385d-4cfe-962e-386aa0559983',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBA5zqA1Qj/FXMxdyzpBTW0ZXp5DxknDQcIVK3ARN25T6VayPziIvkKCLWAtPemraMv4byPsH7lpRR4PeiITQ6eibmU22T/5fhhxWj1Ai2d949LVQyVHFvTo1rGRRAeVdbw==',key_name='tempest-TestNetworkBasicOps-1126023314',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0edb2c85b88b4b168a4b2e8c5ed4a05c',ramdisk_id='',reservation_id='r-zjf5kwx6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e5994bac-385d-4cfe-962e-386aa0559983',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-139500885',owner_user_name='tempest-TestNetworkBasicOps-139500885-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-08T10:19:27Z,user_data=None,user_id='d50b19166a7245e390a6e29682191263',uuid=7d19d2c6-6de1-4096-99e4-24b4265b9c09,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "29abf06b-1e1a-46cb-9cc1-7fa777795883", "address": "fa:16:3e:00:0d:2d", "network": {"id": "c18c7476-aaa8-4977-81b5-fb17e88446e2", "bridge": "br-int", "label": "tempest-network-smoke--1578716951", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "0edb2c85b88b4b168a4b2e8c5ed4a05c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29abf06b-1e", "ovs_interfaceid": "29abf06b-1e1a-46cb-9cc1-7fa777795883", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  8 06:19:35 np0005475493 nova_compute[262220]: 2025-10-08 10:19:35.587 2 DEBUG nova.network.os_vif_util [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Converting VIF {"id": "29abf06b-1e1a-46cb-9cc1-7fa777795883", "address": "fa:16:3e:00:0d:2d", "network": {"id": "c18c7476-aaa8-4977-81b5-fb17e88446e2", "bridge": "br-int", "label": "tempest-network-smoke--1578716951", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0edb2c85b88b4b168a4b2e8c5ed4a05c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29abf06b-1e", "ovs_interfaceid": "29abf06b-1e1a-46cb-9cc1-7fa777795883", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  8 06:19:35 np0005475493 nova_compute[262220]: 2025-10-08 10:19:35.589 2 DEBUG nova.network.os_vif_util [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:00:0d:2d,bridge_name='br-int',has_traffic_filtering=True,id=29abf06b-1e1a-46cb-9cc1-7fa777795883,network=Network(c18c7476-aaa8-4977-81b5-fb17e88446e2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap29abf06b-1e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  8 06:19:35 np0005475493 nova_compute[262220]: 2025-10-08 10:19:35.590 2 DEBUG nova.objects.instance [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Lazy-loading 'pci_devices' on Instance uuid 7d19d2c6-6de1-4096-99e4-24b4265b9c09 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  8 06:19:35 np0005475493 nova_compute[262220]: 2025-10-08 10:19:35.609 2 DEBUG nova.virt.libvirt.driver [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] End _get_guest_xml xml=<domain type="kvm">
Oct  8 06:19:35 np0005475493 nova_compute[262220]:  <uuid>7d19d2c6-6de1-4096-99e4-24b4265b9c09</uuid>
Oct  8 06:19:35 np0005475493 nova_compute[262220]:  <name>instance-0000000b</name>
Oct  8 06:19:35 np0005475493 nova_compute[262220]:  <memory>131072</memory>
Oct  8 06:19:35 np0005475493 nova_compute[262220]:  <vcpu>1</vcpu>
Oct  8 06:19:35 np0005475493 nova_compute[262220]:  <metadata>
Oct  8 06:19:35 np0005475493 nova_compute[262220]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  8 06:19:35 np0005475493 nova_compute[262220]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  8 06:19:35 np0005475493 nova_compute[262220]:      <nova:name>tempest-TestNetworkBasicOps-server-1442491120</nova:name>
Oct  8 06:19:35 np0005475493 nova_compute[262220]:      <nova:creationTime>2025-10-08 10:19:34</nova:creationTime>
Oct  8 06:19:35 np0005475493 nova_compute[262220]:      <nova:flavor name="m1.nano">
Oct  8 06:19:35 np0005475493 nova_compute[262220]:        <nova:memory>128</nova:memory>
Oct  8 06:19:35 np0005475493 nova_compute[262220]:        <nova:disk>1</nova:disk>
Oct  8 06:19:35 np0005475493 nova_compute[262220]:        <nova:swap>0</nova:swap>
Oct  8 06:19:35 np0005475493 nova_compute[262220]:        <nova:ephemeral>0</nova:ephemeral>
Oct  8 06:19:35 np0005475493 nova_compute[262220]:        <nova:vcpus>1</nova:vcpus>
Oct  8 06:19:35 np0005475493 nova_compute[262220]:      </nova:flavor>
Oct  8 06:19:35 np0005475493 nova_compute[262220]:      <nova:owner>
Oct  8 06:19:35 np0005475493 nova_compute[262220]:        <nova:user uuid="d50b19166a7245e390a6e29682191263">tempest-TestNetworkBasicOps-139500885-project-member</nova:user>
Oct  8 06:19:35 np0005475493 nova_compute[262220]:        <nova:project uuid="0edb2c85b88b4b168a4b2e8c5ed4a05c">tempest-TestNetworkBasicOps-139500885</nova:project>
Oct  8 06:19:35 np0005475493 nova_compute[262220]:      </nova:owner>
Oct  8 06:19:35 np0005475493 nova_compute[262220]:      <nova:root type="image" uuid="e5994bac-385d-4cfe-962e-386aa0559983"/>
Oct  8 06:19:35 np0005475493 nova_compute[262220]:      <nova:ports>
Oct  8 06:19:35 np0005475493 nova_compute[262220]:        <nova:port uuid="29abf06b-1e1a-46cb-9cc1-7fa777795883">
Oct  8 06:19:35 np0005475493 nova_compute[262220]:          <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Oct  8 06:19:35 np0005475493 nova_compute[262220]:        </nova:port>
Oct  8 06:19:35 np0005475493 nova_compute[262220]:      </nova:ports>
Oct  8 06:19:35 np0005475493 nova_compute[262220]:    </nova:instance>
Oct  8 06:19:35 np0005475493 nova_compute[262220]:  </metadata>
Oct  8 06:19:35 np0005475493 nova_compute[262220]:  <sysinfo type="smbios">
Oct  8 06:19:35 np0005475493 nova_compute[262220]:    <system>
Oct  8 06:19:35 np0005475493 nova_compute[262220]:      <entry name="manufacturer">RDO</entry>
Oct  8 06:19:35 np0005475493 nova_compute[262220]:      <entry name="product">OpenStack Compute</entry>
Oct  8 06:19:35 np0005475493 nova_compute[262220]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  8 06:19:35 np0005475493 nova_compute[262220]:      <entry name="serial">7d19d2c6-6de1-4096-99e4-24b4265b9c09</entry>
Oct  8 06:19:35 np0005475493 nova_compute[262220]:      <entry name="uuid">7d19d2c6-6de1-4096-99e4-24b4265b9c09</entry>
Oct  8 06:19:35 np0005475493 nova_compute[262220]:      <entry name="family">Virtual Machine</entry>
Oct  8 06:19:35 np0005475493 nova_compute[262220]:    </system>
Oct  8 06:19:35 np0005475493 nova_compute[262220]:  </sysinfo>
Oct  8 06:19:35 np0005475493 nova_compute[262220]:  <os>
Oct  8 06:19:35 np0005475493 nova_compute[262220]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  8 06:19:35 np0005475493 nova_compute[262220]:    <boot dev="hd"/>
Oct  8 06:19:35 np0005475493 nova_compute[262220]:    <smbios mode="sysinfo"/>
Oct  8 06:19:35 np0005475493 nova_compute[262220]:  </os>
Oct  8 06:19:35 np0005475493 nova_compute[262220]:  <features>
Oct  8 06:19:35 np0005475493 nova_compute[262220]:    <acpi/>
Oct  8 06:19:35 np0005475493 nova_compute[262220]:    <apic/>
Oct  8 06:19:35 np0005475493 nova_compute[262220]:    <vmcoreinfo/>
Oct  8 06:19:35 np0005475493 nova_compute[262220]:  </features>
Oct  8 06:19:35 np0005475493 nova_compute[262220]:  <clock offset="utc">
Oct  8 06:19:35 np0005475493 nova_compute[262220]:    <timer name="pit" tickpolicy="delay"/>
Oct  8 06:19:35 np0005475493 nova_compute[262220]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  8 06:19:35 np0005475493 nova_compute[262220]:    <timer name="hpet" present="no"/>
Oct  8 06:19:35 np0005475493 nova_compute[262220]:  </clock>
Oct  8 06:19:35 np0005475493 nova_compute[262220]:  <cpu mode="host-model" match="exact">
Oct  8 06:19:35 np0005475493 nova_compute[262220]:    <topology sockets="1" cores="1" threads="1"/>
Oct  8 06:19:35 np0005475493 nova_compute[262220]:  </cpu>
Oct  8 06:19:35 np0005475493 nova_compute[262220]:  <devices>
Oct  8 06:19:35 np0005475493 nova_compute[262220]:    <disk type="network" device="disk">
Oct  8 06:19:35 np0005475493 nova_compute[262220]:      <driver type="raw" cache="none"/>
Oct  8 06:19:35 np0005475493 nova_compute[262220]:      <source protocol="rbd" name="vms/7d19d2c6-6de1-4096-99e4-24b4265b9c09_disk">
Oct  8 06:19:35 np0005475493 nova_compute[262220]:        <host name="192.168.122.100" port="6789"/>
Oct  8 06:19:35 np0005475493 nova_compute[262220]:        <host name="192.168.122.102" port="6789"/>
Oct  8 06:19:35 np0005475493 nova_compute[262220]:        <host name="192.168.122.101" port="6789"/>
Oct  8 06:19:35 np0005475493 nova_compute[262220]:      </source>
Oct  8 06:19:35 np0005475493 nova_compute[262220]:      <auth username="openstack">
Oct  8 06:19:35 np0005475493 nova_compute[262220]:        <secret type="ceph" uuid="787292cc-8154-50c4-9e00-e9be3e817149"/>
Oct  8 06:19:35 np0005475493 nova_compute[262220]:      </auth>
Oct  8 06:19:35 np0005475493 nova_compute[262220]:      <target dev="vda" bus="virtio"/>
Oct  8 06:19:35 np0005475493 nova_compute[262220]:    </disk>
Oct  8 06:19:35 np0005475493 nova_compute[262220]:    <disk type="network" device="cdrom">
Oct  8 06:19:35 np0005475493 nova_compute[262220]:      <driver type="raw" cache="none"/>
Oct  8 06:19:35 np0005475493 nova_compute[262220]:      <source protocol="rbd" name="vms/7d19d2c6-6de1-4096-99e4-24b4265b9c09_disk.config">
Oct  8 06:19:35 np0005475493 nova_compute[262220]:        <host name="192.168.122.100" port="6789"/>
Oct  8 06:19:35 np0005475493 nova_compute[262220]:        <host name="192.168.122.102" port="6789"/>
Oct  8 06:19:35 np0005475493 nova_compute[262220]:        <host name="192.168.122.101" port="6789"/>
Oct  8 06:19:35 np0005475493 nova_compute[262220]:      </source>
Oct  8 06:19:35 np0005475493 nova_compute[262220]:      <auth username="openstack">
Oct  8 06:19:35 np0005475493 nova_compute[262220]:        <secret type="ceph" uuid="787292cc-8154-50c4-9e00-e9be3e817149"/>
Oct  8 06:19:35 np0005475493 nova_compute[262220]:      </auth>
Oct  8 06:19:35 np0005475493 nova_compute[262220]:      <target dev="sda" bus="sata"/>
Oct  8 06:19:35 np0005475493 nova_compute[262220]:    </disk>
Oct  8 06:19:35 np0005475493 nova_compute[262220]:    <interface type="ethernet">
Oct  8 06:19:35 np0005475493 nova_compute[262220]:      <mac address="fa:16:3e:00:0d:2d"/>
Oct  8 06:19:35 np0005475493 nova_compute[262220]:      <model type="virtio"/>
Oct  8 06:19:35 np0005475493 nova_compute[262220]:      <driver name="vhost" rx_queue_size="512"/>
Oct  8 06:19:35 np0005475493 nova_compute[262220]:      <mtu size="1442"/>
Oct  8 06:19:35 np0005475493 nova_compute[262220]:      <target dev="tap29abf06b-1e"/>
Oct  8 06:19:35 np0005475493 nova_compute[262220]:    </interface>
Oct  8 06:19:35 np0005475493 nova_compute[262220]:    <serial type="pty">
Oct  8 06:19:35 np0005475493 nova_compute[262220]:      <log file="/var/lib/nova/instances/7d19d2c6-6de1-4096-99e4-24b4265b9c09/console.log" append="off"/>
Oct  8 06:19:35 np0005475493 nova_compute[262220]:    </serial>
Oct  8 06:19:35 np0005475493 nova_compute[262220]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  8 06:19:35 np0005475493 nova_compute[262220]:    <video>
Oct  8 06:19:35 np0005475493 nova_compute[262220]:      <model type="virtio"/>
Oct  8 06:19:35 np0005475493 nova_compute[262220]:    </video>
Oct  8 06:19:35 np0005475493 nova_compute[262220]:    <input type="tablet" bus="usb"/>
Oct  8 06:19:35 np0005475493 nova_compute[262220]:    <rng model="virtio">
Oct  8 06:19:35 np0005475493 nova_compute[262220]:      <backend model="random">/dev/urandom</backend>
Oct  8 06:19:35 np0005475493 nova_compute[262220]:    </rng>
Oct  8 06:19:35 np0005475493 nova_compute[262220]:    <controller type="pci" model="pcie-root"/>
Oct  8 06:19:35 np0005475493 nova_compute[262220]:    <controller type="pci" model="pcie-root-port"/>
Oct  8 06:19:35 np0005475493 nova_compute[262220]:    <controller type="pci" model="pcie-root-port"/>
Oct  8 06:19:35 np0005475493 nova_compute[262220]:    <controller type="pci" model="pcie-root-port"/>
Oct  8 06:19:35 np0005475493 nova_compute[262220]:    <controller type="pci" model="pcie-root-port"/>
Oct  8 06:19:35 np0005475493 nova_compute[262220]:    <controller type="pci" model="pcie-root-port"/>
Oct  8 06:19:35 np0005475493 nova_compute[262220]:    <controller type="pci" model="pcie-root-port"/>
Oct  8 06:19:35 np0005475493 nova_compute[262220]:    <controller type="pci" model="pcie-root-port"/>
Oct  8 06:19:35 np0005475493 nova_compute[262220]:    <controller type="pci" model="pcie-root-port"/>
Oct  8 06:19:35 np0005475493 nova_compute[262220]:    <controller type="pci" model="pcie-root-port"/>
Oct  8 06:19:35 np0005475493 nova_compute[262220]:    <controller type="pci" model="pcie-root-port"/>
Oct  8 06:19:35 np0005475493 nova_compute[262220]:    <controller type="pci" model="pcie-root-port"/>
Oct  8 06:19:35 np0005475493 nova_compute[262220]:    <controller type="pci" model="pcie-root-port"/>
Oct  8 06:19:35 np0005475493 nova_compute[262220]:    <controller type="pci" model="pcie-root-port"/>
Oct  8 06:19:35 np0005475493 nova_compute[262220]:    <controller type="pci" model="pcie-root-port"/>
Oct  8 06:19:35 np0005475493 nova_compute[262220]:    <controller type="pci" model="pcie-root-port"/>
Oct  8 06:19:35 np0005475493 nova_compute[262220]:    <controller type="pci" model="pcie-root-port"/>
Oct  8 06:19:35 np0005475493 nova_compute[262220]:    <controller type="pci" model="pcie-root-port"/>
Oct  8 06:19:35 np0005475493 nova_compute[262220]:    <controller type="pci" model="pcie-root-port"/>
Oct  8 06:19:35 np0005475493 nova_compute[262220]:    <controller type="pci" model="pcie-root-port"/>
Oct  8 06:19:35 np0005475493 nova_compute[262220]:    <controller type="pci" model="pcie-root-port"/>
Oct  8 06:19:35 np0005475493 nova_compute[262220]:    <controller type="pci" model="pcie-root-port"/>
Oct  8 06:19:35 np0005475493 nova_compute[262220]:    <controller type="pci" model="pcie-root-port"/>
Oct  8 06:19:35 np0005475493 nova_compute[262220]:    <controller type="pci" model="pcie-root-port"/>
Oct  8 06:19:35 np0005475493 nova_compute[262220]:    <controller type="pci" model="pcie-root-port"/>
Oct  8 06:19:35 np0005475493 nova_compute[262220]:    <controller type="usb" index="0"/>
Oct  8 06:19:35 np0005475493 nova_compute[262220]:    <memballoon model="virtio">
Oct  8 06:19:35 np0005475493 nova_compute[262220]:      <stats period="10"/>
Oct  8 06:19:35 np0005475493 nova_compute[262220]:    </memballoon>
Oct  8 06:19:35 np0005475493 nova_compute[262220]:  </devices>
Oct  8 06:19:35 np0005475493 nova_compute[262220]: </domain>
Oct  8 06:19:35 np0005475493 nova_compute[262220]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  8 06:19:35 np0005475493 nova_compute[262220]: 2025-10-08 10:19:35.611 2 DEBUG nova.compute.manager [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] Preparing to wait for external event network-vif-plugged-29abf06b-1e1a-46cb-9cc1-7fa777795883 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  8 06:19:35 np0005475493 nova_compute[262220]: 2025-10-08 10:19:35.611 2 DEBUG oslo_concurrency.lockutils [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Acquiring lock "7d19d2c6-6de1-4096-99e4-24b4265b9c09-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  8 06:19:35 np0005475493 nova_compute[262220]: 2025-10-08 10:19:35.611 2 DEBUG oslo_concurrency.lockutils [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Lock "7d19d2c6-6de1-4096-99e4-24b4265b9c09-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  8 06:19:35 np0005475493 nova_compute[262220]: 2025-10-08 10:19:35.612 2 DEBUG oslo_concurrency.lockutils [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Lock "7d19d2c6-6de1-4096-99e4-24b4265b9c09-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  8 06:19:35 np0005475493 nova_compute[262220]: 2025-10-08 10:19:35.613 2 DEBUG nova.virt.libvirt.vif [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-08T10:19:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1442491120',display_name='tempest-TestNetworkBasicOps-server-1442491120',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1442491120',id=11,image_ref='e5994bac-385d-4cfe-962e-386aa0559983',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBA5zqA1Qj/FXMxdyzpBTW0ZXp5DxknDQcIVK3ARN25T6VayPziIvkKCLWAtPemraMv4byPsH7lpRR4PeiITQ6eibmU22T/5fhhxWj1Ai2d949LVQyVHFvTo1rGRRAeVdbw==',key_name='tempest-TestNetworkBasicOps-1126023314',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0edb2c85b88b4b168a4b2e8c5ed4a05c',ramdisk_id='',reservation_id='r-zjf5kwx6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e5994bac-385d-4cfe-962e-386aa0559983',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-139500885',owner_user_name='tempest-TestNetworkBasicOps-139500885-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-08T10:19:27Z,user_data=None,user_id='d50b19166a7245e390a6e29682191263',uuid=7d19d2c6-6de1-4096-99e4-24b4265b9c09,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "29abf06b-1e1a-46cb-9cc1-7fa777795883", "address": "fa:16:3e:00:0d:2d", "network": {"id": "c18c7476-aaa8-4977-81b5-fb17e88446e2", "bridge": "br-int", "label": "tempest-network-smoke--1578716951", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "0edb2c85b88b4b168a4b2e8c5ed4a05c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29abf06b-1e", "ovs_interfaceid": "29abf06b-1e1a-46cb-9cc1-7fa777795883", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  8 06:19:35 np0005475493 nova_compute[262220]: 2025-10-08 10:19:35.613 2 DEBUG nova.network.os_vif_util [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Converting VIF {"id": "29abf06b-1e1a-46cb-9cc1-7fa777795883", "address": "fa:16:3e:00:0d:2d", "network": {"id": "c18c7476-aaa8-4977-81b5-fb17e88446e2", "bridge": "br-int", "label": "tempest-network-smoke--1578716951", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0edb2c85b88b4b168a4b2e8c5ed4a05c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29abf06b-1e", "ovs_interfaceid": "29abf06b-1e1a-46cb-9cc1-7fa777795883", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  8 06:19:35 np0005475493 nova_compute[262220]: 2025-10-08 10:19:35.614 2 DEBUG nova.network.os_vif_util [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:00:0d:2d,bridge_name='br-int',has_traffic_filtering=True,id=29abf06b-1e1a-46cb-9cc1-7fa777795883,network=Network(c18c7476-aaa8-4977-81b5-fb17e88446e2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap29abf06b-1e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  8 06:19:35 np0005475493 nova_compute[262220]: 2025-10-08 10:19:35.614 2 DEBUG os_vif [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:00:0d:2d,bridge_name='br-int',has_traffic_filtering=True,id=29abf06b-1e1a-46cb-9cc1-7fa777795883,network=Network(c18c7476-aaa8-4977-81b5-fb17e88446e2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap29abf06b-1e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  8 06:19:35 np0005475493 nova_compute[262220]: 2025-10-08 10:19:35.615 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:19:35 np0005475493 nova_compute[262220]: 2025-10-08 10:19:35.616 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  8 06:19:35 np0005475493 nova_compute[262220]: 2025-10-08 10:19:35.616 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  8 06:19:35 np0005475493 nova_compute[262220]: 2025-10-08 10:19:35.620 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:19:35 np0005475493 nova_compute[262220]: 2025-10-08 10:19:35.621 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap29abf06b-1e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  8 06:19:35 np0005475493 nova_compute[262220]: 2025-10-08 10:19:35.621 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap29abf06b-1e, col_values=(('external_ids', {'iface-id': '29abf06b-1e1a-46cb-9cc1-7fa777795883', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:00:0d:2d', 'vm-uuid': '7d19d2c6-6de1-4096-99e4-24b4265b9c09'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  8 06:19:35 np0005475493 nova_compute[262220]: 2025-10-08 10:19:35.623 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:19:35 np0005475493 NetworkManager[44872]: <info>  [1759918775.6241] manager: (tap29abf06b-1e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/43)
Oct  8 06:19:35 np0005475493 nova_compute[262220]: 2025-10-08 10:19:35.625 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  8 06:19:35 np0005475493 nova_compute[262220]: 2025-10-08 10:19:35.630 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:19:35 np0005475493 nova_compute[262220]: 2025-10-08 10:19:35.632 2 INFO os_vif [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:00:0d:2d,bridge_name='br-int',has_traffic_filtering=True,id=29abf06b-1e1a-46cb-9cc1-7fa777795883,network=Network(c18c7476-aaa8-4977-81b5-fb17e88446e2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap29abf06b-1e')#033[00m
Oct  8 06:19:35 np0005475493 nova_compute[262220]: 2025-10-08 10:19:35.696 2 DEBUG nova.virt.libvirt.driver [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  8 06:19:35 np0005475493 nova_compute[262220]: 2025-10-08 10:19:35.697 2 DEBUG nova.virt.libvirt.driver [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  8 06:19:35 np0005475493 nova_compute[262220]: 2025-10-08 10:19:35.697 2 DEBUG nova.virt.libvirt.driver [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] No VIF found with MAC fa:16:3e:00:0d:2d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  8 06:19:35 np0005475493 nova_compute[262220]: 2025-10-08 10:19:35.698 2 INFO nova.virt.libvirt.driver [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] Using config drive#033[00m
Oct  8 06:19:35 np0005475493 nova_compute[262220]: 2025-10-08 10:19:35.731 2 DEBUG nova.storage.rbd_utils [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] rbd image 7d19d2c6-6de1-4096-99e4-24b4265b9c09_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  8 06:19:35 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:19:35] "GET /metrics HTTP/1.1" 200 48478 "" "Prometheus/2.51.0"
Oct  8 06:19:35 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:19:35] "GET /metrics HTTP/1.1" 200 48478 "" "Prometheus/2.51.0"
Oct  8 06:19:35 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:19:35 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:19:35 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:19:35.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:19:36 np0005475493 nova_compute[262220]: 2025-10-08 10:19:36.059 2 DEBUG nova.network.neutron [req-4e2d0368-0e8a-489f-aa2c-93dd4a7774e4 req-7dc0b697-ecc2-4c17-976e-63662e534cb4 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] Updated VIF entry in instance network info cache for port 29abf06b-1e1a-46cb-9cc1-7fa777795883. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  8 06:19:36 np0005475493 nova_compute[262220]: 2025-10-08 10:19:36.060 2 DEBUG nova.network.neutron [req-4e2d0368-0e8a-489f-aa2c-93dd4a7774e4 req-7dc0b697-ecc2-4c17-976e-63662e534cb4 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] Updating instance_info_cache with network_info: [{"id": "29abf06b-1e1a-46cb-9cc1-7fa777795883", "address": "fa:16:3e:00:0d:2d", "network": {"id": "c18c7476-aaa8-4977-81b5-fb17e88446e2", "bridge": "br-int", "label": "tempest-network-smoke--1578716951", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0edb2c85b88b4b168a4b2e8c5ed4a05c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29abf06b-1e", "ovs_interfaceid": "29abf06b-1e1a-46cb-9cc1-7fa777795883", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  8 06:19:36 np0005475493 nova_compute[262220]: 2025-10-08 10:19:36.076 2 DEBUG oslo_concurrency.lockutils [req-4e2d0368-0e8a-489f-aa2c-93dd4a7774e4 req-7dc0b697-ecc2-4c17-976e-63662e534cb4 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Releasing lock "refresh_cache-7d19d2c6-6de1-4096-99e4-24b4265b9c09" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  8 06:19:36 np0005475493 nova_compute[262220]: 2025-10-08 10:19:36.436 2 INFO nova.virt.libvirt.driver [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] Creating config drive at /var/lib/nova/instances/7d19d2c6-6de1-4096-99e4-24b4265b9c09/disk.config#033[00m
Oct  8 06:19:36 np0005475493 nova_compute[262220]: 2025-10-08 10:19:36.441 2 DEBUG oslo_concurrency.processutils [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7d19d2c6-6de1-4096-99e4-24b4265b9c09/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp0qvqv28s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  8 06:19:36 np0005475493 nova_compute[262220]: 2025-10-08 10:19:36.573 2 DEBUG oslo_concurrency.processutils [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7d19d2c6-6de1-4096-99e4-24b4265b9c09/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp0qvqv28s" returned: 0 in 0.132s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  8 06:19:36 np0005475493 nova_compute[262220]: 2025-10-08 10:19:36.608 2 DEBUG nova.storage.rbd_utils [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] rbd image 7d19d2c6-6de1-4096-99e4-24b4265b9c09_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  8 06:19:36 np0005475493 nova_compute[262220]: 2025-10-08 10:19:36.612 2 DEBUG oslo_concurrency.processutils [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/7d19d2c6-6de1-4096-99e4-24b4265b9c09/disk.config 7d19d2c6-6de1-4096-99e4-24b4265b9c09_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  8 06:19:36 np0005475493 nova_compute[262220]: 2025-10-08 10:19:36.788 2 DEBUG oslo_concurrency.processutils [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/7d19d2c6-6de1-4096-99e4-24b4265b9c09/disk.config 7d19d2c6-6de1-4096-99e4-24b4265b9c09_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.176s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  8 06:19:36 np0005475493 nova_compute[262220]: 2025-10-08 10:19:36.789 2 INFO nova.virt.libvirt.driver [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] Deleting local config drive /var/lib/nova/instances/7d19d2c6-6de1-4096-99e4-24b4265b9c09/disk.config because it was imported into RBD.#033[00m
Oct  8 06:19:36 np0005475493 systemd[1]: Starting libvirt secret daemon...
Oct  8 06:19:36 np0005475493 systemd[1]: Started libvirt secret daemon.
Oct  8 06:19:36 np0005475493 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  8 06:19:36 np0005475493 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  8 06:19:36 np0005475493 kernel: tap29abf06b-1e: entered promiscuous mode
Oct  8 06:19:36 np0005475493 NetworkManager[44872]: <info>  [1759918776.8961] manager: (tap29abf06b-1e): new Tun device (/org/freedesktop/NetworkManager/Devices/44)
Oct  8 06:19:36 np0005475493 nova_compute[262220]: 2025-10-08 10:19:36.898 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:19:36 np0005475493 ovn_controller[153187]: 2025-10-08T10:19:36Z|00058|binding|INFO|Claiming lport 29abf06b-1e1a-46cb-9cc1-7fa777795883 for this chassis.
Oct  8 06:19:36 np0005475493 ovn_controller[153187]: 2025-10-08T10:19:36Z|00059|binding|INFO|29abf06b-1e1a-46cb-9cc1-7fa777795883: Claiming fa:16:3e:00:0d:2d 10.100.0.8
Oct  8 06:19:36 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:19:36.919 163175 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:00:0d:2d 10.100.0.8'], port_security=['fa:16:3e:00:0d:2d 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '7d19d2c6-6de1-4096-99e4-24b4265b9c09', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c18c7476-aaa8-4977-81b5-fb17e88446e2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0edb2c85b88b4b168a4b2e8c5ed4a05c', 'neutron:revision_number': '2', 'neutron:security_group_ids': '19e068da-96ae-4c4d-8c61-2ea91c3392b7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1ff1baa8-ffa0-48d3-9c93-32e63e4450d8, chassis=[<ovs.db.idl.Row object at 0x7f191f105a60>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f191f105a60>], logical_port=29abf06b-1e1a-46cb-9cc1-7fa777795883) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  8 06:19:36 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:19:36.920 163175 INFO neutron.agent.ovn.metadata.agent [-] Port 29abf06b-1e1a-46cb-9cc1-7fa777795883 in datapath c18c7476-aaa8-4977-81b5-fb17e88446e2 bound to our chassis#033[00m
Oct  8 06:19:36 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:19:36.922 163175 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c18c7476-aaa8-4977-81b5-fb17e88446e2#033[00m
Oct  8 06:19:36 np0005475493 systemd-udevd[281306]: Network interface NamePolicy= disabled on kernel command line.
Oct  8 06:19:36 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:19:36.935 267781 DEBUG oslo.privsep.daemon [-] privsep: reply[fcb0bca7-9f2c-4751-be72-c3b29ed41703]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  8 06:19:36 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:19:36.936 163175 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapc18c7476-a1 in ovnmeta-c18c7476-aaa8-4977-81b5-fb17e88446e2 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  8 06:19:36 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:19:36.937 267781 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapc18c7476-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  8 06:19:36 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:19:36.937 267781 DEBUG oslo.privsep.daemon [-] privsep: reply[ef575866-8c57-44e2-b1fd-4fa3305662fb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  8 06:19:36 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:19:36.938 267781 DEBUG oslo.privsep.daemon [-] privsep: reply[bb7d8536-7a7f-482f-884d-a7ed5b2e95d3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  8 06:19:36 np0005475493 systemd-machined[216030]: New machine qemu-3-instance-0000000b.
Oct  8 06:19:36 np0005475493 NetworkManager[44872]: <info>  [1759918776.9451] device (tap29abf06b-1e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  8 06:19:36 np0005475493 NetworkManager[44872]: <info>  [1759918776.9460] device (tap29abf06b-1e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  8 06:19:36 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:19:36.957 163290 DEBUG oslo.privsep.daemon [-] privsep: reply[6471832a-e78d-4706-a998-9bb9df0c1f9a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  8 06:19:36 np0005475493 nova_compute[262220]: 2025-10-08 10:19:36.972 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:19:36 np0005475493 systemd[1]: Started Virtual Machine qemu-3-instance-0000000b.
Oct  8 06:19:36 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:19:36.978 267781 DEBUG oslo.privsep.daemon [-] privsep: reply[93f9d3aa-dd3a-41ee-a58b-a6d69e28dd8a]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  8 06:19:36 np0005475493 ovn_controller[153187]: 2025-10-08T10:19:36Z|00060|binding|INFO|Setting lport 29abf06b-1e1a-46cb-9cc1-7fa777795883 ovn-installed in OVS
Oct  8 06:19:36 np0005475493 ovn_controller[153187]: 2025-10-08T10:19:36Z|00061|binding|INFO|Setting lport 29abf06b-1e1a-46cb-9cc1-7fa777795883 up in Southbound
Oct  8 06:19:36 np0005475493 nova_compute[262220]: 2025-10-08 10:19:36.980 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:19:37 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:19:37.012 267799 DEBUG oslo.privsep.daemon [-] privsep: reply[89456e48-3177-48f1-b4b9-85599e84c561]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  8 06:19:37 np0005475493 NetworkManager[44872]: <info>  [1759918777.0179] manager: (tapc18c7476-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/45)
Oct  8 06:19:37 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:19:37.016 267781 DEBUG oslo.privsep.daemon [-] privsep: reply[fa53b7c8-f9f9-4413-a7e0-969a3c48d752]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  8 06:19:37 np0005475493 systemd-udevd[281311]: Network interface NamePolicy= disabled on kernel command line.
Oct  8 06:19:37 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:19:37 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:19:37 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:19:37.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:19:37 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:19:37.051 267799 DEBUG oslo.privsep.daemon [-] privsep: reply[52c4c23d-bfe6-4965-ad17-a7256ed516c6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  8 06:19:37 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:19:37.053 267799 DEBUG oslo.privsep.daemon [-] privsep: reply[de94b646-07e2-4401-ae5c-ce25de9c93a4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  8 06:19:37 np0005475493 NetworkManager[44872]: <info>  [1759918777.0792] device (tapc18c7476-a0): carrier: link connected
Oct  8 06:19:37 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:19:37.085 267799 DEBUG oslo.privsep.daemon [-] privsep: reply[a997c89a-b35e-4542-bdcb-426e9be5690d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  8 06:19:37 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:19:37.103 267781 DEBUG oslo.privsep.daemon [-] privsep: reply[6cebdbe6-80d1-4d63-9f95-76b48119cf1b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc18c7476-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:18:d8:a7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 23], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 473571, 'reachable_time': 17333, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 281340, 'error': None, 'target': 'ovnmeta-c18c7476-aaa8-4977-81b5-fb17e88446e2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  8 06:19:37 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:19:37.123 267781 DEBUG oslo.privsep.daemon [-] privsep: reply[e761b673-2425-466c-8246-666cfa8876e7]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe18:d8a7'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 473571, 'tstamp': 473571}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 281341, 'error': None, 'target': 'ovnmeta-c18c7476-aaa8-4977-81b5-fb17e88446e2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  8 06:19:37 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:19:37.141 267781 DEBUG oslo.privsep.daemon [-] privsep: reply[e6537f43-cc12-454f-85a0-5c84df63d9fd]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc18c7476-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:18:d8:a7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 23], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 473571, 'reachable_time': 17333, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 281342, 'error': None, 'target': 'ovnmeta-c18c7476-aaa8-4977-81b5-fb17e88446e2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  8 06:19:37 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:19:37.173 267781 DEBUG oslo.privsep.daemon [-] privsep: reply[7aeadf71-29c4-4127-b7ea-1e432f128361]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  8 06:19:37 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:19:37.186Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:19:37 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:19:37.239 267781 DEBUG oslo.privsep.daemon [-] privsep: reply[71f16e09-ff6d-4832-855e-1eeb2c2967d5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  8 06:19:37 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:19:37.241 163175 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc18c7476-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  8 06:19:37 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:19:37.241 163175 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  8 06:19:37 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:19:37.242 163175 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc18c7476-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  8 06:19:37 np0005475493 NetworkManager[44872]: <info>  [1759918777.2445] manager: (tapc18c7476-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/46)
Oct  8 06:19:37 np0005475493 kernel: tapc18c7476-a0: entered promiscuous mode
Oct  8 06:19:37 np0005475493 nova_compute[262220]: 2025-10-08 10:19:37.244 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:19:37 np0005475493 nova_compute[262220]: 2025-10-08 10:19:37.247 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:19:37 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:19:37.248 163175 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc18c7476-a0, col_values=(('external_ids', {'iface-id': '10afe0a1-7000-43ca-a48a-2022b8edbb06'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  8 06:19:37 np0005475493 nova_compute[262220]: 2025-10-08 10:19:37.249 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:19:37 np0005475493 ovn_controller[153187]: 2025-10-08T10:19:37Z|00062|binding|INFO|Releasing lport 10afe0a1-7000-43ca-a48a-2022b8edbb06 from this chassis (sb_readonly=0)
Oct  8 06:19:37 np0005475493 nova_compute[262220]: 2025-10-08 10:19:37.263 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:19:37 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:19:37.264 163175 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/c18c7476-aaa8-4977-81b5-fb17e88446e2.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/c18c7476-aaa8-4977-81b5-fb17e88446e2.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  8 06:19:37 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:19:37.265 267781 DEBUG oslo.privsep.daemon [-] privsep: reply[e395c04b-8b74-4eda-a298-b90d06d17a3c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  8 06:19:37 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:19:37.266 163175 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  8 06:19:37 np0005475493 ovn_metadata_agent[163169]: global
Oct  8 06:19:37 np0005475493 ovn_metadata_agent[163169]:    log         /dev/log local0 debug
Oct  8 06:19:37 np0005475493 ovn_metadata_agent[163169]:    log-tag     haproxy-metadata-proxy-c18c7476-aaa8-4977-81b5-fb17e88446e2
Oct  8 06:19:37 np0005475493 ovn_metadata_agent[163169]:    user        root
Oct  8 06:19:37 np0005475493 ovn_metadata_agent[163169]:    group       root
Oct  8 06:19:37 np0005475493 ovn_metadata_agent[163169]:    maxconn     1024
Oct  8 06:19:37 np0005475493 ovn_metadata_agent[163169]:    pidfile     /var/lib/neutron/external/pids/c18c7476-aaa8-4977-81b5-fb17e88446e2.pid.haproxy
Oct  8 06:19:37 np0005475493 ovn_metadata_agent[163169]:    daemon
Oct  8 06:19:37 np0005475493 ovn_metadata_agent[163169]: 
Oct  8 06:19:37 np0005475493 ovn_metadata_agent[163169]: defaults
Oct  8 06:19:37 np0005475493 ovn_metadata_agent[163169]:    log global
Oct  8 06:19:37 np0005475493 ovn_metadata_agent[163169]:    mode http
Oct  8 06:19:37 np0005475493 ovn_metadata_agent[163169]:    option httplog
Oct  8 06:19:37 np0005475493 ovn_metadata_agent[163169]:    option dontlognull
Oct  8 06:19:37 np0005475493 ovn_metadata_agent[163169]:    option http-server-close
Oct  8 06:19:37 np0005475493 ovn_metadata_agent[163169]:    option forwardfor
Oct  8 06:19:37 np0005475493 ovn_metadata_agent[163169]:    retries                 3
Oct  8 06:19:37 np0005475493 ovn_metadata_agent[163169]:    timeout http-request    30s
Oct  8 06:19:37 np0005475493 ovn_metadata_agent[163169]:    timeout connect         30s
Oct  8 06:19:37 np0005475493 ovn_metadata_agent[163169]:    timeout client          32s
Oct  8 06:19:37 np0005475493 ovn_metadata_agent[163169]:    timeout server          32s
Oct  8 06:19:37 np0005475493 ovn_metadata_agent[163169]:    timeout http-keep-alive 30s
Oct  8 06:19:37 np0005475493 ovn_metadata_agent[163169]: 
Oct  8 06:19:37 np0005475493 ovn_metadata_agent[163169]: 
Oct  8 06:19:37 np0005475493 ovn_metadata_agent[163169]: listen listener
Oct  8 06:19:37 np0005475493 ovn_metadata_agent[163169]:    bind 169.254.169.254:80
Oct  8 06:19:37 np0005475493 ovn_metadata_agent[163169]:    server metadata /var/lib/neutron/metadata_proxy
Oct  8 06:19:37 np0005475493 ovn_metadata_agent[163169]:    http-request add-header X-OVN-Network-ID c18c7476-aaa8-4977-81b5-fb17e88446e2
Oct  8 06:19:37 np0005475493 ovn_metadata_agent[163169]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  8 06:19:37 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:19:37.266 163175 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-c18c7476-aaa8-4977-81b5-fb17e88446e2', 'env', 'PROCESS_TAG=haproxy-c18c7476-aaa8-4977-81b5-fb17e88446e2', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/c18c7476-aaa8-4977-81b5-fb17e88446e2.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  8 06:19:37 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1051: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Oct  8 06:19:37 np0005475493 podman[281417]: 2025-10-08 10:19:37.756812435 +0000 UTC m=+0.101084745 container create 7c14962a86f46dbbdab05db1187661162c285fa5905925d5090b257adb980bce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-c18c7476-aaa8-4977-81b5-fb17e88446e2, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct  8 06:19:37 np0005475493 podman[281417]: 2025-10-08 10:19:37.679294252 +0000 UTC m=+0.023566562 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct  8 06:19:37 np0005475493 systemd[1]: Started libpod-conmon-7c14962a86f46dbbdab05db1187661162c285fa5905925d5090b257adb980bce.scope.
Oct  8 06:19:37 np0005475493 systemd[1]: Started libcrun container.
Oct  8 06:19:37 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/669b7976e7c613a7666c66b557e5e70955b0380381cfc69b3da6fa8e03ce9e5e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  8 06:19:37 np0005475493 podman[281417]: 2025-10-08 10:19:37.94184591 +0000 UTC m=+0.286118250 container init 7c14962a86f46dbbdab05db1187661162c285fa5905925d5090b257adb980bce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-c18c7476-aaa8-4977-81b5-fb17e88446e2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  8 06:19:37 np0005475493 podman[281417]: 2025-10-08 10:19:37.948897088 +0000 UTC m=+0.293169408 container start 7c14962a86f46dbbdab05db1187661162c285fa5905925d5090b257adb980bce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-c18c7476-aaa8-4977-81b5-fb17e88446e2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001)
Oct  8 06:19:37 np0005475493 neutron-haproxy-ovnmeta-c18c7476-aaa8-4977-81b5-fb17e88446e2[281434]: [NOTICE]   (281438) : New worker (281441) forked
Oct  8 06:19:37 np0005475493 neutron-haproxy-ovnmeta-c18c7476-aaa8-4977-81b5-fb17e88446e2[281434]: [NOTICE]   (281438) : Loading success.
Oct  8 06:19:38 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:19:38 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:19:38 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:19:38.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:19:38 np0005475493 nova_compute[262220]: 2025-10-08 10:19:38.020 2 DEBUG nova.compute.manager [req-12d60aa9-c85e-4810-be5e-a1100d08d2cd req-ac6331dd-f224-4f53-8db7-8679ac79f73e 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] Received event network-vif-plugged-29abf06b-1e1a-46cb-9cc1-7fa777795883 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  8 06:19:38 np0005475493 nova_compute[262220]: 2025-10-08 10:19:38.022 2 DEBUG oslo_concurrency.lockutils [req-12d60aa9-c85e-4810-be5e-a1100d08d2cd req-ac6331dd-f224-4f53-8db7-8679ac79f73e 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Acquiring lock "7d19d2c6-6de1-4096-99e4-24b4265b9c09-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  8 06:19:38 np0005475493 nova_compute[262220]: 2025-10-08 10:19:38.022 2 DEBUG oslo_concurrency.lockutils [req-12d60aa9-c85e-4810-be5e-a1100d08d2cd req-ac6331dd-f224-4f53-8db7-8679ac79f73e 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Lock "7d19d2c6-6de1-4096-99e4-24b4265b9c09-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  8 06:19:38 np0005475493 nova_compute[262220]: 2025-10-08 10:19:38.022 2 DEBUG oslo_concurrency.lockutils [req-12d60aa9-c85e-4810-be5e-a1100d08d2cd req-ac6331dd-f224-4f53-8db7-8679ac79f73e 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Lock "7d19d2c6-6de1-4096-99e4-24b4265b9c09-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  8 06:19:38 np0005475493 nova_compute[262220]: 2025-10-08 10:19:38.023 2 DEBUG nova.compute.manager [req-12d60aa9-c85e-4810-be5e-a1100d08d2cd req-ac6331dd-f224-4f53-8db7-8679ac79f73e 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] Processing event network-vif-plugged-29abf06b-1e1a-46cb-9cc1-7fa777795883 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  8 06:19:38 np0005475493 nova_compute[262220]: 2025-10-08 10:19:38.133 2 DEBUG nova.compute.manager [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  8 06:19:38 np0005475493 nova_compute[262220]: 2025-10-08 10:19:38.134 2 DEBUG nova.virt.driver [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] Emitting event <LifecycleEvent: 1759918778.1329854, 7d19d2c6-6de1-4096-99e4-24b4265b9c09 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  8 06:19:38 np0005475493 nova_compute[262220]: 2025-10-08 10:19:38.135 2 INFO nova.compute.manager [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] VM Started (Lifecycle Event)#033[00m
Oct  8 06:19:38 np0005475493 nova_compute[262220]: 2025-10-08 10:19:38.138 2 DEBUG nova.virt.libvirt.driver [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  8 06:19:38 np0005475493 nova_compute[262220]: 2025-10-08 10:19:38.147 2 INFO nova.virt.libvirt.driver [-] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] Instance spawned successfully.#033[00m
Oct  8 06:19:38 np0005475493 nova_compute[262220]: 2025-10-08 10:19:38.149 2 DEBUG nova.virt.libvirt.driver [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  8 06:19:38 np0005475493 nova_compute[262220]: 2025-10-08 10:19:38.155 2 DEBUG nova.compute.manager [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  8 06:19:38 np0005475493 nova_compute[262220]: 2025-10-08 10:19:38.157 2 DEBUG nova.compute.manager [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  8 06:19:38 np0005475493 nova_compute[262220]: 2025-10-08 10:19:38.171 2 DEBUG nova.virt.libvirt.driver [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  8 06:19:38 np0005475493 nova_compute[262220]: 2025-10-08 10:19:38.172 2 DEBUG nova.virt.libvirt.driver [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  8 06:19:38 np0005475493 nova_compute[262220]: 2025-10-08 10:19:38.172 2 DEBUG nova.virt.libvirt.driver [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  8 06:19:38 np0005475493 nova_compute[262220]: 2025-10-08 10:19:38.172 2 DEBUG nova.virt.libvirt.driver [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  8 06:19:38 np0005475493 nova_compute[262220]: 2025-10-08 10:19:38.173 2 DEBUG nova.virt.libvirt.driver [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  8 06:19:38 np0005475493 nova_compute[262220]: 2025-10-08 10:19:38.173 2 DEBUG nova.virt.libvirt.driver [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  8 06:19:38 np0005475493 nova_compute[262220]: 2025-10-08 10:19:38.183 2 INFO nova.compute.manager [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  8 06:19:38 np0005475493 nova_compute[262220]: 2025-10-08 10:19:38.183 2 DEBUG nova.virt.driver [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] Emitting event <LifecycleEvent: 1759918778.1331172, 7d19d2c6-6de1-4096-99e4-24b4265b9c09 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  8 06:19:38 np0005475493 nova_compute[262220]: 2025-10-08 10:19:38.183 2 INFO nova.compute.manager [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] VM Paused (Lifecycle Event)#033[00m
Oct  8 06:19:38 np0005475493 nova_compute[262220]: 2025-10-08 10:19:38.209 2 DEBUG nova.compute.manager [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  8 06:19:38 np0005475493 nova_compute[262220]: 2025-10-08 10:19:38.213 2 DEBUG nova.virt.driver [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] Emitting event <LifecycleEvent: 1759918778.1372397, 7d19d2c6-6de1-4096-99e4-24b4265b9c09 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  8 06:19:38 np0005475493 nova_compute[262220]: 2025-10-08 10:19:38.213 2 INFO nova.compute.manager [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] VM Resumed (Lifecycle Event)#033[00m
Oct  8 06:19:38 np0005475493 nova_compute[262220]: 2025-10-08 10:19:38.235 2 DEBUG nova.compute.manager [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  8 06:19:38 np0005475493 nova_compute[262220]: 2025-10-08 10:19:38.239 2 DEBUG nova.compute.manager [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  8 06:19:38 np0005475493 nova_compute[262220]: 2025-10-08 10:19:38.242 2 INFO nova.compute.manager [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] Took 10.36 seconds to spawn the instance on the hypervisor.#033[00m
Oct  8 06:19:38 np0005475493 nova_compute[262220]: 2025-10-08 10:19:38.243 2 DEBUG nova.compute.manager [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  8 06:19:38 np0005475493 nova_compute[262220]: 2025-10-08 10:19:38.259 2 INFO nova.compute.manager [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  8 06:19:38 np0005475493 nova_compute[262220]: 2025-10-08 10:19:38.316 2 INFO nova.compute.manager [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] Took 11.31 seconds to build instance.#033[00m
Oct  8 06:19:38 np0005475493 nova_compute[262220]: 2025-10-08 10:19:38.332 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:19:38 np0005475493 nova_compute[262220]: 2025-10-08 10:19:38.344 2 DEBUG oslo_concurrency.lockutils [None req-3edbde0e-48a5-4a63-97da-d095ee48c43d d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Lock "7d19d2c6-6de1-4096-99e4-24b4265b9c09" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.491s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  8 06:19:39 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:19:38 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  8 06:19:39 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:19:38 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  8 06:19:39 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:19:38 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 06:19:39 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:19:39 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  8 06:19:39 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:19:39 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:19:39 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:19:39.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:19:39 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:19:39 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1052: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 1.8 MiB/s wr, 38 op/s
Oct  8 06:19:39 np0005475493 podman[281451]: 2025-10-08 10:19:39.908865456 +0000 UTC m=+0.065295109 container health_status 2ac58bf9d2c7421f24478941b0f23af12cc30298ac84467db16bf52ecb1157c3 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  8 06:19:40 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:19:40 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:19:40 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:19:40.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:19:40 np0005475493 nova_compute[262220]: 2025-10-08 10:19:40.120 2 DEBUG nova.compute.manager [req-91f62958-9760-4588-b93d-4861031fbb62 req-06e34ce3-c586-4cb9-86b5-afe34fdba1af 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] Received event network-vif-plugged-29abf06b-1e1a-46cb-9cc1-7fa777795883 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  8 06:19:40 np0005475493 nova_compute[262220]: 2025-10-08 10:19:40.120 2 DEBUG oslo_concurrency.lockutils [req-91f62958-9760-4588-b93d-4861031fbb62 req-06e34ce3-c586-4cb9-86b5-afe34fdba1af 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Acquiring lock "7d19d2c6-6de1-4096-99e4-24b4265b9c09-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  8 06:19:40 np0005475493 nova_compute[262220]: 2025-10-08 10:19:40.120 2 DEBUG oslo_concurrency.lockutils [req-91f62958-9760-4588-b93d-4861031fbb62 req-06e34ce3-c586-4cb9-86b5-afe34fdba1af 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Lock "7d19d2c6-6de1-4096-99e4-24b4265b9c09-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  8 06:19:40 np0005475493 nova_compute[262220]: 2025-10-08 10:19:40.121 2 DEBUG oslo_concurrency.lockutils [req-91f62958-9760-4588-b93d-4861031fbb62 req-06e34ce3-c586-4cb9-86b5-afe34fdba1af 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Lock "7d19d2c6-6de1-4096-99e4-24b4265b9c09-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  8 06:19:40 np0005475493 nova_compute[262220]: 2025-10-08 10:19:40.121 2 DEBUG nova.compute.manager [req-91f62958-9760-4588-b93d-4861031fbb62 req-06e34ce3-c586-4cb9-86b5-afe34fdba1af 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] No waiting events found dispatching network-vif-plugged-29abf06b-1e1a-46cb-9cc1-7fa777795883 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  8 06:19:40 np0005475493 nova_compute[262220]: 2025-10-08 10:19:40.121 2 WARNING nova.compute.manager [req-91f62958-9760-4588-b93d-4861031fbb62 req-06e34ce3-c586-4cb9-86b5-afe34fdba1af 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] Received unexpected event network-vif-plugged-29abf06b-1e1a-46cb-9cc1-7fa777795883 for instance with vm_state active and task_state None.#033[00m
Oct  8 06:19:40 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[103738]: logger=infra.usagestats t=2025-10-08T10:19:40.478312924Z level=info msg="Usage stats are ready to report"
Oct  8 06:19:40 np0005475493 nova_compute[262220]: 2025-10-08 10:19:40.623 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:19:41 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:19:41 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:19:41 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:19:41.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:19:41 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1053: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 8.4 KiB/s rd, 12 KiB/s wr, 10 op/s
Oct  8 06:19:41 np0005475493 nova_compute[262220]: 2025-10-08 10:19:41.887 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:19:41 np0005475493 nova_compute[262220]: 2025-10-08 10:19:41.888 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Oct  8 06:19:42 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:19:42 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:19:42 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:19:42.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:19:42 np0005475493 NetworkManager[44872]: <info>  [1759918782.8475] manager: (patch-provnet-5eda3e1f-dd4f-4c7d-b5fb-d8c0d9996d6d-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/47)
Oct  8 06:19:42 np0005475493 NetworkManager[44872]: <info>  [1759918782.8488] manager: (patch-br-int-to-provnet-5eda3e1f-dd4f-4c7d-b5fb-d8c0d9996d6d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/48)
Oct  8 06:19:42 np0005475493 nova_compute[262220]: 2025-10-08 10:19:42.846 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:19:42 np0005475493 ovn_controller[153187]: 2025-10-08T10:19:42Z|00063|binding|INFO|Releasing lport 10afe0a1-7000-43ca-a48a-2022b8edbb06 from this chassis (sb_readonly=0)
Oct  8 06:19:42 np0005475493 nova_compute[262220]: 2025-10-08 10:19:42.885 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:19:42 np0005475493 ovn_controller[153187]: 2025-10-08T10:19:42Z|00064|binding|INFO|Releasing lport 10afe0a1-7000-43ca-a48a-2022b8edbb06 from this chassis (sb_readonly=0)
Oct  8 06:19:42 np0005475493 nova_compute[262220]: 2025-10-08 10:19:42.889 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:19:43 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:19:43 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:19:43 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:19:43.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:19:43 np0005475493 nova_compute[262220]: 2025-10-08 10:19:43.334 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:19:43 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1054: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 8.4 KiB/s rd, 12 KiB/s wr, 10 op/s
Oct  8 06:19:43 np0005475493 nova_compute[262220]: 2025-10-08 10:19:43.619 2 DEBUG nova.compute.manager [req-2a91aad3-0a62-4f7b-b0d7-bbd1ce05ae11 req-478e4d67-69ce-482b-ba5f-f6b80e2eb840 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] Received event network-changed-29abf06b-1e1a-46cb-9cc1-7fa777795883 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  8 06:19:43 np0005475493 nova_compute[262220]: 2025-10-08 10:19:43.620 2 DEBUG nova.compute.manager [req-2a91aad3-0a62-4f7b-b0d7-bbd1ce05ae11 req-478e4d67-69ce-482b-ba5f-f6b80e2eb840 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] Refreshing instance network info cache due to event network-changed-29abf06b-1e1a-46cb-9cc1-7fa777795883. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  8 06:19:43 np0005475493 nova_compute[262220]: 2025-10-08 10:19:43.620 2 DEBUG oslo_concurrency.lockutils [req-2a91aad3-0a62-4f7b-b0d7-bbd1ce05ae11 req-478e4d67-69ce-482b-ba5f-f6b80e2eb840 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Acquiring lock "refresh_cache-7d19d2c6-6de1-4096-99e4-24b4265b9c09" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  8 06:19:43 np0005475493 nova_compute[262220]: 2025-10-08 10:19:43.620 2 DEBUG oslo_concurrency.lockutils [req-2a91aad3-0a62-4f7b-b0d7-bbd1ce05ae11 req-478e4d67-69ce-482b-ba5f-f6b80e2eb840 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Acquired lock "refresh_cache-7d19d2c6-6de1-4096-99e4-24b4265b9c09" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  8 06:19:43 np0005475493 nova_compute[262220]: 2025-10-08 10:19:43.620 2 DEBUG nova.network.neutron [req-2a91aad3-0a62-4f7b-b0d7-bbd1ce05ae11 req-478e4d67-69ce-482b-ba5f-f6b80e2eb840 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] Refreshing network info cache for port 29abf06b-1e1a-46cb-9cc1-7fa777795883 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  8 06:19:44 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:19:43 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  8 06:19:44 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:19:43 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  8 06:19:44 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:19:43 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 06:19:44 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:19:44 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  8 06:19:44 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:19:44 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:19:44 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:19:44.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:19:44 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:19:44 np0005475493 nova_compute[262220]: 2025-10-08 10:19:44.648 2 DEBUG nova.network.neutron [req-2a91aad3-0a62-4f7b-b0d7-bbd1ce05ae11 req-478e4d67-69ce-482b-ba5f-f6b80e2eb840 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] Updated VIF entry in instance network info cache for port 29abf06b-1e1a-46cb-9cc1-7fa777795883. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  8 06:19:44 np0005475493 nova_compute[262220]: 2025-10-08 10:19:44.649 2 DEBUG nova.network.neutron [req-2a91aad3-0a62-4f7b-b0d7-bbd1ce05ae11 req-478e4d67-69ce-482b-ba5f-f6b80e2eb840 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] Updating instance_info_cache with network_info: [{"id": "29abf06b-1e1a-46cb-9cc1-7fa777795883", "address": "fa:16:3e:00:0d:2d", "network": {"id": "c18c7476-aaa8-4977-81b5-fb17e88446e2", "bridge": "br-int", "label": "tempest-network-smoke--1578716951", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0edb2c85b88b4b168a4b2e8c5ed4a05c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29abf06b-1e", "ovs_interfaceid": "29abf06b-1e1a-46cb-9cc1-7fa777795883", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  8 06:19:44 np0005475493 nova_compute[262220]: 2025-10-08 10:19:44.666 2 DEBUG oslo_concurrency.lockutils [req-2a91aad3-0a62-4f7b-b0d7-bbd1ce05ae11 req-478e4d67-69ce-482b-ba5f-f6b80e2eb840 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Releasing lock "refresh_cache-7d19d2c6-6de1-4096-99e4-24b4265b9c09" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  8 06:19:45 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:19:45 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 06:19:45 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:19:45.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 06:19:45 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1055: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 75 op/s
Oct  8 06:19:45 np0005475493 nova_compute[262220]: 2025-10-08 10:19:45.663 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:19:45 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:19:45] "GET /metrics HTTP/1.1" 200 48478 "" "Prometheus/2.51.0"
Oct  8 06:19:45 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:19:45] "GET /metrics HTTP/1.1" 200 48478 "" "Prometheus/2.51.0"
Oct  8 06:19:46 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:19:46 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:19:46 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:19:46.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:19:46 np0005475493 nova_compute[262220]: 2025-10-08 10:19:46.905 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:19:47 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:19:47 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:19:47 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:19:47.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:19:47 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:19:47.187Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:19:47 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1056: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Oct  8 06:19:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] Optimize plan auto_2025-10-08_10:19:47
Oct  8 06:19:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  8 06:19:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] do_upmap
Oct  8 06:19:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] pools ['backups', 'images', 'default.rgw.log', 'vms', 'cephfs.cephfs.meta', 'default.rgw.meta', 'volumes', '.rgw.root', '.mgr', 'cephfs.cephfs.data', '.nfs', 'default.rgw.control']
Oct  8 06:19:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] prepared 0/10 upmap changes
Oct  8 06:19:47 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  8 06:19:47 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  8 06:19:47 np0005475493 nova_compute[262220]: 2025-10-08 10:19:47.886 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:19:47 np0005475493 nova_compute[262220]: 2025-10-08 10:19:47.887 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:19:47 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 06:19:47 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 06:19:48 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:19:48 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:19:48 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:19:48.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:19:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] _maybe_adjust
Oct  8 06:19:48 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 06:19:48 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 06:19:48 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 06:19:48 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 06:19:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  8 06:19:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  8 06:19:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:19:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  8 06:19:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:19:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00034841348814872695 of space, bias 1.0, pg target 0.10452404644461809 quantized to 32 (current 32)
Oct  8 06:19:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:19:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 06:19:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:19:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 06:19:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:19:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Oct  8 06:19:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:19:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  8 06:19:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:19:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 06:19:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:19:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct  8 06:19:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:19:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  8 06:19:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:19:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 06:19:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:19:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  8 06:19:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:19:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  8 06:19:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  8 06:19:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  8 06:19:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  8 06:19:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  8 06:19:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  8 06:19:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  8 06:19:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  8 06:19:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  8 06:19:48 np0005475493 nova_compute[262220]: 2025-10-08 10:19:48.337 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:19:48 np0005475493 ceph-mgr[73869]: [dashboard INFO request] [192.168.122.100:34350] [POST] [200] [0.002s] [4.0B] [4da8c34a-8050-4a36-a28d-df46569e208a] /api/prometheus_receiver
Oct  8 06:19:49 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:19:48 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  8 06:19:49 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:19:48 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  8 06:19:49 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:19:48 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 06:19:49 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:19:49 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  8 06:19:49 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:19:49 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:19:49 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:19:49.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:19:49 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:19:49 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1057: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 76 op/s
Oct  8 06:19:49 np0005475493 nova_compute[262220]: 2025-10-08 10:19:49.882 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:19:49 np0005475493 nova_compute[262220]: 2025-10-08 10:19:49.885 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:19:49 np0005475493 nova_compute[262220]: 2025-10-08 10:19:49.885 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  8 06:19:49 np0005475493 nova_compute[262220]: 2025-10-08 10:19:49.886 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  8 06:19:50 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:19:50 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:19:50 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:19:50.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:19:50 np0005475493 nova_compute[262220]: 2025-10-08 10:19:50.507 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Acquiring lock "refresh_cache-7d19d2c6-6de1-4096-99e4-24b4265b9c09" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  8 06:19:50 np0005475493 nova_compute[262220]: 2025-10-08 10:19:50.507 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Acquired lock "refresh_cache-7d19d2c6-6de1-4096-99e4-24b4265b9c09" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  8 06:19:50 np0005475493 nova_compute[262220]: 2025-10-08 10:19:50.507 2 DEBUG nova.network.neutron [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  8 06:19:50 np0005475493 nova_compute[262220]: 2025-10-08 10:19:50.507 2 DEBUG nova.objects.instance [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 7d19d2c6-6de1-4096-99e4-24b4265b9c09 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  8 06:19:50 np0005475493 nova_compute[262220]: 2025-10-08 10:19:50.667 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:19:50 np0005475493 podman[281509]: 2025-10-08 10:19:50.920465569 +0000 UTC m=+0.084836901 container health_status 750e81e9592502f2fa35d047c8ae9bd75eb38ed415148db558454f5281397e7f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Oct  8 06:19:50 np0005475493 nova_compute[262220]: 2025-10-08 10:19:50.992 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:19:50 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:19:50.992 163175 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=13, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '2a:d6:ca', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'b2:8b:1e:40:84:a3'}, ipsec=False) old=SB_Global(nb_cfg=12) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  8 06:19:50 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:19:50.993 163175 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  8 06:19:51 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:19:51 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:19:51 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:19:51.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:19:51 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1058: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 65 op/s
Oct  8 06:19:51 np0005475493 ovn_controller[153187]: 2025-10-08T10:19:51Z|00010|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:00:0d:2d 10.100.0.8
Oct  8 06:19:51 np0005475493 ovn_controller[153187]: 2025-10-08T10:19:51Z|00011|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:00:0d:2d 10.100.0.8
Oct  8 06:19:51 np0005475493 nova_compute[262220]: 2025-10-08 10:19:51.694 2 DEBUG nova.network.neutron [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] Updating instance_info_cache with network_info: [{"id": "29abf06b-1e1a-46cb-9cc1-7fa777795883", "address": "fa:16:3e:00:0d:2d", "network": {"id": "c18c7476-aaa8-4977-81b5-fb17e88446e2", "bridge": "br-int", "label": "tempest-network-smoke--1578716951", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0edb2c85b88b4b168a4b2e8c5ed4a05c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29abf06b-1e", "ovs_interfaceid": "29abf06b-1e1a-46cb-9cc1-7fa777795883", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  8 06:19:51 np0005475493 nova_compute[262220]: 2025-10-08 10:19:51.715 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Releasing lock "refresh_cache-7d19d2c6-6de1-4096-99e4-24b4265b9c09" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  8 06:19:51 np0005475493 nova_compute[262220]: 2025-10-08 10:19:51.716 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  8 06:19:51 np0005475493 nova_compute[262220]: 2025-10-08 10:19:51.716 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:19:51 np0005475493 nova_compute[262220]: 2025-10-08 10:19:51.716 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:19:51 np0005475493 nova_compute[262220]: 2025-10-08 10:19:51.818 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  8 06:19:51 np0005475493 nova_compute[262220]: 2025-10-08 10:19:51.819 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  8 06:19:51 np0005475493 nova_compute[262220]: 2025-10-08 10:19:51.819 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  8 06:19:51 np0005475493 nova_compute[262220]: 2025-10-08 10:19:51.820 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  8 06:19:51 np0005475493 nova_compute[262220]: 2025-10-08 10:19:51.820 2 DEBUG oslo_concurrency.processutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  8 06:19:52 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:19:52 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:19:52 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:19:52.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:19:52 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct  8 06:19:52 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2581363615' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  8 06:19:52 np0005475493 nova_compute[262220]: 2025-10-08 10:19:52.260 2 DEBUG oslo_concurrency.processutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  8 06:19:52 np0005475493 nova_compute[262220]: 2025-10-08 10:19:52.376 2 DEBUG nova.virt.libvirt.driver [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] skipping disk for instance-0000000b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  8 06:19:52 np0005475493 nova_compute[262220]: 2025-10-08 10:19:52.377 2 DEBUG nova.virt.libvirt.driver [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] skipping disk for instance-0000000b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  8 06:19:52 np0005475493 nova_compute[262220]: 2025-10-08 10:19:52.624 2 WARNING nova.virt.libvirt.driver [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  8 06:19:52 np0005475493 nova_compute[262220]: 2025-10-08 10:19:52.626 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4333MB free_disk=59.96738052368164GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  8 06:19:52 np0005475493 nova_compute[262220]: 2025-10-08 10:19:52.626 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  8 06:19:52 np0005475493 nova_compute[262220]: 2025-10-08 10:19:52.627 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  8 06:19:52 np0005475493 nova_compute[262220]: 2025-10-08 10:19:52.744 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Instance 7d19d2c6-6de1-4096-99e4-24b4265b9c09 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  8 06:19:52 np0005475493 nova_compute[262220]: 2025-10-08 10:19:52.745 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  8 06:19:52 np0005475493 nova_compute[262220]: 2025-10-08 10:19:52.745 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  8 06:19:52 np0005475493 nova_compute[262220]: 2025-10-08 10:19:52.793 2 DEBUG nova.scheduler.client.report [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Refreshing inventories for resource provider 62e4b021-d3ae-43f9-883d-805e2c7d21a2 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Oct  8 06:19:52 np0005475493 nova_compute[262220]: 2025-10-08 10:19:52.852 2 DEBUG nova.scheduler.client.report [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Updating ProviderTree inventory for provider 62e4b021-d3ae-43f9-883d-805e2c7d21a2 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Oct  8 06:19:52 np0005475493 nova_compute[262220]: 2025-10-08 10:19:52.853 2 DEBUG nova.compute.provider_tree [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Updating inventory in ProviderTree for provider 62e4b021-d3ae-43f9-883d-805e2c7d21a2 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct  8 06:19:52 np0005475493 nova_compute[262220]: 2025-10-08 10:19:52.867 2 DEBUG nova.scheduler.client.report [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Refreshing aggregate associations for resource provider 62e4b021-d3ae-43f9-883d-805e2c7d21a2, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Oct  8 06:19:52 np0005475493 nova_compute[262220]: 2025-10-08 10:19:52.890 2 DEBUG nova.scheduler.client.report [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Refreshing trait associations for resource provider 62e4b021-d3ae-43f9-883d-805e2c7d21a2, traits: HW_CPU_X86_CLMUL,HW_CPU_X86_AVX,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_FMA3,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_FDC,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSE4A,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_ACCELERATORS,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_F16C,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SSE41,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_BMI2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_AMD_SVM,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_MMX,HW_CPU_X86_AESNI,COMPUTE_STORAGE_BUS_SATA,COMPUTE_RESCUE_BFV,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_AVX2,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_ABM,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_SHA,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_SSSE3,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SVM,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_BMI,HW_CPU_X86_SSE2 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Oct  8 06:19:52 np0005475493 nova_compute[262220]: 2025-10-08 10:19:52.919 2 DEBUG oslo_concurrency.processutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  8 06:19:53 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:19:53 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:19:53 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:19:53.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:19:53 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct  8 06:19:53 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2726016370' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  8 06:19:53 np0005475493 nova_compute[262220]: 2025-10-08 10:19:53.338 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:19:53 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1059: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 65 op/s
Oct  8 06:19:53 np0005475493 nova_compute[262220]: 2025-10-08 10:19:53.352 2 DEBUG oslo_concurrency.processutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  8 06:19:53 np0005475493 nova_compute[262220]: 2025-10-08 10:19:53.357 2 DEBUG nova.compute.provider_tree [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Inventory has not changed in ProviderTree for provider: 62e4b021-d3ae-43f9-883d-805e2c7d21a2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  8 06:19:53 np0005475493 nova_compute[262220]: 2025-10-08 10:19:53.373 2 DEBUG nova.scheduler.client.report [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Inventory has not changed for provider 62e4b021-d3ae-43f9-883d-805e2c7d21a2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  8 06:19:53 np0005475493 nova_compute[262220]: 2025-10-08 10:19:53.395 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  8 06:19:53 np0005475493 nova_compute[262220]: 2025-10-08 10:19:53.395 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.768s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  8 06:19:53 np0005475493 nova_compute[262220]: 2025-10-08 10:19:53.396 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:19:53 np0005475493 nova_compute[262220]: 2025-10-08 10:19:53.396 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Oct  8 06:19:53 np0005475493 nova_compute[262220]: 2025-10-08 10:19:53.412 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Oct  8 06:19:53 np0005475493 nova_compute[262220]: 2025-10-08 10:19:53.413 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:19:53 np0005475493 nova_compute[262220]: 2025-10-08 10:19:53.594 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:19:53 np0005475493 nova_compute[262220]: 2025-10-08 10:19:53.595 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  8 06:19:53 np0005475493 nova_compute[262220]: 2025-10-08 10:19:53.888 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:19:54 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:19:54 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  8 06:19:54 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:19:54 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  8 06:19:54 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:19:54 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 06:19:54 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:19:54 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  8 06:19:54 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:19:54 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:19:54 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:19:54.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:19:54 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:19:55 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:19:55 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:19:55 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:19:55.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:19:55 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1060: 353 pgs: 353 active+clean; 167 MiB data, 336 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 156 op/s
Oct  8 06:19:55 np0005475493 nova_compute[262220]: 2025-10-08 10:19:55.670 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:19:55 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:19:55] "GET /metrics HTTP/1.1" 200 48474 "" "Prometheus/2.51.0"
Oct  8 06:19:55 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:19:55] "GET /metrics HTTP/1.1" 200 48474 "" "Prometheus/2.51.0"
Oct  8 06:19:56 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:19:56 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:19:56 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:19:56.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:19:57 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:19:57 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 06:19:57 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:19:57.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 06:19:57 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:19:57.189Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:19:57 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  8 06:19:57 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  8 06:19:57 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct  8 06:19:57 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  8 06:19:57 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1061: 353 pgs: 353 active+clean; 167 MiB data, 336 MiB used, 60 GiB / 60 GiB avail; 346 KiB/s rd, 3.9 MiB/s wr, 92 op/s
Oct  8 06:19:57 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct  8 06:19:57 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:19:57 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct  8 06:19:57 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:19:57 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct  8 06:19:57 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  8 06:19:57 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct  8 06:19:57 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  8 06:19:57 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  8 06:19:57 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  8 06:19:57 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  8 06:19:57 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:19:57 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:19:57 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  8 06:19:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:19:57.418 163175 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  8 06:19:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:19:57.418 163175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  8 06:19:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:19:57.419 163175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  8 06:19:57 np0005475493 podman[281693]: 2025-10-08 10:19:57.511828921 +0000 UTC m=+0.064994780 container health_status 1ece418ccdf19a8019e8be8e665c938dc3c333817a211973536e2416f095c311 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct  8 06:19:57 np0005475493 podman[281694]: 2025-10-08 10:19:57.526572867 +0000 UTC m=+0.079416265 container health_status 96c17e36c3e57b043a764fc4d64e20610b8683e4881cc1857643eca7aeb37784 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct  8 06:19:57 np0005475493 podman[281794]: 2025-10-08 10:19:57.887930285 +0000 UTC m=+0.042204334 container create 567cca5d9ec0a5bc08ec0e2ab838f639f62abf066e051dcf08f10927dfddf025 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_tesla, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  8 06:19:57 np0005475493 systemd[1]: Started libpod-conmon-567cca5d9ec0a5bc08ec0e2ab838f639f62abf066e051dcf08f10927dfddf025.scope.
Oct  8 06:19:57 np0005475493 systemd[1]: Started libcrun container.
Oct  8 06:19:57 np0005475493 podman[281794]: 2025-10-08 10:19:57.865675327 +0000 UTC m=+0.019949396 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 06:19:57 np0005475493 podman[281794]: 2025-10-08 10:19:57.974816211 +0000 UTC m=+0.129090300 container init 567cca5d9ec0a5bc08ec0e2ab838f639f62abf066e051dcf08f10927dfddf025 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_tesla, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  8 06:19:57 np0005475493 podman[281794]: 2025-10-08 10:19:57.98219981 +0000 UTC m=+0.136473859 container start 567cca5d9ec0a5bc08ec0e2ab838f639f62abf066e051dcf08f10927dfddf025 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_tesla, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Oct  8 06:19:57 np0005475493 podman[281794]: 2025-10-08 10:19:57.986148957 +0000 UTC m=+0.140423036 container attach 567cca5d9ec0a5bc08ec0e2ab838f639f62abf066e051dcf08f10927dfddf025 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_tesla, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Oct  8 06:19:57 np0005475493 vibrant_tesla[281810]: 167 167
Oct  8 06:19:57 np0005475493 systemd[1]: libpod-567cca5d9ec0a5bc08ec0e2ab838f639f62abf066e051dcf08f10927dfddf025.scope: Deactivated successfully.
Oct  8 06:19:57 np0005475493 podman[281794]: 2025-10-08 10:19:57.987952906 +0000 UTC m=+0.142226955 container died 567cca5d9ec0a5bc08ec0e2ab838f639f62abf066e051dcf08f10927dfddf025 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_tesla, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct  8 06:19:58 np0005475493 systemd[1]: var-lib-containers-storage-overlay-50b6c399de0b013e2e4308e6876fd8939c4d39863206f096db4b003ee8c0d619-merged.mount: Deactivated successfully.
Oct  8 06:19:58 np0005475493 podman[281794]: 2025-10-08 10:19:58.032940759 +0000 UTC m=+0.187214808 container remove 567cca5d9ec0a5bc08ec0e2ab838f639f62abf066e051dcf08f10927dfddf025 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_tesla, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct  8 06:19:58 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:19:58 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:19:58 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:19:58.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:19:58 np0005475493 systemd[1]: libpod-conmon-567cca5d9ec0a5bc08ec0e2ab838f639f62abf066e051dcf08f10927dfddf025.scope: Deactivated successfully.
Oct  8 06:19:58 np0005475493 podman[281833]: 2025-10-08 10:19:58.183749228 +0000 UTC m=+0.039379152 container create e410e570645a9695b25898e571c69db89ecf70c72e654391cd12501a2e0cfb79 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_poincare, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  8 06:19:58 np0005475493 systemd[1]: Started libpod-conmon-e410e570645a9695b25898e571c69db89ecf70c72e654391cd12501a2e0cfb79.scope.
Oct  8 06:19:58 np0005475493 systemd[1]: Started libcrun container.
Oct  8 06:19:58 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee1692a2fadb5875bb508feaaa9ef426776b063f86e18decec2af0355f7d71cf/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  8 06:19:58 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee1692a2fadb5875bb508feaaa9ef426776b063f86e18decec2af0355f7d71cf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 06:19:58 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee1692a2fadb5875bb508feaaa9ef426776b063f86e18decec2af0355f7d71cf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 06:19:58 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee1692a2fadb5875bb508feaaa9ef426776b063f86e18decec2af0355f7d71cf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  8 06:19:58 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee1692a2fadb5875bb508feaaa9ef426776b063f86e18decec2af0355f7d71cf/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  8 06:19:58 np0005475493 podman[281833]: 2025-10-08 10:19:58.258706268 +0000 UTC m=+0.114336222 container init e410e570645a9695b25898e571c69db89ecf70c72e654391cd12501a2e0cfb79 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_poincare, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct  8 06:19:58 np0005475493 podman[281833]: 2025-10-08 10:19:58.167585805 +0000 UTC m=+0.023215749 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 06:19:58 np0005475493 podman[281833]: 2025-10-08 10:19:58.273060382 +0000 UTC m=+0.128690306 container start e410e570645a9695b25898e571c69db89ecf70c72e654391cd12501a2e0cfb79 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_poincare, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Oct  8 06:19:58 np0005475493 podman[281833]: 2025-10-08 10:19:58.278395314 +0000 UTC m=+0.134025258 container attach e410e570645a9695b25898e571c69db89ecf70c72e654391cd12501a2e0cfb79 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_poincare, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Oct  8 06:19:58 np0005475493 nova_compute[262220]: 2025-10-08 10:19:58.340 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:19:58 np0005475493 magical_poincare[281850]: --> passed data devices: 0 physical, 1 LVM
Oct  8 06:19:58 np0005475493 magical_poincare[281850]: --> All data devices are unavailable
Oct  8 06:19:58 np0005475493 systemd[1]: libpod-e410e570645a9695b25898e571c69db89ecf70c72e654391cd12501a2e0cfb79.scope: Deactivated successfully.
Oct  8 06:19:58 np0005475493 podman[281833]: 2025-10-08 10:19:58.592612701 +0000 UTC m=+0.448242645 container died e410e570645a9695b25898e571c69db89ecf70c72e654391cd12501a2e0cfb79 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_poincare, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True)
Oct  8 06:19:58 np0005475493 systemd[1]: var-lib-containers-storage-overlay-ee1692a2fadb5875bb508feaaa9ef426776b063f86e18decec2af0355f7d71cf-merged.mount: Deactivated successfully.
Oct  8 06:19:58 np0005475493 podman[281833]: 2025-10-08 10:19:58.647264915 +0000 UTC m=+0.502894839 container remove e410e570645a9695b25898e571c69db89ecf70c72e654391cd12501a2e0cfb79 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_poincare, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default)
Oct  8 06:19:58 np0005475493 systemd[1]: libpod-conmon-e410e570645a9695b25898e571c69db89ecf70c72e654391cd12501a2e0cfb79.scope: Deactivated successfully.
Oct  8 06:19:58 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:19:58.844Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:19:59 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:19:58 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  8 06:19:59 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:19:58 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  8 06:19:59 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:19:58 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 06:19:59 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:19:59 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  8 06:19:59 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:19:59 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:19:59 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:19:59.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:19:59 np0005475493 podman[281974]: 2025-10-08 10:19:59.231942355 +0000 UTC m=+0.038298788 container create e146052553288dc8e161aab44af24abe147d3336ac7aa7bfc117a8a696caea1b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_sinoussi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct  8 06:19:59 np0005475493 systemd[1]: Started libpod-conmon-e146052553288dc8e161aab44af24abe147d3336ac7aa7bfc117a8a696caea1b.scope.
Oct  8 06:19:59 np0005475493 systemd[1]: Started libcrun container.
Oct  8 06:19:59 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1062: 353 pgs: 353 active+clean; 167 MiB data, 336 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 167 op/s
Oct  8 06:19:59 np0005475493 podman[281974]: 2025-10-08 10:19:59.303772835 +0000 UTC m=+0.110129288 container init e146052553288dc8e161aab44af24abe147d3336ac7aa7bfc117a8a696caea1b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_sinoussi, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  8 06:19:59 np0005475493 podman[281974]: 2025-10-08 10:19:59.214657566 +0000 UTC m=+0.021014029 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 06:19:59 np0005475493 podman[281974]: 2025-10-08 10:19:59.310276144 +0000 UTC m=+0.116632577 container start e146052553288dc8e161aab44af24abe147d3336ac7aa7bfc117a8a696caea1b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_sinoussi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True)
Oct  8 06:19:59 np0005475493 podman[281974]: 2025-10-08 10:19:59.313277671 +0000 UTC m=+0.119634114 container attach e146052553288dc8e161aab44af24abe147d3336ac7aa7bfc117a8a696caea1b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_sinoussi, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  8 06:19:59 np0005475493 loving_sinoussi[281990]: 167 167
Oct  8 06:19:59 np0005475493 systemd[1]: libpod-e146052553288dc8e161aab44af24abe147d3336ac7aa7bfc117a8a696caea1b.scope: Deactivated successfully.
Oct  8 06:19:59 np0005475493 podman[281974]: 2025-10-08 10:19:59.316506595 +0000 UTC m=+0.122863028 container died e146052553288dc8e161aab44af24abe147d3336ac7aa7bfc117a8a696caea1b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_sinoussi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  8 06:19:59 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:19:59 np0005475493 systemd[1]: var-lib-containers-storage-overlay-f6b1cc7940ab16c10bce76c2623833643e3a08eedbeb16dc4a3707743970c9f8-merged.mount: Deactivated successfully.
Oct  8 06:19:59 np0005475493 podman[281974]: 2025-10-08 10:19:59.353766649 +0000 UTC m=+0.160123082 container remove e146052553288dc8e161aab44af24abe147d3336ac7aa7bfc117a8a696caea1b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_sinoussi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Oct  8 06:19:59 np0005475493 systemd[1]: libpod-conmon-e146052553288dc8e161aab44af24abe147d3336ac7aa7bfc117a8a696caea1b.scope: Deactivated successfully.
Oct  8 06:19:59 np0005475493 podman[282014]: 2025-10-08 10:19:59.51236032 +0000 UTC m=+0.040687225 container create bc363d00e776aaf57306b37cd9f76f063fea8e59af2cb46c4d87e5cf55bf5887 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_mahavira, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  8 06:19:59 np0005475493 systemd[1]: Started libpod-conmon-bc363d00e776aaf57306b37cd9f76f063fea8e59af2cb46c4d87e5cf55bf5887.scope.
Oct  8 06:19:59 np0005475493 systemd[1]: Started libcrun container.
Oct  8 06:19:59 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3daeb8f8d70b37c9da0d1a191dffd58d48a5d16f1955f552bab203d9eaf8bef4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  8 06:19:59 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3daeb8f8d70b37c9da0d1a191dffd58d48a5d16f1955f552bab203d9eaf8bef4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 06:19:59 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3daeb8f8d70b37c9da0d1a191dffd58d48a5d16f1955f552bab203d9eaf8bef4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 06:19:59 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3daeb8f8d70b37c9da0d1a191dffd58d48a5d16f1955f552bab203d9eaf8bef4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  8 06:19:59 np0005475493 podman[282014]: 2025-10-08 10:19:59.494761751 +0000 UTC m=+0.023088666 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 06:19:59 np0005475493 podman[282014]: 2025-10-08 10:19:59.604725062 +0000 UTC m=+0.133051967 container init bc363d00e776aaf57306b37cd9f76f063fea8e59af2cb46c4d87e5cf55bf5887 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_mahavira, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  8 06:19:59 np0005475493 podman[282014]: 2025-10-08 10:19:59.611842253 +0000 UTC m=+0.140169138 container start bc363d00e776aaf57306b37cd9f76f063fea8e59af2cb46c4d87e5cf55bf5887 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_mahavira, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  8 06:19:59 np0005475493 podman[282014]: 2025-10-08 10:19:59.615726077 +0000 UTC m=+0.144052982 container attach bc363d00e776aaf57306b37cd9f76f063fea8e59af2cb46c4d87e5cf55bf5887 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_mahavira, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  8 06:19:59 np0005475493 condescending_mahavira[282030]: {
Oct  8 06:19:59 np0005475493 condescending_mahavira[282030]:    "1": [
Oct  8 06:19:59 np0005475493 condescending_mahavira[282030]:        {
Oct  8 06:19:59 np0005475493 condescending_mahavira[282030]:            "devices": [
Oct  8 06:19:59 np0005475493 condescending_mahavira[282030]:                "/dev/loop3"
Oct  8 06:19:59 np0005475493 condescending_mahavira[282030]:            ],
Oct  8 06:19:59 np0005475493 condescending_mahavira[282030]:            "lv_name": "ceph_lv0",
Oct  8 06:19:59 np0005475493 condescending_mahavira[282030]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  8 06:19:59 np0005475493 condescending_mahavira[282030]:            "lv_size": "21470642176",
Oct  8 06:19:59 np0005475493 condescending_mahavira[282030]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=787292cc-8154-50c4-9e00-e9be3e817149,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=85fe3e7b-5e0f-4a19-934c-310215b2e933,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct  8 06:19:59 np0005475493 condescending_mahavira[282030]:            "lv_uuid": "znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj",
Oct  8 06:19:59 np0005475493 condescending_mahavira[282030]:            "name": "ceph_lv0",
Oct  8 06:19:59 np0005475493 condescending_mahavira[282030]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  8 06:19:59 np0005475493 condescending_mahavira[282030]:            "tags": {
Oct  8 06:19:59 np0005475493 condescending_mahavira[282030]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  8 06:19:59 np0005475493 condescending_mahavira[282030]:                "ceph.block_uuid": "znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj",
Oct  8 06:19:59 np0005475493 condescending_mahavira[282030]:                "ceph.cephx_lockbox_secret": "",
Oct  8 06:19:59 np0005475493 condescending_mahavira[282030]:                "ceph.cluster_fsid": "787292cc-8154-50c4-9e00-e9be3e817149",
Oct  8 06:19:59 np0005475493 condescending_mahavira[282030]:                "ceph.cluster_name": "ceph",
Oct  8 06:19:59 np0005475493 condescending_mahavira[282030]:                "ceph.crush_device_class": "",
Oct  8 06:19:59 np0005475493 condescending_mahavira[282030]:                "ceph.encrypted": "0",
Oct  8 06:19:59 np0005475493 condescending_mahavira[282030]:                "ceph.osd_fsid": "85fe3e7b-5e0f-4a19-934c-310215b2e933",
Oct  8 06:19:59 np0005475493 condescending_mahavira[282030]:                "ceph.osd_id": "1",
Oct  8 06:19:59 np0005475493 condescending_mahavira[282030]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  8 06:19:59 np0005475493 condescending_mahavira[282030]:                "ceph.type": "block",
Oct  8 06:19:59 np0005475493 condescending_mahavira[282030]:                "ceph.vdo": "0",
Oct  8 06:19:59 np0005475493 condescending_mahavira[282030]:                "ceph.with_tpm": "0"
Oct  8 06:19:59 np0005475493 condescending_mahavira[282030]:            },
Oct  8 06:19:59 np0005475493 condescending_mahavira[282030]:            "type": "block",
Oct  8 06:19:59 np0005475493 condescending_mahavira[282030]:            "vg_name": "ceph_vg0"
Oct  8 06:19:59 np0005475493 condescending_mahavira[282030]:        }
Oct  8 06:19:59 np0005475493 condescending_mahavira[282030]:    ]
Oct  8 06:19:59 np0005475493 condescending_mahavira[282030]: }
Oct  8 06:19:59 np0005475493 systemd[1]: libpod-bc363d00e776aaf57306b37cd9f76f063fea8e59af2cb46c4d87e5cf55bf5887.scope: Deactivated successfully.
Oct  8 06:19:59 np0005475493 podman[282014]: 2025-10-08 10:19:59.908836053 +0000 UTC m=+0.437162928 container died bc363d00e776aaf57306b37cd9f76f063fea8e59af2cb46c4d87e5cf55bf5887 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_mahavira, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct  8 06:19:59 np0005475493 systemd[1]: var-lib-containers-storage-overlay-3daeb8f8d70b37c9da0d1a191dffd58d48a5d16f1955f552bab203d9eaf8bef4-merged.mount: Deactivated successfully.
Oct  8 06:19:59 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:19:59.995 163175 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=26869918-b723-425c-a2e1-0d697f3d0fec, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '13'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  8 06:20:00 np0005475493 ceph-mon[73572]: log_channel(cluster) log [WRN] : Health detail: HEALTH_WARN 1 failed cephadm daemon(s)
Oct  8 06:20:00 np0005475493 ceph-mon[73572]: log_channel(cluster) log [WRN] : [WRN] CEPHADM_FAILED_DAEMON: 1 failed cephadm daemon(s)
Oct  8 06:20:00 np0005475493 ceph-mon[73572]: log_channel(cluster) log [WRN] :     daemon nfs.cephfs.0.0.compute-1.lgtqnn on compute-1 is in error state
Oct  8 06:20:00 np0005475493 podman[282014]: 2025-10-08 10:20:00.015426034 +0000 UTC m=+0.543752939 container remove bc363d00e776aaf57306b37cd9f76f063fea8e59af2cb46c4d87e5cf55bf5887 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_mahavira, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  8 06:20:00 np0005475493 systemd[1]: libpod-conmon-bc363d00e776aaf57306b37cd9f76f063fea8e59af2cb46c4d87e5cf55bf5887.scope: Deactivated successfully.
Oct  8 06:20:00 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:20:00 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:20:00 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:20:00.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:20:00 np0005475493 ceph-mon[73572]: Health detail: HEALTH_WARN 1 failed cephadm daemon(s)
Oct  8 06:20:00 np0005475493 ceph-mon[73572]: [WRN] CEPHADM_FAILED_DAEMON: 1 failed cephadm daemon(s)
Oct  8 06:20:00 np0005475493 ceph-mon[73572]:    daemon nfs.cephfs.0.0.compute-1.lgtqnn on compute-1 is in error state
Oct  8 06:20:00 np0005475493 podman[282144]: 2025-10-08 10:20:00.561171567 +0000 UTC m=+0.035823998 container create 831ab956a855ada7f512d59aec65d943b94e85e7949c64ca14b912ac486d6456 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_beaver, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  8 06:20:00 np0005475493 systemd[1]: Started libpod-conmon-831ab956a855ada7f512d59aec65d943b94e85e7949c64ca14b912ac486d6456.scope.
Oct  8 06:20:00 np0005475493 systemd[1]: Started libcrun container.
Oct  8 06:20:00 np0005475493 podman[282144]: 2025-10-08 10:20:00.633562115 +0000 UTC m=+0.108214576 container init 831ab956a855ada7f512d59aec65d943b94e85e7949c64ca14b912ac486d6456 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_beaver, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct  8 06:20:00 np0005475493 podman[282144]: 2025-10-08 10:20:00.640381834 +0000 UTC m=+0.115034265 container start 831ab956a855ada7f512d59aec65d943b94e85e7949c64ca14b912ac486d6456 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_beaver, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  8 06:20:00 np0005475493 podman[282144]: 2025-10-08 10:20:00.546230814 +0000 UTC m=+0.020883265 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 06:20:00 np0005475493 podman[282144]: 2025-10-08 10:20:00.643378971 +0000 UTC m=+0.118031402 container attach 831ab956a855ada7f512d59aec65d943b94e85e7949c64ca14b912ac486d6456 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_beaver, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Oct  8 06:20:00 np0005475493 vibrant_beaver[282161]: 167 167
Oct  8 06:20:00 np0005475493 systemd[1]: libpod-831ab956a855ada7f512d59aec65d943b94e85e7949c64ca14b912ac486d6456.scope: Deactivated successfully.
Oct  8 06:20:00 np0005475493 podman[282144]: 2025-10-08 10:20:00.646352177 +0000 UTC m=+0.121004628 container died 831ab956a855ada7f512d59aec65d943b94e85e7949c64ca14b912ac486d6456 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_beaver, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  8 06:20:00 np0005475493 systemd[1]: var-lib-containers-storage-overlay-e5a74ca4c9711c5213c7647ddaa7db7488ff9f08d8f860324120153e7f0e74e6-merged.mount: Deactivated successfully.
Oct  8 06:20:00 np0005475493 nova_compute[262220]: 2025-10-08 10:20:00.672 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:20:00 np0005475493 podman[282144]: 2025-10-08 10:20:00.677834464 +0000 UTC m=+0.152486895 container remove 831ab956a855ada7f512d59aec65d943b94e85e7949c64ca14b912ac486d6456 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_beaver, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  8 06:20:00 np0005475493 systemd[1]: libpod-conmon-831ab956a855ada7f512d59aec65d943b94e85e7949c64ca14b912ac486d6456.scope: Deactivated successfully.
Oct  8 06:20:00 np0005475493 podman[282185]: 2025-10-08 10:20:00.838361907 +0000 UTC m=+0.038643999 container create 6cffc1d7f4231e7e511ba90fe8c4e2dceb92213b10f47da54e22835ded1ef2ad (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_matsumoto, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Oct  8 06:20:00 np0005475493 systemd[1]: Started libpod-conmon-6cffc1d7f4231e7e511ba90fe8c4e2dceb92213b10f47da54e22835ded1ef2ad.scope.
Oct  8 06:20:00 np0005475493 systemd[1]: Started libcrun container.
Oct  8 06:20:00 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce08e4d9deb60ddc5424bec8df7b856521f46c91011f60575f0f867c29282780/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  8 06:20:00 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce08e4d9deb60ddc5424bec8df7b856521f46c91011f60575f0f867c29282780/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 06:20:00 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce08e4d9deb60ddc5424bec8df7b856521f46c91011f60575f0f867c29282780/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 06:20:00 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce08e4d9deb60ddc5424bec8df7b856521f46c91011f60575f0f867c29282780/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  8 06:20:00 np0005475493 podman[282185]: 2025-10-08 10:20:00.821114191 +0000 UTC m=+0.021396303 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 06:20:00 np0005475493 podman[282185]: 2025-10-08 10:20:00.921270275 +0000 UTC m=+0.121552397 container init 6cffc1d7f4231e7e511ba90fe8c4e2dceb92213b10f47da54e22835ded1ef2ad (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_matsumoto, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  8 06:20:00 np0005475493 podman[282185]: 2025-10-08 10:20:00.931879207 +0000 UTC m=+0.132161299 container start 6cffc1d7f4231e7e511ba90fe8c4e2dceb92213b10f47da54e22835ded1ef2ad (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_matsumoto, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  8 06:20:00 np0005475493 podman[282185]: 2025-10-08 10:20:00.935107391 +0000 UTC m=+0.135389483 container attach 6cffc1d7f4231e7e511ba90fe8c4e2dceb92213b10f47da54e22835ded1ef2ad (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_matsumoto, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct  8 06:20:01 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:20:01 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 06:20:01 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:20:01.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 06:20:01 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1063: 353 pgs: 353 active+clean; 167 MiB data, 336 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 166 op/s
Oct  8 06:20:01 np0005475493 lvm[282277]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct  8 06:20:01 np0005475493 lvm[282277]: VG ceph_vg0 finished
Oct  8 06:20:01 np0005475493 youthful_matsumoto[282202]: {}
Oct  8 06:20:01 np0005475493 systemd[1]: libpod-6cffc1d7f4231e7e511ba90fe8c4e2dceb92213b10f47da54e22835ded1ef2ad.scope: Deactivated successfully.
Oct  8 06:20:01 np0005475493 podman[282185]: 2025-10-08 10:20:01.669234577 +0000 UTC m=+0.869516669 container died 6cffc1d7f4231e7e511ba90fe8c4e2dceb92213b10f47da54e22835ded1ef2ad (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_matsumoto, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct  8 06:20:01 np0005475493 systemd[1]: libpod-6cffc1d7f4231e7e511ba90fe8c4e2dceb92213b10f47da54e22835ded1ef2ad.scope: Consumed 1.130s CPU time.
Oct  8 06:20:01 np0005475493 systemd[1]: var-lib-containers-storage-overlay-ce08e4d9deb60ddc5424bec8df7b856521f46c91011f60575f0f867c29282780-merged.mount: Deactivated successfully.
Oct  8 06:20:01 np0005475493 podman[282185]: 2025-10-08 10:20:01.709023011 +0000 UTC m=+0.909305093 container remove 6cffc1d7f4231e7e511ba90fe8c4e2dceb92213b10f47da54e22835ded1ef2ad (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_matsumoto, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Oct  8 06:20:01 np0005475493 systemd[1]: libpod-conmon-6cffc1d7f4231e7e511ba90fe8c4e2dceb92213b10f47da54e22835ded1ef2ad.scope: Deactivated successfully.
Oct  8 06:20:01 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  8 06:20:01 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:20:01 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  8 06:20:01 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:20:02 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:20:02 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:20:02 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:20:02.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:20:02 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:20:02 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:20:02 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  8 06:20:02 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  8 06:20:03 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:20:03 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:20:03 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:20:03.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:20:03 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1064: 353 pgs: 353 active+clean; 167 MiB data, 336 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 166 op/s
Oct  8 06:20:03 np0005475493 nova_compute[262220]: 2025-10-08 10:20:03.343 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:20:04 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:20:03 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  8 06:20:04 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:20:03 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  8 06:20:04 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:20:03 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 06:20:04 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:20:04 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  8 06:20:04 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:20:04 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 06:20:04 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:20:04.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 06:20:04 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:20:05 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:20:05 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:20:05 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:20:05.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:20:05 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1065: 353 pgs: 353 active+clean; 167 MiB data, 336 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 166 op/s
Oct  8 06:20:05 np0005475493 nova_compute[262220]: 2025-10-08 10:20:05.676 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:20:05 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:20:05] "GET /metrics HTTP/1.1" 200 48475 "" "Prometheus/2.51.0"
Oct  8 06:20:05 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:20:05] "GET /metrics HTTP/1.1" 200 48475 "" "Prometheus/2.51.0"
Oct  8 06:20:06 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:20:06 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:20:06 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:20:06.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:20:07 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:20:07 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:20:07 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:20:07.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:20:07 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:20:07.189Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct  8 06:20:07 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:20:07.190Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct  8 06:20:07 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1066: 353 pgs: 353 active+clean; 167 MiB data, 336 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 28 KiB/s wr, 75 op/s
Oct  8 06:20:08 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:20:08 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:20:08 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:20:08.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:20:08 np0005475493 nova_compute[262220]: 2025-10-08 10:20:08.346 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:20:08 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:20:08.846Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:20:09 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:20:08 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  8 06:20:09 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:20:08 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  8 06:20:09 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:20:08 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 06:20:09 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:20:09 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  8 06:20:09 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:20:09 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:20:09 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:20:09.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:20:09 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1067: 353 pgs: 353 active+clean; 188 MiB data, 360 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 122 op/s
Oct  8 06:20:09 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:20:10 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:20:10 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:20:10 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:20:10.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:20:10 np0005475493 nova_compute[262220]: 2025-10-08 10:20:10.679 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:20:10 np0005475493 podman[282353]: 2025-10-08 10:20:10.950555587 +0000 UTC m=+0.089799200 container health_status 2ac58bf9d2c7421f24478941b0f23af12cc30298ac84467db16bf52ecb1157c3 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid)
Oct  8 06:20:11 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:20:11 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 06:20:11 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:20:11.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 06:20:11 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1068: 353 pgs: 353 active+clean; 188 MiB data, 360 MiB used, 60 GiB / 60 GiB avail; 265 KiB/s rd, 2.0 MiB/s wr, 47 op/s
Oct  8 06:20:12 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:20:12 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:20:12 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:20:12.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:20:13 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:20:13 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:20:13 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:20:13.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:20:13 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1069: 353 pgs: 353 active+clean; 188 MiB data, 360 MiB used, 60 GiB / 60 GiB avail; 265 KiB/s rd, 2.0 MiB/s wr, 47 op/s
Oct  8 06:20:13 np0005475493 nova_compute[262220]: 2025-10-08 10:20:13.347 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:20:14 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:20:13 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  8 06:20:14 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:20:13 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  8 06:20:14 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:20:13 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 06:20:14 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:20:14 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  8 06:20:14 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:20:14 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:20:14 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:20:14.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:20:14 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:20:15 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:20:15 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:20:15 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:20:15.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:20:15 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1070: 353 pgs: 353 active+clean; 200 MiB data, 361 MiB used, 60 GiB / 60 GiB avail; 329 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Oct  8 06:20:15 np0005475493 nova_compute[262220]: 2025-10-08 10:20:15.680 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:20:15 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:20:15] "GET /metrics HTTP/1.1" 200 48475 "" "Prometheus/2.51.0"
Oct  8 06:20:15 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:20:15] "GET /metrics HTTP/1.1" 200 48475 "" "Prometheus/2.51.0"
Oct  8 06:20:16 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:20:16 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:20:16 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:20:16.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:20:16 np0005475493 nova_compute[262220]: 2025-10-08 10:20:16.709 2 INFO nova.compute.manager [None req-903dcb2d-0f0e-48f8-b59f-d833809f74e0 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] Get console output#033[00m
Oct  8 06:20:16 np0005475493 nova_compute[262220]: 2025-10-08 10:20:16.715 631 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Oct  8 06:20:17 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:20:17 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:20:17 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:20:17.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:20:17 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:20:17.191Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:20:17 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1071: 353 pgs: 353 active+clean; 200 MiB data, 361 MiB used, 60 GiB / 60 GiB avail; 328 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Oct  8 06:20:17 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  8 06:20:17 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  8 06:20:17 np0005475493 nova_compute[262220]: 2025-10-08 10:20:17.918 2 DEBUG nova.compute.manager [req-b652370b-0fa0-4c89-99db-a3b956f1a813 req-6c5e239c-d2da-48e9-92b6-6818ac9de2aa 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] Received event network-changed-29abf06b-1e1a-46cb-9cc1-7fa777795883 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  8 06:20:17 np0005475493 nova_compute[262220]: 2025-10-08 10:20:17.918 2 DEBUG nova.compute.manager [req-b652370b-0fa0-4c89-99db-a3b956f1a813 req-6c5e239c-d2da-48e9-92b6-6818ac9de2aa 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] Refreshing instance network info cache due to event network-changed-29abf06b-1e1a-46cb-9cc1-7fa777795883. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  8 06:20:17 np0005475493 nova_compute[262220]: 2025-10-08 10:20:17.918 2 DEBUG oslo_concurrency.lockutils [req-b652370b-0fa0-4c89-99db-a3b956f1a813 req-6c5e239c-d2da-48e9-92b6-6818ac9de2aa 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Acquiring lock "refresh_cache-7d19d2c6-6de1-4096-99e4-24b4265b9c09" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  8 06:20:17 np0005475493 nova_compute[262220]: 2025-10-08 10:20:17.918 2 DEBUG oslo_concurrency.lockutils [req-b652370b-0fa0-4c89-99db-a3b956f1a813 req-6c5e239c-d2da-48e9-92b6-6818ac9de2aa 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Acquired lock "refresh_cache-7d19d2c6-6de1-4096-99e4-24b4265b9c09" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  8 06:20:17 np0005475493 nova_compute[262220]: 2025-10-08 10:20:17.918 2 DEBUG nova.network.neutron [req-b652370b-0fa0-4c89-99db-a3b956f1a813 req-6c5e239c-d2da-48e9-92b6-6818ac9de2aa 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] Refreshing network info cache for port 29abf06b-1e1a-46cb-9cc1-7fa777795883 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  8 06:20:17 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 06:20:17 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 06:20:18 np0005475493 nova_compute[262220]: 2025-10-08 10:20:18.021 2 DEBUG nova.compute.manager [req-81f9c2e6-39a7-4e48-b277-c32135c4ae07 req-648e11aa-fd03-4fd8-869d-f3ac2947b6df 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] Received event network-vif-unplugged-29abf06b-1e1a-46cb-9cc1-7fa777795883 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  8 06:20:18 np0005475493 nova_compute[262220]: 2025-10-08 10:20:18.021 2 DEBUG oslo_concurrency.lockutils [req-81f9c2e6-39a7-4e48-b277-c32135c4ae07 req-648e11aa-fd03-4fd8-869d-f3ac2947b6df 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Acquiring lock "7d19d2c6-6de1-4096-99e4-24b4265b9c09-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  8 06:20:18 np0005475493 nova_compute[262220]: 2025-10-08 10:20:18.022 2 DEBUG oslo_concurrency.lockutils [req-81f9c2e6-39a7-4e48-b277-c32135c4ae07 req-648e11aa-fd03-4fd8-869d-f3ac2947b6df 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Lock "7d19d2c6-6de1-4096-99e4-24b4265b9c09-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  8 06:20:18 np0005475493 nova_compute[262220]: 2025-10-08 10:20:18.022 2 DEBUG oslo_concurrency.lockutils [req-81f9c2e6-39a7-4e48-b277-c32135c4ae07 req-648e11aa-fd03-4fd8-869d-f3ac2947b6df 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Lock "7d19d2c6-6de1-4096-99e4-24b4265b9c09-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  8 06:20:18 np0005475493 nova_compute[262220]: 2025-10-08 10:20:18.022 2 DEBUG nova.compute.manager [req-81f9c2e6-39a7-4e48-b277-c32135c4ae07 req-648e11aa-fd03-4fd8-869d-f3ac2947b6df 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] No waiting events found dispatching network-vif-unplugged-29abf06b-1e1a-46cb-9cc1-7fa777795883 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  8 06:20:18 np0005475493 nova_compute[262220]: 2025-10-08 10:20:18.022 2 WARNING nova.compute.manager [req-81f9c2e6-39a7-4e48-b277-c32135c4ae07 req-648e11aa-fd03-4fd8-869d-f3ac2947b6df 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] Received unexpected event network-vif-unplugged-29abf06b-1e1a-46cb-9cc1-7fa777795883 for instance with vm_state active and task_state None.
Oct  8 06:20:18 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:20:18 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:20:18 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:20:18.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:20:18 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 06:20:18 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 06:20:18 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 06:20:18 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 06:20:18 np0005475493 nova_compute[262220]: 2025-10-08 10:20:18.349 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  8 06:20:18 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:20:18.847Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:20:18 np0005475493 nova_compute[262220]: 2025-10-08 10:20:18.960 2 INFO nova.compute.manager [None req-daf7a5dd-7ec8-485f-bb94-4a0835f57953 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] Get console output
Oct  8 06:20:18 np0005475493 nova_compute[262220]: 2025-10-08 10:20:18.965 631 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Oct  8 06:20:19 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:20:18 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  8 06:20:19 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:20:18 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  8 06:20:19 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:20:18 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 06:20:19 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:20:19 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  8 06:20:19 np0005475493 nova_compute[262220]: 2025-10-08 10:20:19.069 2 DEBUG nova.network.neutron [req-b652370b-0fa0-4c89-99db-a3b956f1a813 req-6c5e239c-d2da-48e9-92b6-6818ac9de2aa 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] Updated VIF entry in instance network info cache for port 29abf06b-1e1a-46cb-9cc1-7fa777795883. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct  8 06:20:19 np0005475493 nova_compute[262220]: 2025-10-08 10:20:19.070 2 DEBUG nova.network.neutron [req-b652370b-0fa0-4c89-99db-a3b956f1a813 req-6c5e239c-d2da-48e9-92b6-6818ac9de2aa 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] Updating instance_info_cache with network_info: [{"id": "29abf06b-1e1a-46cb-9cc1-7fa777795883", "address": "fa:16:3e:00:0d:2d", "network": {"id": "c18c7476-aaa8-4977-81b5-fb17e88446e2", "bridge": "br-int", "label": "tempest-network-smoke--1578716951", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0edb2c85b88b4b168a4b2e8c5ed4a05c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29abf06b-1e", "ovs_interfaceid": "29abf06b-1e1a-46cb-9cc1-7fa777795883", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  8 06:20:19 np0005475493 nova_compute[262220]: 2025-10-08 10:20:19.086 2 DEBUG oslo_concurrency.lockutils [req-b652370b-0fa0-4c89-99db-a3b956f1a813 req-6c5e239c-d2da-48e9-92b6-6818ac9de2aa 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Releasing lock "refresh_cache-7d19d2c6-6de1-4096-99e4-24b4265b9c09" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  8 06:20:19 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:20:19 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:20:19 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:20:19.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:20:19 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1072: 353 pgs: 353 active+clean; 200 MiB data, 361 MiB used, 60 GiB / 60 GiB avail; 335 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Oct  8 06:20:19 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:20:20 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:20:20 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:20:20 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:20:20.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:20:20 np0005475493 nova_compute[262220]: 2025-10-08 10:20:20.110 2 DEBUG nova.compute.manager [req-7294a379-e189-4365-8c0a-d5f0f0a537ef req-5a622126-858a-4b7e-b085-d2122ec1ca78 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] Received event network-vif-plugged-29abf06b-1e1a-46cb-9cc1-7fa777795883 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  8 06:20:20 np0005475493 nova_compute[262220]: 2025-10-08 10:20:20.111 2 DEBUG oslo_concurrency.lockutils [req-7294a379-e189-4365-8c0a-d5f0f0a537ef req-5a622126-858a-4b7e-b085-d2122ec1ca78 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Acquiring lock "7d19d2c6-6de1-4096-99e4-24b4265b9c09-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  8 06:20:20 np0005475493 nova_compute[262220]: 2025-10-08 10:20:20.111 2 DEBUG oslo_concurrency.lockutils [req-7294a379-e189-4365-8c0a-d5f0f0a537ef req-5a622126-858a-4b7e-b085-d2122ec1ca78 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Lock "7d19d2c6-6de1-4096-99e4-24b4265b9c09-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  8 06:20:20 np0005475493 nova_compute[262220]: 2025-10-08 10:20:20.111 2 DEBUG oslo_concurrency.lockutils [req-7294a379-e189-4365-8c0a-d5f0f0a537ef req-5a622126-858a-4b7e-b085-d2122ec1ca78 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Lock "7d19d2c6-6de1-4096-99e4-24b4265b9c09-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  8 06:20:20 np0005475493 nova_compute[262220]: 2025-10-08 10:20:20.112 2 DEBUG nova.compute.manager [req-7294a379-e189-4365-8c0a-d5f0f0a537ef req-5a622126-858a-4b7e-b085-d2122ec1ca78 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] No waiting events found dispatching network-vif-plugged-29abf06b-1e1a-46cb-9cc1-7fa777795883 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  8 06:20:20 np0005475493 nova_compute[262220]: 2025-10-08 10:20:20.112 2 WARNING nova.compute.manager [req-7294a379-e189-4365-8c0a-d5f0f0a537ef req-5a622126-858a-4b7e-b085-d2122ec1ca78 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] Received unexpected event network-vif-plugged-29abf06b-1e1a-46cb-9cc1-7fa777795883 for instance with vm_state active and task_state None.
Oct  8 06:20:20 np0005475493 nova_compute[262220]: 2025-10-08 10:20:20.682 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  8 06:20:21 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:20:21 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:20:21 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:20:21.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:20:21 np0005475493 nova_compute[262220]: 2025-10-08 10:20:21.104 2 INFO nova.compute.manager [None req-01e54040-c2ee-4c9a-ba77-327543b8aaf4 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] Get console output
Oct  8 06:20:21 np0005475493 nova_compute[262220]: 2025-10-08 10:20:21.107 631 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Oct  8 06:20:21 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1073: 353 pgs: 353 active+clean; 200 MiB data, 361 MiB used, 60 GiB / 60 GiB avail; 71 KiB/s rd, 107 KiB/s wr, 19 op/s
Oct  8 06:20:21 np0005475493 podman[282384]: 2025-10-08 10:20:21.96524768 +0000 UTC m=+0.122485046 container health_status 750e81e9592502f2fa35d047c8ae9bd75eb38ed415148db558454f5281397e7f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct  8 06:20:22 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:20:22 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:20:22 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:20:22.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:20:22 np0005475493 nova_compute[262220]: 2025-10-08 10:20:22.240 2 DEBUG nova.compute.manager [req-f9d9c56f-fa48-4bb2-bbf5-a9f1705a1bd0 req-7185fd02-4e5c-4236-ad01-4de2fbe3ef10 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] Received event network-changed-29abf06b-1e1a-46cb-9cc1-7fa777795883 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  8 06:20:22 np0005475493 nova_compute[262220]: 2025-10-08 10:20:22.241 2 DEBUG nova.compute.manager [req-f9d9c56f-fa48-4bb2-bbf5-a9f1705a1bd0 req-7185fd02-4e5c-4236-ad01-4de2fbe3ef10 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] Refreshing instance network info cache due to event network-changed-29abf06b-1e1a-46cb-9cc1-7fa777795883. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct  8 06:20:22 np0005475493 nova_compute[262220]: 2025-10-08 10:20:22.241 2 DEBUG oslo_concurrency.lockutils [req-f9d9c56f-fa48-4bb2-bbf5-a9f1705a1bd0 req-7185fd02-4e5c-4236-ad01-4de2fbe3ef10 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Acquiring lock "refresh_cache-7d19d2c6-6de1-4096-99e4-24b4265b9c09" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  8 06:20:22 np0005475493 nova_compute[262220]: 2025-10-08 10:20:22.242 2 DEBUG oslo_concurrency.lockutils [req-f9d9c56f-fa48-4bb2-bbf5-a9f1705a1bd0 req-7185fd02-4e5c-4236-ad01-4de2fbe3ef10 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Acquired lock "refresh_cache-7d19d2c6-6de1-4096-99e4-24b4265b9c09" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  8 06:20:22 np0005475493 nova_compute[262220]: 2025-10-08 10:20:22.242 2 DEBUG nova.network.neutron [req-f9d9c56f-fa48-4bb2-bbf5-a9f1705a1bd0 req-7185fd02-4e5c-4236-ad01-4de2fbe3ef10 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] Refreshing network info cache for port 29abf06b-1e1a-46cb-9cc1-7fa777795883 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct  8 06:20:23 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:20:23 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:20:23 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:20:23.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:20:23 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1074: 353 pgs: 353 active+clean; 200 MiB data, 361 MiB used, 60 GiB / 60 GiB avail; 71 KiB/s rd, 107 KiB/s wr, 19 op/s
Oct  8 06:20:23 np0005475493 nova_compute[262220]: 2025-10-08 10:20:23.351 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  8 06:20:23 np0005475493 nova_compute[262220]: 2025-10-08 10:20:23.608 2 DEBUG nova.network.neutron [req-f9d9c56f-fa48-4bb2-bbf5-a9f1705a1bd0 req-7185fd02-4e5c-4236-ad01-4de2fbe3ef10 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] Updated VIF entry in instance network info cache for port 29abf06b-1e1a-46cb-9cc1-7fa777795883. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct  8 06:20:23 np0005475493 nova_compute[262220]: 2025-10-08 10:20:23.609 2 DEBUG nova.network.neutron [req-f9d9c56f-fa48-4bb2-bbf5-a9f1705a1bd0 req-7185fd02-4e5c-4236-ad01-4de2fbe3ef10 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] Updating instance_info_cache with network_info: [{"id": "29abf06b-1e1a-46cb-9cc1-7fa777795883", "address": "fa:16:3e:00:0d:2d", "network": {"id": "c18c7476-aaa8-4977-81b5-fb17e88446e2", "bridge": "br-int", "label": "tempest-network-smoke--1578716951", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0edb2c85b88b4b168a4b2e8c5ed4a05c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29abf06b-1e", "ovs_interfaceid": "29abf06b-1e1a-46cb-9cc1-7fa777795883", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  8 06:20:23 np0005475493 nova_compute[262220]: 2025-10-08 10:20:23.637 2 DEBUG oslo_concurrency.lockutils [req-f9d9c56f-fa48-4bb2-bbf5-a9f1705a1bd0 req-7185fd02-4e5c-4236-ad01-4de2fbe3ef10 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Releasing lock "refresh_cache-7d19d2c6-6de1-4096-99e4-24b4265b9c09" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  8 06:20:23 np0005475493 nova_compute[262220]: 2025-10-08 10:20:23.638 2 DEBUG nova.compute.manager [req-f9d9c56f-fa48-4bb2-bbf5-a9f1705a1bd0 req-7185fd02-4e5c-4236-ad01-4de2fbe3ef10 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] Received event network-vif-plugged-29abf06b-1e1a-46cb-9cc1-7fa777795883 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  8 06:20:23 np0005475493 nova_compute[262220]: 2025-10-08 10:20:23.638 2 DEBUG oslo_concurrency.lockutils [req-f9d9c56f-fa48-4bb2-bbf5-a9f1705a1bd0 req-7185fd02-4e5c-4236-ad01-4de2fbe3ef10 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Acquiring lock "7d19d2c6-6de1-4096-99e4-24b4265b9c09-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  8 06:20:23 np0005475493 nova_compute[262220]: 2025-10-08 10:20:23.638 2 DEBUG oslo_concurrency.lockutils [req-f9d9c56f-fa48-4bb2-bbf5-a9f1705a1bd0 req-7185fd02-4e5c-4236-ad01-4de2fbe3ef10 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Lock "7d19d2c6-6de1-4096-99e4-24b4265b9c09-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  8 06:20:23 np0005475493 nova_compute[262220]: 2025-10-08 10:20:23.638 2 DEBUG oslo_concurrency.lockutils [req-f9d9c56f-fa48-4bb2-bbf5-a9f1705a1bd0 req-7185fd02-4e5c-4236-ad01-4de2fbe3ef10 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Lock "7d19d2c6-6de1-4096-99e4-24b4265b9c09-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  8 06:20:23 np0005475493 nova_compute[262220]: 2025-10-08 10:20:23.638 2 DEBUG nova.compute.manager [req-f9d9c56f-fa48-4bb2-bbf5-a9f1705a1bd0 req-7185fd02-4e5c-4236-ad01-4de2fbe3ef10 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] No waiting events found dispatching network-vif-plugged-29abf06b-1e1a-46cb-9cc1-7fa777795883 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  8 06:20:23 np0005475493 nova_compute[262220]: 2025-10-08 10:20:23.639 2 WARNING nova.compute.manager [req-f9d9c56f-fa48-4bb2-bbf5-a9f1705a1bd0 req-7185fd02-4e5c-4236-ad01-4de2fbe3ef10 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] Received unexpected event network-vif-plugged-29abf06b-1e1a-46cb-9cc1-7fa777795883 for instance with vm_state active and task_state None.
Oct  8 06:20:23 np0005475493 nova_compute[262220]: 2025-10-08 10:20:23.639 2 DEBUG nova.compute.manager [req-f9d9c56f-fa48-4bb2-bbf5-a9f1705a1bd0 req-7185fd02-4e5c-4236-ad01-4de2fbe3ef10 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] Received event network-vif-plugged-29abf06b-1e1a-46cb-9cc1-7fa777795883 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  8 06:20:23 np0005475493 nova_compute[262220]: 2025-10-08 10:20:23.639 2 DEBUG oslo_concurrency.lockutils [req-f9d9c56f-fa48-4bb2-bbf5-a9f1705a1bd0 req-7185fd02-4e5c-4236-ad01-4de2fbe3ef10 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Acquiring lock "7d19d2c6-6de1-4096-99e4-24b4265b9c09-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  8 06:20:23 np0005475493 nova_compute[262220]: 2025-10-08 10:20:23.639 2 DEBUG oslo_concurrency.lockutils [req-f9d9c56f-fa48-4bb2-bbf5-a9f1705a1bd0 req-7185fd02-4e5c-4236-ad01-4de2fbe3ef10 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Lock "7d19d2c6-6de1-4096-99e4-24b4265b9c09-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  8 06:20:23 np0005475493 nova_compute[262220]: 2025-10-08 10:20:23.639 2 DEBUG oslo_concurrency.lockutils [req-f9d9c56f-fa48-4bb2-bbf5-a9f1705a1bd0 req-7185fd02-4e5c-4236-ad01-4de2fbe3ef10 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Lock "7d19d2c6-6de1-4096-99e4-24b4265b9c09-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  8 06:20:23 np0005475493 nova_compute[262220]: 2025-10-08 10:20:23.640 2 DEBUG nova.compute.manager [req-f9d9c56f-fa48-4bb2-bbf5-a9f1705a1bd0 req-7185fd02-4e5c-4236-ad01-4de2fbe3ef10 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] No waiting events found dispatching network-vif-plugged-29abf06b-1e1a-46cb-9cc1-7fa777795883 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  8 06:20:23 np0005475493 nova_compute[262220]: 2025-10-08 10:20:23.640 2 WARNING nova.compute.manager [req-f9d9c56f-fa48-4bb2-bbf5-a9f1705a1bd0 req-7185fd02-4e5c-4236-ad01-4de2fbe3ef10 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] Received unexpected event network-vif-plugged-29abf06b-1e1a-46cb-9cc1-7fa777795883 for instance with vm_state active and task_state None.
Oct  8 06:20:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:20:24 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  8 06:20:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:20:24 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  8 06:20:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:20:24 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 06:20:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:20:24 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  8 06:20:24 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:20:24 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:20:24 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:20:24.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:20:24 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:20:25 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:20:25 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:20:25 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:20:25.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:20:25 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1075: 353 pgs: 353 active+clean; 121 MiB data, 325 MiB used, 60 GiB / 60 GiB avail; 90 KiB/s rd, 115 KiB/s wr, 48 op/s
Oct  8 06:20:25 np0005475493 nova_compute[262220]: 2025-10-08 10:20:25.684 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  8 06:20:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:20:25] "GET /metrics HTTP/1.1" 200 48475 "" "Prometheus/2.51.0"
Oct  8 06:20:25 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:20:25] "GET /metrics HTTP/1.1" 200 48475 "" "Prometheus/2.51.0"
Oct  8 06:20:26 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:20:26 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:20:26 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:20:26.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:20:27 np0005475493 nova_compute[262220]: 2025-10-08 10:20:27.063 2 DEBUG nova.compute.manager [req-d2768907-b7a3-4388-aa89-8465df05fc40 req-9a5d7aa7-f224-48c0-813b-c96c74dd3a83 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] Received event network-changed-29abf06b-1e1a-46cb-9cc1-7fa777795883 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  8 06:20:27 np0005475493 nova_compute[262220]: 2025-10-08 10:20:27.064 2 DEBUG nova.compute.manager [req-d2768907-b7a3-4388-aa89-8465df05fc40 req-9a5d7aa7-f224-48c0-813b-c96c74dd3a83 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] Refreshing instance network info cache due to event network-changed-29abf06b-1e1a-46cb-9cc1-7fa777795883. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct  8 06:20:27 np0005475493 nova_compute[262220]: 2025-10-08 10:20:27.064 2 DEBUG oslo_concurrency.lockutils [req-d2768907-b7a3-4388-aa89-8465df05fc40 req-9a5d7aa7-f224-48c0-813b-c96c74dd3a83 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Acquiring lock "refresh_cache-7d19d2c6-6de1-4096-99e4-24b4265b9c09" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  8 06:20:27 np0005475493 nova_compute[262220]: 2025-10-08 10:20:27.064 2 DEBUG oslo_concurrency.lockutils [req-d2768907-b7a3-4388-aa89-8465df05fc40 req-9a5d7aa7-f224-48c0-813b-c96c74dd3a83 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Acquired lock "refresh_cache-7d19d2c6-6de1-4096-99e4-24b4265b9c09" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  8 06:20:27 np0005475493 nova_compute[262220]: 2025-10-08 10:20:27.065 2 DEBUG nova.network.neutron [req-d2768907-b7a3-4388-aa89-8465df05fc40 req-9a5d7aa7-f224-48c0-813b-c96c74dd3a83 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] Refreshing network info cache for port 29abf06b-1e1a-46cb-9cc1-7fa777795883 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct  8 06:20:27 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:20:27 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:20:27 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:20:27.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:20:27 np0005475493 nova_compute[262220]: 2025-10-08 10:20:27.159 2 DEBUG oslo_concurrency.lockutils [None req-7e45c488-eafb-4589-be0d-fcc6c5026fca d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Acquiring lock "7d19d2c6-6de1-4096-99e4-24b4265b9c09" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  8 06:20:27 np0005475493 nova_compute[262220]: 2025-10-08 10:20:27.160 2 DEBUG oslo_concurrency.lockutils [None req-7e45c488-eafb-4589-be0d-fcc6c5026fca d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Lock "7d19d2c6-6de1-4096-99e4-24b4265b9c09" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  8 06:20:27 np0005475493 nova_compute[262220]: 2025-10-08 10:20:27.160 2 DEBUG oslo_concurrency.lockutils [None req-7e45c488-eafb-4589-be0d-fcc6c5026fca d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Acquiring lock "7d19d2c6-6de1-4096-99e4-24b4265b9c09-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  8 06:20:27 np0005475493 nova_compute[262220]: 2025-10-08 10:20:27.161 2 DEBUG oslo_concurrency.lockutils [None req-7e45c488-eafb-4589-be0d-fcc6c5026fca d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Lock "7d19d2c6-6de1-4096-99e4-24b4265b9c09-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  8 06:20:27 np0005475493 nova_compute[262220]: 2025-10-08 10:20:27.161 2 DEBUG oslo_concurrency.lockutils [None req-7e45c488-eafb-4589-be0d-fcc6c5026fca d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Lock "7d19d2c6-6de1-4096-99e4-24b4265b9c09-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  8 06:20:27 np0005475493 nova_compute[262220]: 2025-10-08 10:20:27.163 2 INFO nova.compute.manager [None req-7e45c488-eafb-4589-be0d-fcc6c5026fca d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] Terminating instance#033[00m
Oct  8 06:20:27 np0005475493 nova_compute[262220]: 2025-10-08 10:20:27.164 2 DEBUG nova.compute.manager [None req-7e45c488-eafb-4589-be0d-fcc6c5026fca d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  8 06:20:27 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:20:27.192Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:20:27 np0005475493 kernel: tap29abf06b-1e (unregistering): left promiscuous mode
Oct  8 06:20:27 np0005475493 NetworkManager[44872]: <info>  [1759918827.2178] device (tap29abf06b-1e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  8 06:20:27 np0005475493 ovn_controller[153187]: 2025-10-08T10:20:27Z|00065|binding|INFO|Releasing lport 29abf06b-1e1a-46cb-9cc1-7fa777795883 from this chassis (sb_readonly=0)
Oct  8 06:20:27 np0005475493 nova_compute[262220]: 2025-10-08 10:20:27.227 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:20:27 np0005475493 ovn_controller[153187]: 2025-10-08T10:20:27Z|00066|binding|INFO|Setting lport 29abf06b-1e1a-46cb-9cc1-7fa777795883 down in Southbound
Oct  8 06:20:27 np0005475493 ovn_controller[153187]: 2025-10-08T10:20:27Z|00067|binding|INFO|Removing iface tap29abf06b-1e ovn-installed in OVS
Oct  8 06:20:27 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:20:27.233 163175 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:00:0d:2d 10.100.0.8'], port_security=['fa:16:3e:00:0d:2d 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '7d19d2c6-6de1-4096-99e4-24b4265b9c09', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c18c7476-aaa8-4977-81b5-fb17e88446e2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0edb2c85b88b4b168a4b2e8c5ed4a05c', 'neutron:revision_number': '8', 'neutron:security_group_ids': '19e068da-96ae-4c4d-8c61-2ea91c3392b7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1ff1baa8-ffa0-48d3-9c93-32e63e4450d8, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f191f105a60>], logical_port=29abf06b-1e1a-46cb-9cc1-7fa777795883) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f191f105a60>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  8 06:20:27 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:20:27.234 163175 INFO neutron.agent.ovn.metadata.agent [-] Port 29abf06b-1e1a-46cb-9cc1-7fa777795883 in datapath c18c7476-aaa8-4977-81b5-fb17e88446e2 unbound from our chassis#033[00m
Oct  8 06:20:27 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:20:27.235 163175 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c18c7476-aaa8-4977-81b5-fb17e88446e2, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  8 06:20:27 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:20:27.236 267781 DEBUG oslo.privsep.daemon [-] privsep: reply[fe8b8c7a-f4a0-4a83-9aba-0ec67089aafe]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  8 06:20:27 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:20:27.237 163175 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-c18c7476-aaa8-4977-81b5-fb17e88446e2 namespace which is not needed anymore#033[00m
Oct  8 06:20:27 np0005475493 nova_compute[262220]: 2025-10-08 10:20:27.249 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:20:27 np0005475493 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d0000000b.scope: Deactivated successfully.
Oct  8 06:20:27 np0005475493 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d0000000b.scope: Consumed 14.435s CPU time.
Oct  8 06:20:27 np0005475493 systemd-machined[216030]: Machine qemu-3-instance-0000000b terminated.
Oct  8 06:20:27 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1076: 353 pgs: 353 active+clean; 121 MiB data, 325 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 20 KiB/s wr, 30 op/s
Oct  8 06:20:27 np0005475493 nova_compute[262220]: 2025-10-08 10:20:27.387 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:20:27 np0005475493 nova_compute[262220]: 2025-10-08 10:20:27.391 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:20:27 np0005475493 neutron-haproxy-ovnmeta-c18c7476-aaa8-4977-81b5-fb17e88446e2[281434]: [NOTICE]   (281438) : haproxy version is 2.8.14-c23fe91
Oct  8 06:20:27 np0005475493 neutron-haproxy-ovnmeta-c18c7476-aaa8-4977-81b5-fb17e88446e2[281434]: [NOTICE]   (281438) : path to executable is /usr/sbin/haproxy
Oct  8 06:20:27 np0005475493 neutron-haproxy-ovnmeta-c18c7476-aaa8-4977-81b5-fb17e88446e2[281434]: [WARNING]  (281438) : Exiting Master process...
Oct  8 06:20:27 np0005475493 neutron-haproxy-ovnmeta-c18c7476-aaa8-4977-81b5-fb17e88446e2[281434]: [ALERT]    (281438) : Current worker (281441) exited with code 143 (Terminated)
Oct  8 06:20:27 np0005475493 neutron-haproxy-ovnmeta-c18c7476-aaa8-4977-81b5-fb17e88446e2[281434]: [WARNING]  (281438) : All workers exited. Exiting... (0)
Oct  8 06:20:27 np0005475493 systemd[1]: libpod-7c14962a86f46dbbdab05db1187661162c285fa5905925d5090b257adb980bce.scope: Deactivated successfully.
Oct  8 06:20:27 np0005475493 nova_compute[262220]: 2025-10-08 10:20:27.407 2 INFO nova.virt.libvirt.driver [-] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] Instance destroyed successfully.#033[00m
Oct  8 06:20:27 np0005475493 nova_compute[262220]: 2025-10-08 10:20:27.407 2 DEBUG nova.objects.instance [None req-7e45c488-eafb-4589-be0d-fcc6c5026fca d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Lazy-loading 'resources' on Instance uuid 7d19d2c6-6de1-4096-99e4-24b4265b9c09 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  8 06:20:27 np0005475493 podman[282443]: 2025-10-08 10:20:27.409804259 +0000 UTC m=+0.074467685 container died 7c14962a86f46dbbdab05db1187661162c285fa5905925d5090b257adb980bce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-c18c7476-aaa8-4977-81b5-fb17e88446e2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Oct  8 06:20:27 np0005475493 nova_compute[262220]: 2025-10-08 10:20:27.411 2 DEBUG nova.compute.manager [req-1965d4c1-f1f9-476a-afe1-3645972e0680 req-51388a12-8dc3-441a-abdc-25405c00f300 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] Received event network-vif-unplugged-29abf06b-1e1a-46cb-9cc1-7fa777795883 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  8 06:20:27 np0005475493 nova_compute[262220]: 2025-10-08 10:20:27.411 2 DEBUG oslo_concurrency.lockutils [req-1965d4c1-f1f9-476a-afe1-3645972e0680 req-51388a12-8dc3-441a-abdc-25405c00f300 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Acquiring lock "7d19d2c6-6de1-4096-99e4-24b4265b9c09-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  8 06:20:27 np0005475493 nova_compute[262220]: 2025-10-08 10:20:27.411 2 DEBUG oslo_concurrency.lockutils [req-1965d4c1-f1f9-476a-afe1-3645972e0680 req-51388a12-8dc3-441a-abdc-25405c00f300 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Lock "7d19d2c6-6de1-4096-99e4-24b4265b9c09-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  8 06:20:27 np0005475493 nova_compute[262220]: 2025-10-08 10:20:27.411 2 DEBUG oslo_concurrency.lockutils [req-1965d4c1-f1f9-476a-afe1-3645972e0680 req-51388a12-8dc3-441a-abdc-25405c00f300 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Lock "7d19d2c6-6de1-4096-99e4-24b4265b9c09-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  8 06:20:27 np0005475493 nova_compute[262220]: 2025-10-08 10:20:27.411 2 DEBUG nova.compute.manager [req-1965d4c1-f1f9-476a-afe1-3645972e0680 req-51388a12-8dc3-441a-abdc-25405c00f300 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] No waiting events found dispatching network-vif-unplugged-29abf06b-1e1a-46cb-9cc1-7fa777795883 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  8 06:20:27 np0005475493 nova_compute[262220]: 2025-10-08 10:20:27.412 2 DEBUG nova.compute.manager [req-1965d4c1-f1f9-476a-afe1-3645972e0680 req-51388a12-8dc3-441a-abdc-25405c00f300 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] Received event network-vif-unplugged-29abf06b-1e1a-46cb-9cc1-7fa777795883 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  8 06:20:27 np0005475493 nova_compute[262220]: 2025-10-08 10:20:27.426 2 DEBUG nova.virt.libvirt.vif [None req-7e45c488-eafb-4589-be0d-fcc6c5026fca d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-08T10:19:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1442491120',display_name='tempest-TestNetworkBasicOps-server-1442491120',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1442491120',id=11,image_ref='e5994bac-385d-4cfe-962e-386aa0559983',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBA5zqA1Qj/FXMxdyzpBTW0ZXp5DxknDQcIVK3ARN25T6VayPziIvkKCLWAtPemraMv4byPsH7lpRR4PeiITQ6eibmU22T/5fhhxWj1Ai2d949LVQyVHFvTo1rGRRAeVdbw==',key_name='tempest-TestNetworkBasicOps-1126023314',keypairs=<?>,launch_index=0,launched_at=2025-10-08T10:19:38Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='0edb2c85b88b4b168a4b2e8c5ed4a05c',ramdisk_id='',reservation_id='r-zjf5kwx6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e5994bac-385d-4cfe-962e-386aa0559983',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-139500885',owner_user_name='tempest-TestNetworkBasicOps-139500885-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-08T10:19:38Z,user_data=None,user_id='d50b19166a7245e390a6e29682191263',uuid=7d19d2c6-6de1-4096-99e4-24b4265b9c09,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "29abf06b-1e1a-46cb-9cc1-7fa777795883", "address": "fa:16:3e:00:0d:2d", "network": {"id": "c18c7476-aaa8-4977-81b5-fb17e88446e2", "bridge": "br-int", "label": "tempest-network-smoke--1578716951", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0edb2c85b88b4b168a4b2e8c5ed4a05c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29abf06b-1e", "ovs_interfaceid": "29abf06b-1e1a-46cb-9cc1-7fa777795883", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  8 06:20:27 np0005475493 nova_compute[262220]: 2025-10-08 10:20:27.426 2 DEBUG nova.network.os_vif_util [None req-7e45c488-eafb-4589-be0d-fcc6c5026fca d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Converting VIF {"id": "29abf06b-1e1a-46cb-9cc1-7fa777795883", "address": "fa:16:3e:00:0d:2d", "network": {"id": "c18c7476-aaa8-4977-81b5-fb17e88446e2", "bridge": "br-int", "label": "tempest-network-smoke--1578716951", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0edb2c85b88b4b168a4b2e8c5ed4a05c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29abf06b-1e", "ovs_interfaceid": "29abf06b-1e1a-46cb-9cc1-7fa777795883", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  8 06:20:27 np0005475493 nova_compute[262220]: 2025-10-08 10:20:27.427 2 DEBUG nova.network.os_vif_util [None req-7e45c488-eafb-4589-be0d-fcc6c5026fca d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:00:0d:2d,bridge_name='br-int',has_traffic_filtering=True,id=29abf06b-1e1a-46cb-9cc1-7fa777795883,network=Network(c18c7476-aaa8-4977-81b5-fb17e88446e2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap29abf06b-1e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  8 06:20:27 np0005475493 nova_compute[262220]: 2025-10-08 10:20:27.427 2 DEBUG os_vif [None req-7e45c488-eafb-4589-be0d-fcc6c5026fca d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:00:0d:2d,bridge_name='br-int',has_traffic_filtering=True,id=29abf06b-1e1a-46cb-9cc1-7fa777795883,network=Network(c18c7476-aaa8-4977-81b5-fb17e88446e2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap29abf06b-1e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  8 06:20:27 np0005475493 nova_compute[262220]: 2025-10-08 10:20:27.429 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:20:27 np0005475493 nova_compute[262220]: 2025-10-08 10:20:27.429 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap29abf06b-1e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  8 06:20:27 np0005475493 nova_compute[262220]: 2025-10-08 10:20:27.432 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:20:27 np0005475493 nova_compute[262220]: 2025-10-08 10:20:27.434 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  8 06:20:27 np0005475493 nova_compute[262220]: 2025-10-08 10:20:27.437 2 INFO os_vif [None req-7e45c488-eafb-4589-be0d-fcc6c5026fca d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:00:0d:2d,bridge_name='br-int',has_traffic_filtering=True,id=29abf06b-1e1a-46cb-9cc1-7fa777795883,network=Network(c18c7476-aaa8-4977-81b5-fb17e88446e2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap29abf06b-1e')#033[00m
Oct  8 06:20:27 np0005475493 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7c14962a86f46dbbdab05db1187661162c285fa5905925d5090b257adb980bce-userdata-shm.mount: Deactivated successfully.
Oct  8 06:20:27 np0005475493 systemd[1]: var-lib-containers-storage-overlay-669b7976e7c613a7666c66b557e5e70955b0380381cfc69b3da6fa8e03ce9e5e-merged.mount: Deactivated successfully.
Oct  8 06:20:27 np0005475493 podman[282443]: 2025-10-08 10:20:27.460182696 +0000 UTC m=+0.124846132 container cleanup 7c14962a86f46dbbdab05db1187661162c285fa5905925d5090b257adb980bce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-c18c7476-aaa8-4977-81b5-fb17e88446e2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  8 06:20:27 np0005475493 systemd[1]: libpod-conmon-7c14962a86f46dbbdab05db1187661162c285fa5905925d5090b257adb980bce.scope: Deactivated successfully.
Oct  8 06:20:27 np0005475493 podman[282494]: 2025-10-08 10:20:27.525359601 +0000 UTC m=+0.045113838 container remove 7c14962a86f46dbbdab05db1187661162c285fa5905925d5090b257adb980bce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-c18c7476-aaa8-4977-81b5-fb17e88446e2, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  8 06:20:27 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:20:27.533 267781 DEBUG oslo.privsep.daemon [-] privsep: reply[c8782e29-0356-470b-a1b5-46bb3cc7f8d3]: (4, ('Wed Oct  8 10:20:27 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-c18c7476-aaa8-4977-81b5-fb17e88446e2 (7c14962a86f46dbbdab05db1187661162c285fa5905925d5090b257adb980bce)\n7c14962a86f46dbbdab05db1187661162c285fa5905925d5090b257adb980bce\nWed Oct  8 10:20:27 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-c18c7476-aaa8-4977-81b5-fb17e88446e2 (7c14962a86f46dbbdab05db1187661162c285fa5905925d5090b257adb980bce)\n7c14962a86f46dbbdab05db1187661162c285fa5905925d5090b257adb980bce\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  8 06:20:27 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:20:27.534 267781 DEBUG oslo.privsep.daemon [-] privsep: reply[59d5ecb5-e7e0-4802-9962-b13c7aa6c870]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  8 06:20:27 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:20:27.536 163175 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc18c7476-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  8 06:20:27 np0005475493 nova_compute[262220]: 2025-10-08 10:20:27.537 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:20:27 np0005475493 kernel: tapc18c7476-a0: left promiscuous mode
Oct  8 06:20:27 np0005475493 nova_compute[262220]: 2025-10-08 10:20:27.552 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:20:27 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:20:27.555 267781 DEBUG oslo.privsep.daemon [-] privsep: reply[cc2924e1-6223-44d3-b044-e0af098fad5b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  8 06:20:27 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:20:27.585 267781 DEBUG oslo.privsep.daemon [-] privsep: reply[fb36f929-910d-4191-b441-9e7a087cadff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  8 06:20:27 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:20:27.586 267781 DEBUG oslo.privsep.daemon [-] privsep: reply[f4839136-33bc-41fb-a15d-7558ac29da04]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  8 06:20:27 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:20:27.602 267781 DEBUG oslo.privsep.daemon [-] privsep: reply[83e6d160-a2ea-4dfb-aecc-b32873148039]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 473563, 'reachable_time': 16957, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 282534, 'error': None, 'target': 'ovnmeta-c18c7476-aaa8-4977-81b5-fb17e88446e2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  8 06:20:27 np0005475493 systemd[1]: run-netns-ovnmeta\x2dc18c7476\x2daaa8\x2d4977\x2d81b5\x2dfb17e88446e2.mount: Deactivated successfully.
Oct  8 06:20:27 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:20:27.608 163290 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-c18c7476-aaa8-4977-81b5-fb17e88446e2 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  8 06:20:27 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:20:27.608 163290 DEBUG oslo.privsep.daemon [-] privsep: reply[f8f94e5d-5710-425f-88f8-1e7a5966a985]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  8 06:20:27 np0005475493 podman[282513]: 2025-10-08 10:20:27.627927373 +0000 UTC m=+0.055120651 container health_status 96c17e36c3e57b043a764fc4d64e20610b8683e4881cc1857643eca7aeb37784 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, 
container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct  8 06:20:27 np0005475493 podman[282512]: 2025-10-08 10:20:27.627959434 +0000 UTC m=+0.057151506 container health_status 1ece418ccdf19a8019e8be8e665c938dc3c333817a211973536e2416f095c311 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  8 06:20:27 np0005475493 nova_compute[262220]: 2025-10-08 10:20:27.927 2 INFO nova.virt.libvirt.driver [None req-7e45c488-eafb-4589-be0d-fcc6c5026fca d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] Deleting instance files /var/lib/nova/instances/7d19d2c6-6de1-4096-99e4-24b4265b9c09_del#033[00m
Oct  8 06:20:27 np0005475493 nova_compute[262220]: 2025-10-08 10:20:27.927 2 INFO nova.virt.libvirt.driver [None req-7e45c488-eafb-4589-be0d-fcc6c5026fca d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] Deletion of /var/lib/nova/instances/7d19d2c6-6de1-4096-99e4-24b4265b9c09_del complete#033[00m
Oct  8 06:20:27 np0005475493 nova_compute[262220]: 2025-10-08 10:20:27.989 2 INFO nova.compute.manager [None req-7e45c488-eafb-4589-be0d-fcc6c5026fca d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] Took 0.82 seconds to destroy the instance on the hypervisor.#033[00m
Oct  8 06:20:27 np0005475493 nova_compute[262220]: 2025-10-08 10:20:27.991 2 DEBUG oslo.service.loopingcall [None req-7e45c488-eafb-4589-be0d-fcc6c5026fca d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  8 06:20:27 np0005475493 nova_compute[262220]: 2025-10-08 10:20:27.992 2 DEBUG nova.compute.manager [-] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  8 06:20:27 np0005475493 nova_compute[262220]: 2025-10-08 10:20:27.992 2 DEBUG nova.network.neutron [-] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  8 06:20:28 np0005475493 nova_compute[262220]: 2025-10-08 10:20:28.054 2 DEBUG nova.network.neutron [req-d2768907-b7a3-4388-aa89-8465df05fc40 req-9a5d7aa7-f224-48c0-813b-c96c74dd3a83 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] Updated VIF entry in instance network info cache for port 29abf06b-1e1a-46cb-9cc1-7fa777795883. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  8 06:20:28 np0005475493 nova_compute[262220]: 2025-10-08 10:20:28.055 2 DEBUG nova.network.neutron [req-d2768907-b7a3-4388-aa89-8465df05fc40 req-9a5d7aa7-f224-48c0-813b-c96c74dd3a83 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] Updating instance_info_cache with network_info: [{"id": "29abf06b-1e1a-46cb-9cc1-7fa777795883", "address": "fa:16:3e:00:0d:2d", "network": {"id": "c18c7476-aaa8-4977-81b5-fb17e88446e2", "bridge": "br-int", "label": "tempest-network-smoke--1578716951", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0edb2c85b88b4b168a4b2e8c5ed4a05c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29abf06b-1e", "ovs_interfaceid": "29abf06b-1e1a-46cb-9cc1-7fa777795883", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  8 06:20:28 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:20:28 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:20:28 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:20:28.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:20:28 np0005475493 nova_compute[262220]: 2025-10-08 10:20:28.131 2 DEBUG oslo_concurrency.lockutils [req-d2768907-b7a3-4388-aa89-8465df05fc40 req-9a5d7aa7-f224-48c0-813b-c96c74dd3a83 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Releasing lock "refresh_cache-7d19d2c6-6de1-4096-99e4-24b4265b9c09" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  8 06:20:28 np0005475493 nova_compute[262220]: 2025-10-08 10:20:28.415 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:20:28 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:20:28.848Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:20:29 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:20:29 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  8 06:20:29 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:20:29 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  8 06:20:29 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:20:29 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 06:20:29 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:20:29 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  8 06:20:29 np0005475493 nova_compute[262220]: 2025-10-08 10:20:29.061 2 DEBUG nova.network.neutron [-] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  8 06:20:29 np0005475493 nova_compute[262220]: 2025-10-08 10:20:29.078 2 INFO nova.compute.manager [-] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] Took 1.09 seconds to deallocate network for instance.#033[00m
Oct  8 06:20:29 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:20:29 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:20:29 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:20:29.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:20:29 np0005475493 nova_compute[262220]: 2025-10-08 10:20:29.136 2 DEBUG oslo_concurrency.lockutils [None req-7e45c488-eafb-4589-be0d-fcc6c5026fca d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  8 06:20:29 np0005475493 nova_compute[262220]: 2025-10-08 10:20:29.136 2 DEBUG oslo_concurrency.lockutils [None req-7e45c488-eafb-4589-be0d-fcc6c5026fca d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  8 06:20:29 np0005475493 nova_compute[262220]: 2025-10-08 10:20:29.157 2 DEBUG nova.compute.manager [req-09d8b5dc-3cb3-404f-8808-b8c3e57bc3f6 req-fc96b914-cbfe-42d3-8ee6-89fa9919a85f 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] Received event network-vif-deleted-29abf06b-1e1a-46cb-9cc1-7fa777795883 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  8 06:20:29 np0005475493 nova_compute[262220]: 2025-10-08 10:20:29.180 2 DEBUG oslo_concurrency.processutils [None req-7e45c488-eafb-4589-be0d-fcc6c5026fca d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  8 06:20:29 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1077: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 46 KiB/s rd, 21 KiB/s wr, 58 op/s
Oct  8 06:20:29 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:20:29 np0005475493 nova_compute[262220]: 2025-10-08 10:20:29.506 2 DEBUG nova.compute.manager [req-6aab1618-a591-42aa-8505-af4af28e607b req-60990b22-523f-44e9-8258-9ca8c473cfe6 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] Received event network-vif-plugged-29abf06b-1e1a-46cb-9cc1-7fa777795883 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  8 06:20:29 np0005475493 nova_compute[262220]: 2025-10-08 10:20:29.507 2 DEBUG oslo_concurrency.lockutils [req-6aab1618-a591-42aa-8505-af4af28e607b req-60990b22-523f-44e9-8258-9ca8c473cfe6 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Acquiring lock "7d19d2c6-6de1-4096-99e4-24b4265b9c09-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  8 06:20:29 np0005475493 nova_compute[262220]: 2025-10-08 10:20:29.507 2 DEBUG oslo_concurrency.lockutils [req-6aab1618-a591-42aa-8505-af4af28e607b req-60990b22-523f-44e9-8258-9ca8c473cfe6 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Lock "7d19d2c6-6de1-4096-99e4-24b4265b9c09-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  8 06:20:29 np0005475493 nova_compute[262220]: 2025-10-08 10:20:29.508 2 DEBUG oslo_concurrency.lockutils [req-6aab1618-a591-42aa-8505-af4af28e607b req-60990b22-523f-44e9-8258-9ca8c473cfe6 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Lock "7d19d2c6-6de1-4096-99e4-24b4265b9c09-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  8 06:20:29 np0005475493 nova_compute[262220]: 2025-10-08 10:20:29.508 2 DEBUG nova.compute.manager [req-6aab1618-a591-42aa-8505-af4af28e607b req-60990b22-523f-44e9-8258-9ca8c473cfe6 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] No waiting events found dispatching network-vif-plugged-29abf06b-1e1a-46cb-9cc1-7fa777795883 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  8 06:20:29 np0005475493 nova_compute[262220]: 2025-10-08 10:20:29.509 2 WARNING nova.compute.manager [req-6aab1618-a591-42aa-8505-af4af28e607b req-60990b22-523f-44e9-8258-9ca8c473cfe6 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] Received unexpected event network-vif-plugged-29abf06b-1e1a-46cb-9cc1-7fa777795883 for instance with vm_state deleted and task_state None.#033[00m
Oct  8 06:20:29 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct  8 06:20:29 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/287948722' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  8 06:20:29 np0005475493 nova_compute[262220]: 2025-10-08 10:20:29.678 2 DEBUG oslo_concurrency.processutils [None req-7e45c488-eafb-4589-be0d-fcc6c5026fca d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  8 06:20:29 np0005475493 nova_compute[262220]: 2025-10-08 10:20:29.685 2 DEBUG nova.compute.provider_tree [None req-7e45c488-eafb-4589-be0d-fcc6c5026fca d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Inventory has not changed in ProviderTree for provider: 62e4b021-d3ae-43f9-883d-805e2c7d21a2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  8 06:20:29 np0005475493 nova_compute[262220]: 2025-10-08 10:20:29.705 2 DEBUG nova.scheduler.client.report [None req-7e45c488-eafb-4589-be0d-fcc6c5026fca d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Inventory has not changed for provider 62e4b021-d3ae-43f9-883d-805e2c7d21a2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  8 06:20:29 np0005475493 nova_compute[262220]: 2025-10-08 10:20:29.758 2 DEBUG oslo_concurrency.lockutils [None req-7e45c488-eafb-4589-be0d-fcc6c5026fca d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.621s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  8 06:20:29 np0005475493 nova_compute[262220]: 2025-10-08 10:20:29.820 2 INFO nova.scheduler.client.report [None req-7e45c488-eafb-4589-be0d-fcc6c5026fca d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Deleted allocations for instance 7d19d2c6-6de1-4096-99e4-24b4265b9c09#033[00m
Oct  8 06:20:29 np0005475493 nova_compute[262220]: 2025-10-08 10:20:29.890 2 DEBUG oslo_concurrency.lockutils [None req-7e45c488-eafb-4589-be0d-fcc6c5026fca d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Lock "7d19d2c6-6de1-4096-99e4-24b4265b9c09" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.730s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  8 06:20:30 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:20:30 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:20:30 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:20:30.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:20:31 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:20:31 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 06:20:31 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:20:31.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 06:20:31 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1078: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 9.0 KiB/s wr, 57 op/s
Oct  8 06:20:32 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:20:32 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:20:32 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:20:32.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:20:32 np0005475493 nova_compute[262220]: 2025-10-08 10:20:32.469 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:20:32 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  8 06:20:32 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  8 06:20:32 np0005475493 nova_compute[262220]: 2025-10-08 10:20:32.967 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:20:33 np0005475493 nova_compute[262220]: 2025-10-08 10:20:33.045 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:20:33 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:20:33 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 06:20:33 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:20:33.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 06:20:33 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1079: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 9.0 KiB/s wr, 57 op/s
Oct  8 06:20:33 np0005475493 nova_compute[262220]: 2025-10-08 10:20:33.417 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:20:34 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:20:33 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  8 06:20:34 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:20:33 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  8 06:20:34 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:20:33 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 06:20:34 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:20:34 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  8 06:20:34 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:20:34 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:20:34 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:20:34.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:20:34 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:20:35 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:20:35 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:20:35 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:20:35.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:20:35 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1080: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 9.0 KiB/s wr, 57 op/s
Oct  8 06:20:35 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:20:35] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Oct  8 06:20:35 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:20:35] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Oct  8 06:20:36 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:20:36 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:20:36 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:20:36.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:20:37 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:20:37 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:20:37 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:20:37.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:20:37 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:20:37.193Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:20:37 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1081: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Oct  8 06:20:37 np0005475493 nova_compute[262220]: 2025-10-08 10:20:37.473 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:20:38 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:20:38 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:20:38 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:20:38.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:20:38 np0005475493 nova_compute[262220]: 2025-10-08 10:20:38.420 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:20:38 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:20:38.848Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct  8 06:20:38 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:20:38.849Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct  8 06:20:38 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:20:38.849Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:20:39 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:20:38 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  8 06:20:39 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:20:38 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  8 06:20:39 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:20:38 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 06:20:39 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:20:39 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  8 06:20:39 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:20:39 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:20:39 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:20:39.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:20:39 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1082: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 29 op/s
Oct  8 06:20:39 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:20:40 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:20:40 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:20:40 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:20:40.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:20:41 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:20:41 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:20:41 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:20:41.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:20:41 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1083: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  8 06:20:41 np0005475493 podman[282616]: 2025-10-08 10:20:41.93161239 +0000 UTC m=+0.089055687 container health_status 2ac58bf9d2c7421f24478941b0f23af12cc30298ac84467db16bf52ecb1157c3 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, io.buildah.version=1.41.3, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=iscsid)
Oct  8 06:20:42 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:20:42 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:20:42 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:20:42.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:20:42 np0005475493 nova_compute[262220]: 2025-10-08 10:20:42.406 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759918827.4045172, 7d19d2c6-6de1-4096-99e4-24b4265b9c09 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  8 06:20:42 np0005475493 nova_compute[262220]: 2025-10-08 10:20:42.406 2 INFO nova.compute.manager [-] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] VM Stopped (Lifecycle Event)#033[00m
Oct  8 06:20:42 np0005475493 nova_compute[262220]: 2025-10-08 10:20:42.426 2 DEBUG nova.compute.manager [None req-f4ce2563-ce14-4ff3-98d9-7722612ae4fa - - - - - -] [instance: 7d19d2c6-6de1-4096-99e4-24b4265b9c09] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  8 06:20:42 np0005475493 nova_compute[262220]: 2025-10-08 10:20:42.477 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:20:43 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:20:43 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 06:20:43 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:20:43.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 06:20:43 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1084: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  8 06:20:43 np0005475493 nova_compute[262220]: 2025-10-08 10:20:43.474 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:20:44 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:20:43 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  8 06:20:44 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:20:43 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  8 06:20:44 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:20:43 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 06:20:44 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:20:44 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  8 06:20:44 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:20:44 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:20:44 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:20:44.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:20:44 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:20:45 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:20:45 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:20:45 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:20:45.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:20:45 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1085: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  8 06:20:45 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:20:45] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Oct  8 06:20:45 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:20:45] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Oct  8 06:20:46 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:20:46 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:20:46 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:20:46.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:20:46 np0005475493 nova_compute[262220]: 2025-10-08 10:20:46.882 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:20:47 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:20:47 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:20:47 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:20:47.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:20:47 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:20:47.194Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:20:47 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1086: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  8 06:20:47 np0005475493 nova_compute[262220]: 2025-10-08 10:20:47.481 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:20:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] Optimize plan auto_2025-10-08_10:20:47
Oct  8 06:20:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  8 06:20:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] do_upmap
Oct  8 06:20:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] pools ['cephfs.cephfs.data', 'backups', '.mgr', 'vms', 'volumes', 'default.rgw.control', '.rgw.root', 'default.rgw.meta', '.nfs', 'cephfs.cephfs.meta', 'images', 'default.rgw.log']
Oct  8 06:20:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] prepared 0/10 upmap changes
Oct  8 06:20:47 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  8 06:20:47 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  8 06:20:47 np0005475493 nova_compute[262220]: 2025-10-08 10:20:47.886 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:20:47 np0005475493 nova_compute[262220]: 2025-10-08 10:20:47.886 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:20:47 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 06:20:47 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 06:20:48 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:20:48 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:20:48 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:20:48.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:20:48 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 06:20:48 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 06:20:48 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 06:20:48 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 06:20:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] _maybe_adjust
Oct  8 06:20:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:20:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  8 06:20:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:20:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  8 06:20:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:20:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 06:20:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:20:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 06:20:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:20:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Oct  8 06:20:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:20:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  8 06:20:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:20:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 06:20:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:20:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct  8 06:20:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:20:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  8 06:20:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:20:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 06:20:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:20:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  8 06:20:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:20:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  8 06:20:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  8 06:20:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  8 06:20:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  8 06:20:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  8 06:20:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  8 06:20:48 np0005475493 nova_compute[262220]: 2025-10-08 10:20:48.290 2 DEBUG oslo_concurrency.lockutils [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Acquiring lock "20ffb86b-b5ba-4818-82e4-14a755c48807" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  8 06:20:48 np0005475493 nova_compute[262220]: 2025-10-08 10:20:48.290 2 DEBUG oslo_concurrency.lockutils [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Lock "20ffb86b-b5ba-4818-82e4-14a755c48807" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  8 06:20:48 np0005475493 nova_compute[262220]: 2025-10-08 10:20:48.304 2 DEBUG nova.compute.manager [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: 20ffb86b-b5ba-4818-82e4-14a755c48807] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  8 06:20:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  8 06:20:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  8 06:20:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  8 06:20:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  8 06:20:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  8 06:20:48 np0005475493 nova_compute[262220]: 2025-10-08 10:20:48.365 2 DEBUG oslo_concurrency.lockutils [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  8 06:20:48 np0005475493 nova_compute[262220]: 2025-10-08 10:20:48.366 2 DEBUG oslo_concurrency.lockutils [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  8 06:20:48 np0005475493 nova_compute[262220]: 2025-10-08 10:20:48.371 2 DEBUG nova.virt.hardware [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  8 06:20:48 np0005475493 nova_compute[262220]: 2025-10-08 10:20:48.371 2 INFO nova.compute.claims [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: 20ffb86b-b5ba-4818-82e4-14a755c48807] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  8 06:20:48 np0005475493 nova_compute[262220]: 2025-10-08 10:20:48.462 2 DEBUG oslo_concurrency.processutils [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  8 06:20:48 np0005475493 nova_compute[262220]: 2025-10-08 10:20:48.483 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:20:48 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:20:48.850Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct  8 06:20:48 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:20:48.850Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct  8 06:20:48 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct  8 06:20:48 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4234880885' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  8 06:20:48 np0005475493 nova_compute[262220]: 2025-10-08 10:20:48.923 2 DEBUG oslo_concurrency.processutils [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  8 06:20:48 np0005475493 nova_compute[262220]: 2025-10-08 10:20:48.928 2 DEBUG nova.compute.provider_tree [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Inventory has not changed in ProviderTree for provider: 62e4b021-d3ae-43f9-883d-805e2c7d21a2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  8 06:20:48 np0005475493 nova_compute[262220]: 2025-10-08 10:20:48.977 2 DEBUG nova.scheduler.client.report [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Inventory has not changed for provider 62e4b021-d3ae-43f9-883d-805e2c7d21a2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  8 06:20:49 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:20:48 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  8 06:20:49 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:20:48 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  8 06:20:49 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:20:48 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 06:20:49 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:20:49 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  8 06:20:49 np0005475493 nova_compute[262220]: 2025-10-08 10:20:49.047 2 DEBUG oslo_concurrency.lockutils [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.681s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  8 06:20:49 np0005475493 nova_compute[262220]: 2025-10-08 10:20:49.047 2 DEBUG nova.compute.manager [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: 20ffb86b-b5ba-4818-82e4-14a755c48807] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  8 06:20:49 np0005475493 nova_compute[262220]: 2025-10-08 10:20:49.122 2 DEBUG nova.compute.manager [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: 20ffb86b-b5ba-4818-82e4-14a755c48807] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  8 06:20:49 np0005475493 nova_compute[262220]: 2025-10-08 10:20:49.123 2 DEBUG nova.network.neutron [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: 20ffb86b-b5ba-4818-82e4-14a755c48807] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  8 06:20:49 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:20:49 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 06:20:49 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:20:49.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 06:20:49 np0005475493 nova_compute[262220]: 2025-10-08 10:20:49.159 2 INFO nova.virt.libvirt.driver [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: 20ffb86b-b5ba-4818-82e4-14a755c48807] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  8 06:20:49 np0005475493 nova_compute[262220]: 2025-10-08 10:20:49.181 2 DEBUG nova.compute.manager [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: 20ffb86b-b5ba-4818-82e4-14a755c48807] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  8 06:20:49 np0005475493 nova_compute[262220]: 2025-10-08 10:20:49.283 2 DEBUG nova.policy [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'd50b19166a7245e390a6e29682191263', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '0edb2c85b88b4b168a4b2e8c5ed4a05c', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  8 06:20:49 np0005475493 nova_compute[262220]: 2025-10-08 10:20:49.288 2 DEBUG nova.compute.manager [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: 20ffb86b-b5ba-4818-82e4-14a755c48807] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  8 06:20:49 np0005475493 nova_compute[262220]: 2025-10-08 10:20:49.289 2 DEBUG nova.virt.libvirt.driver [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: 20ffb86b-b5ba-4818-82e4-14a755c48807] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  8 06:20:49 np0005475493 nova_compute[262220]: 2025-10-08 10:20:49.290 2 INFO nova.virt.libvirt.driver [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: 20ffb86b-b5ba-4818-82e4-14a755c48807] Creating image(s)#033[00m
Oct  8 06:20:49 np0005475493 nova_compute[262220]: 2025-10-08 10:20:49.320 2 DEBUG nova.storage.rbd_utils [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] rbd image 20ffb86b-b5ba-4818-82e4-14a755c48807_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  8 06:20:49 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1087: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  8 06:20:49 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:20:49 np0005475493 nova_compute[262220]: 2025-10-08 10:20:49.348 2 DEBUG nova.storage.rbd_utils [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] rbd image 20ffb86b-b5ba-4818-82e4-14a755c48807_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  8 06:20:49 np0005475493 nova_compute[262220]: 2025-10-08 10:20:49.379 2 DEBUG nova.storage.rbd_utils [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] rbd image 20ffb86b-b5ba-4818-82e4-14a755c48807_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  8 06:20:49 np0005475493 nova_compute[262220]: 2025-10-08 10:20:49.384 2 DEBUG oslo_concurrency.processutils [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/3cde70359534d4758cf71011630bd1fb14a90c92 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  8 06:20:49 np0005475493 nova_compute[262220]: 2025-10-08 10:20:49.451 2 DEBUG oslo_concurrency.processutils [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/3cde70359534d4758cf71011630bd1fb14a90c92 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  8 06:20:49 np0005475493 nova_compute[262220]: 2025-10-08 10:20:49.452 2 DEBUG oslo_concurrency.lockutils [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Acquiring lock "3cde70359534d4758cf71011630bd1fb14a90c92" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  8 06:20:49 np0005475493 nova_compute[262220]: 2025-10-08 10:20:49.453 2 DEBUG oslo_concurrency.lockutils [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Lock "3cde70359534d4758cf71011630bd1fb14a90c92" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  8 06:20:49 np0005475493 nova_compute[262220]: 2025-10-08 10:20:49.453 2 DEBUG oslo_concurrency.lockutils [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Lock "3cde70359534d4758cf71011630bd1fb14a90c92" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  8 06:20:49 np0005475493 nova_compute[262220]: 2025-10-08 10:20:49.482 2 DEBUG nova.storage.rbd_utils [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] rbd image 20ffb86b-b5ba-4818-82e4-14a755c48807_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  8 06:20:49 np0005475493 nova_compute[262220]: 2025-10-08 10:20:49.486 2 DEBUG oslo_concurrency.processutils [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/3cde70359534d4758cf71011630bd1fb14a90c92 20ffb86b-b5ba-4818-82e4-14a755c48807_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  8 06:20:49 np0005475493 nova_compute[262220]: 2025-10-08 10:20:49.788 2 DEBUG oslo_concurrency.processutils [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/3cde70359534d4758cf71011630bd1fb14a90c92 20ffb86b-b5ba-4818-82e4-14a755c48807_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.301s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  8 06:20:49 np0005475493 nova_compute[262220]: 2025-10-08 10:20:49.870 2 DEBUG nova.storage.rbd_utils [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] resizing rbd image 20ffb86b-b5ba-4818-82e4-14a755c48807_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  8 06:20:49 np0005475493 nova_compute[262220]: 2025-10-08 10:20:49.904 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:20:49 np0005475493 nova_compute[262220]: 2025-10-08 10:20:49.985 2 DEBUG nova.objects.instance [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Lazy-loading 'migration_context' on Instance uuid 20ffb86b-b5ba-4818-82e4-14a755c48807 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  8 06:20:50 np0005475493 nova_compute[262220]: 2025-10-08 10:20:50.025 2 DEBUG nova.virt.libvirt.driver [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: 20ffb86b-b5ba-4818-82e4-14a755c48807] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  8 06:20:50 np0005475493 nova_compute[262220]: 2025-10-08 10:20:50.025 2 DEBUG nova.virt.libvirt.driver [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: 20ffb86b-b5ba-4818-82e4-14a755c48807] Ensure instance console log exists: /var/lib/nova/instances/20ffb86b-b5ba-4818-82e4-14a755c48807/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  8 06:20:50 np0005475493 nova_compute[262220]: 2025-10-08 10:20:50.026 2 DEBUG oslo_concurrency.lockutils [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  8 06:20:50 np0005475493 nova_compute[262220]: 2025-10-08 10:20:50.026 2 DEBUG oslo_concurrency.lockutils [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  8 06:20:50 np0005475493 nova_compute[262220]: 2025-10-08 10:20:50.026 2 DEBUG oslo_concurrency.lockutils [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  8 06:20:50 np0005475493 nova_compute[262220]: 2025-10-08 10:20:50.068 2 DEBUG nova.network.neutron [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: 20ffb86b-b5ba-4818-82e4-14a755c48807] Successfully created port: 754d5578-d995-4502-af66-b164dfdf1189 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  8 06:20:50 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:20:50 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:20:50 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:20:50.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:20:50 np0005475493 nova_compute[262220]: 2025-10-08 10:20:50.882 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:20:50 np0005475493 nova_compute[262220]: 2025-10-08 10:20:50.886 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:20:50 np0005475493 nova_compute[262220]: 2025-10-08 10:20:50.886 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  8 06:20:50 np0005475493 nova_compute[262220]: 2025-10-08 10:20:50.887 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  8 06:20:50 np0005475493 nova_compute[262220]: 2025-10-08 10:20:50.912 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] [instance: 20ffb86b-b5ba-4818-82e4-14a755c48807] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Oct  8 06:20:50 np0005475493 nova_compute[262220]: 2025-10-08 10:20:50.913 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  8 06:20:50 np0005475493 nova_compute[262220]: 2025-10-08 10:20:50.913 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:20:50 np0005475493 nova_compute[262220]: 2025-10-08 10:20:50.913 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:20:50 np0005475493 nova_compute[262220]: 2025-10-08 10:20:50.959 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  8 06:20:50 np0005475493 nova_compute[262220]: 2025-10-08 10:20:50.960 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  8 06:20:50 np0005475493 nova_compute[262220]: 2025-10-08 10:20:50.960 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  8 06:20:50 np0005475493 nova_compute[262220]: 2025-10-08 10:20:50.961 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  8 06:20:50 np0005475493 nova_compute[262220]: 2025-10-08 10:20:50.961 2 DEBUG oslo_concurrency.processutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  8 06:20:51 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:20:51 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:20:51 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:20:51.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:20:51 np0005475493 nova_compute[262220]: 2025-10-08 10:20:51.152 2 DEBUG nova.network.neutron [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: 20ffb86b-b5ba-4818-82e4-14a755c48807] Successfully updated port: 754d5578-d995-4502-af66-b164dfdf1189 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  8 06:20:51 np0005475493 nova_compute[262220]: 2025-10-08 10:20:51.173 2 DEBUG oslo_concurrency.lockutils [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Acquiring lock "refresh_cache-20ffb86b-b5ba-4818-82e4-14a755c48807" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  8 06:20:51 np0005475493 nova_compute[262220]: 2025-10-08 10:20:51.174 2 DEBUG oslo_concurrency.lockutils [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Acquired lock "refresh_cache-20ffb86b-b5ba-4818-82e4-14a755c48807" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  8 06:20:51 np0005475493 nova_compute[262220]: 2025-10-08 10:20:51.174 2 DEBUG nova.network.neutron [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: 20ffb86b-b5ba-4818-82e4-14a755c48807] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  8 06:20:51 np0005475493 nova_compute[262220]: 2025-10-08 10:20:51.247 2 DEBUG nova.compute.manager [req-9e28075a-3aa8-48c9-9d35-5095a1b2805b req-d89bb926-5361-48c4-b3f2-e665275fa2b6 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: 20ffb86b-b5ba-4818-82e4-14a755c48807] Received event network-changed-754d5578-d995-4502-af66-b164dfdf1189 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  8 06:20:51 np0005475493 nova_compute[262220]: 2025-10-08 10:20:51.248 2 DEBUG nova.compute.manager [req-9e28075a-3aa8-48c9-9d35-5095a1b2805b req-d89bb926-5361-48c4-b3f2-e665275fa2b6 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: 20ffb86b-b5ba-4818-82e4-14a755c48807] Refreshing instance network info cache due to event network-changed-754d5578-d995-4502-af66-b164dfdf1189. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  8 06:20:51 np0005475493 nova_compute[262220]: 2025-10-08 10:20:51.248 2 DEBUG oslo_concurrency.lockutils [req-9e28075a-3aa8-48c9-9d35-5095a1b2805b req-d89bb926-5361-48c4-b3f2-e665275fa2b6 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Acquiring lock "refresh_cache-20ffb86b-b5ba-4818-82e4-14a755c48807" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  8 06:20:51 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1088: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  8 06:20:51 np0005475493 nova_compute[262220]: 2025-10-08 10:20:51.360 2 DEBUG nova.network.neutron [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: 20ffb86b-b5ba-4818-82e4-14a755c48807] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  8 06:20:51 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct  8 06:20:51 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2859324008' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  8 06:20:51 np0005475493 nova_compute[262220]: 2025-10-08 10:20:51.406 2 DEBUG oslo_concurrency.processutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  8 06:20:51 np0005475493 nova_compute[262220]: 2025-10-08 10:20:51.582 2 WARNING nova.virt.libvirt.driver [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  8 06:20:51 np0005475493 nova_compute[262220]: 2025-10-08 10:20:51.583 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4536MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  8 06:20:51 np0005475493 nova_compute[262220]: 2025-10-08 10:20:51.584 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  8 06:20:51 np0005475493 nova_compute[262220]: 2025-10-08 10:20:51.584 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  8 06:20:51 np0005475493 nova_compute[262220]: 2025-10-08 10:20:51.677 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Instance 20ffb86b-b5ba-4818-82e4-14a755c48807 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  8 06:20:51 np0005475493 nova_compute[262220]: 2025-10-08 10:20:51.678 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  8 06:20:51 np0005475493 nova_compute[262220]: 2025-10-08 10:20:51.678 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  8 06:20:51 np0005475493 nova_compute[262220]: 2025-10-08 10:20:51.705 2 DEBUG oslo_concurrency.processutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  8 06:20:52 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:20:52 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:20:52 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:20:52.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:20:52 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct  8 06:20:52 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2913649116' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  8 06:20:52 np0005475493 nova_compute[262220]: 2025-10-08 10:20:52.180 2 DEBUG oslo_concurrency.processutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  8 06:20:52 np0005475493 nova_compute[262220]: 2025-10-08 10:20:52.188 2 DEBUG nova.compute.provider_tree [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Inventory has not changed in ProviderTree for provider: 62e4b021-d3ae-43f9-883d-805e2c7d21a2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  8 06:20:52 np0005475493 nova_compute[262220]: 2025-10-08 10:20:52.217 2 DEBUG nova.scheduler.client.report [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Inventory has not changed for provider 62e4b021-d3ae-43f9-883d-805e2c7d21a2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  8 06:20:52 np0005475493 nova_compute[262220]: 2025-10-08 10:20:52.370 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  8 06:20:52 np0005475493 nova_compute[262220]: 2025-10-08 10:20:52.370 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.786s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  8 06:20:52 np0005475493 nova_compute[262220]: 2025-10-08 10:20:52.485 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:20:52 np0005475493 podman[282906]: 2025-10-08 10:20:52.920243651 +0000 UTC m=+0.081030497 container health_status 750e81e9592502f2fa35d047c8ae9bd75eb38ed415148db558454f5281397e7f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible)
Oct  8 06:20:53 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:20:53 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:20:53 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:20:53.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:20:53 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1089: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  8 06:20:53 np0005475493 nova_compute[262220]: 2025-10-08 10:20:53.344 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:20:53 np0005475493 nova_compute[262220]: 2025-10-08 10:20:53.344 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  8 06:20:53 np0005475493 nova_compute[262220]: 2025-10-08 10:20:53.478 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:20:53 np0005475493 nova_compute[262220]: 2025-10-08 10:20:53.679 2 DEBUG nova.network.neutron [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: 20ffb86b-b5ba-4818-82e4-14a755c48807] Updating instance_info_cache with network_info: [{"id": "754d5578-d995-4502-af66-b164dfdf1189", "address": "fa:16:3e:5d:e9:f4", "network": {"id": "84428682-9eff-4658-a105-8c0d1de9c87f", "bridge": "br-int", "label": "tempest-network-smoke--1945963860", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0edb2c85b88b4b168a4b2e8c5ed4a05c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap754d5578-d9", "ovs_interfaceid": "754d5578-d995-4502-af66-b164dfdf1189", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  8 06:20:53 np0005475493 nova_compute[262220]: 2025-10-08 10:20:53.757 2 DEBUG oslo_concurrency.lockutils [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Releasing lock "refresh_cache-20ffb86b-b5ba-4818-82e4-14a755c48807" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  8 06:20:53 np0005475493 nova_compute[262220]: 2025-10-08 10:20:53.758 2 DEBUG nova.compute.manager [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: 20ffb86b-b5ba-4818-82e4-14a755c48807] Instance network_info: |[{"id": "754d5578-d995-4502-af66-b164dfdf1189", "address": "fa:16:3e:5d:e9:f4", "network": {"id": "84428682-9eff-4658-a105-8c0d1de9c87f", "bridge": "br-int", "label": "tempest-network-smoke--1945963860", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0edb2c85b88b4b168a4b2e8c5ed4a05c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap754d5578-d9", "ovs_interfaceid": "754d5578-d995-4502-af66-b164dfdf1189", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  8 06:20:53 np0005475493 nova_compute[262220]: 2025-10-08 10:20:53.758 2 DEBUG oslo_concurrency.lockutils [req-9e28075a-3aa8-48c9-9d35-5095a1b2805b req-d89bb926-5361-48c4-b3f2-e665275fa2b6 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Acquired lock "refresh_cache-20ffb86b-b5ba-4818-82e4-14a755c48807" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  8 06:20:53 np0005475493 nova_compute[262220]: 2025-10-08 10:20:53.758 2 DEBUG nova.network.neutron [req-9e28075a-3aa8-48c9-9d35-5095a1b2805b req-d89bb926-5361-48c4-b3f2-e665275fa2b6 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: 20ffb86b-b5ba-4818-82e4-14a755c48807] Refreshing network info cache for port 754d5578-d995-4502-af66-b164dfdf1189 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  8 06:20:53 np0005475493 nova_compute[262220]: 2025-10-08 10:20:53.760 2 DEBUG nova.virt.libvirt.driver [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: 20ffb86b-b5ba-4818-82e4-14a755c48807] Start _get_guest_xml network_info=[{"id": "754d5578-d995-4502-af66-b164dfdf1189", "address": "fa:16:3e:5d:e9:f4", "network": {"id": "84428682-9eff-4658-a105-8c0d1de9c87f", "bridge": "br-int", "label": "tempest-network-smoke--1945963860", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0edb2c85b88b4b168a4b2e8c5ed4a05c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap754d5578-d9", "ovs_interfaceid": "754d5578-d995-4502-af66-b164dfdf1189", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-08T10:08:49Z,direct_url=<?>,disk_format='qcow2',id=e5994bac-385d-4cfe-962e-386aa0559983,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='9bebada0871a4efa9df99c6beff34c13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-08T10:08:53Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'encryption_secret_uuid': None, 'encryption_format': None, 'device_name': '/dev/vda', 'disk_bus': 'virtio', 'boot_index': 0, 'encrypted': False, 'encryption_options': None, 'device_type': 'disk', 'size': 0, 'image_id': 'e5994bac-385d-4cfe-962e-386aa0559983'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  8 06:20:53 np0005475493 nova_compute[262220]: 2025-10-08 10:20:53.764 2 WARNING nova.virt.libvirt.driver [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  8 06:20:53 np0005475493 nova_compute[262220]: 2025-10-08 10:20:53.769 2 DEBUG nova.virt.libvirt.host [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  8 06:20:53 np0005475493 nova_compute[262220]: 2025-10-08 10:20:53.770 2 DEBUG nova.virt.libvirt.host [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  8 06:20:53 np0005475493 nova_compute[262220]: 2025-10-08 10:20:53.773 2 DEBUG nova.virt.libvirt.host [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  8 06:20:53 np0005475493 nova_compute[262220]: 2025-10-08 10:20:53.774 2 DEBUG nova.virt.libvirt.host [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  8 06:20:53 np0005475493 nova_compute[262220]: 2025-10-08 10:20:53.774 2 DEBUG nova.virt.libvirt.driver [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  8 06:20:53 np0005475493 nova_compute[262220]: 2025-10-08 10:20:53.775 2 DEBUG nova.virt.hardware [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-08T10:08:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='461f98d6-ae65-4f86-8ae2-cc3cfaea2a46',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-08T10:08:49Z,direct_url=<?>,disk_format='qcow2',id=e5994bac-385d-4cfe-962e-386aa0559983,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='9bebada0871a4efa9df99c6beff34c13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-08T10:08:53Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  8 06:20:53 np0005475493 nova_compute[262220]: 2025-10-08 10:20:53.775 2 DEBUG nova.virt.hardware [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  8 06:20:53 np0005475493 nova_compute[262220]: 2025-10-08 10:20:53.776 2 DEBUG nova.virt.hardware [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  8 06:20:53 np0005475493 nova_compute[262220]: 2025-10-08 10:20:53.776 2 DEBUG nova.virt.hardware [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  8 06:20:53 np0005475493 nova_compute[262220]: 2025-10-08 10:20:53.776 2 DEBUG nova.virt.hardware [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  8 06:20:53 np0005475493 nova_compute[262220]: 2025-10-08 10:20:53.776 2 DEBUG nova.virt.hardware [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  8 06:20:53 np0005475493 nova_compute[262220]: 2025-10-08 10:20:53.777 2 DEBUG nova.virt.hardware [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  8 06:20:53 np0005475493 nova_compute[262220]: 2025-10-08 10:20:53.777 2 DEBUG nova.virt.hardware [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  8 06:20:53 np0005475493 nova_compute[262220]: 2025-10-08 10:20:53.778 2 DEBUG nova.virt.hardware [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  8 06:20:53 np0005475493 nova_compute[262220]: 2025-10-08 10:20:53.778 2 DEBUG nova.virt.hardware [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  8 06:20:53 np0005475493 nova_compute[262220]: 2025-10-08 10:20:53.779 2 DEBUG nova.virt.hardware [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  8 06:20:53 np0005475493 nova_compute[262220]: 2025-10-08 10:20:53.782 2 DEBUG oslo_concurrency.processutils [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  8 06:20:54 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:20:53 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  8 06:20:54 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:20:53 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  8 06:20:54 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:20:53 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 06:20:54 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:20:54 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  8 06:20:54 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:20:54 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:20:54 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:20:54.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:20:54 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Oct  8 06:20:54 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1935607548' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  8 06:20:54 np0005475493 nova_compute[262220]: 2025-10-08 10:20:54.250 2 DEBUG oslo_concurrency.processutils [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  8 06:20:54 np0005475493 nova_compute[262220]: 2025-10-08 10:20:54.277 2 DEBUG nova.storage.rbd_utils [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] rbd image 20ffb86b-b5ba-4818-82e4-14a755c48807_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  8 06:20:54 np0005475493 nova_compute[262220]: 2025-10-08 10:20:54.281 2 DEBUG oslo_concurrency.processutils [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  8 06:20:54 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:20:54 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Oct  8 06:20:54 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/838441368' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  8 06:20:54 np0005475493 nova_compute[262220]: 2025-10-08 10:20:54.708 2 DEBUG oslo_concurrency.processutils [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  8 06:20:54 np0005475493 nova_compute[262220]: 2025-10-08 10:20:54.709 2 DEBUG nova.virt.libvirt.vif [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-08T10:20:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-451384508',display_name='tempest-TestNetworkBasicOps-server-451384508',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-451384508',id=13,image_ref='e5994bac-385d-4cfe-962e-386aa0559983',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKzAAbR+LebFHZ4MQpbXVINvQrQE4iZi3jhjlRa4bUuBuh7BAgqwE3gXNZho6NGF97w7AAO52PK7tmiXY23liBZwBI0PDfy6ztl7vXddFfJ7MBnkOiMny5dlb5dxWiMeog==',key_name='tempest-TestNetworkBasicOps-1706390229',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0edb2c85b88b4b168a4b2e8c5ed4a05c',ramdisk_id='',reservation_id='r-mat2tuft',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e5994bac-385d-4cfe-962e-386aa0559983',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-139500885',owner_user_name='tempest-TestNetworkBasicOps-139500885-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-08T10:20:49Z,user_data=None,user_id='d50b19166a7245e390a6e29682191263',uuid=20ffb86b-b5ba-4818-82e4-14a755c48807,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "754d5578-d995-4502-af66-b164dfdf1189", "address": "fa:16:3e:5d:e9:f4", "network": {"id": "84428682-9eff-4658-a105-8c0d1de9c87f", "bridge": "br-int", "label": "tempest-network-smoke--1945963860", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "0edb2c85b88b4b168a4b2e8c5ed4a05c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap754d5578-d9", "ovs_interfaceid": "754d5578-d995-4502-af66-b164dfdf1189", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  8 06:20:54 np0005475493 nova_compute[262220]: 2025-10-08 10:20:54.710 2 DEBUG nova.network.os_vif_util [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Converting VIF {"id": "754d5578-d995-4502-af66-b164dfdf1189", "address": "fa:16:3e:5d:e9:f4", "network": {"id": "84428682-9eff-4658-a105-8c0d1de9c87f", "bridge": "br-int", "label": "tempest-network-smoke--1945963860", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0edb2c85b88b4b168a4b2e8c5ed4a05c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap754d5578-d9", "ovs_interfaceid": "754d5578-d995-4502-af66-b164dfdf1189", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  8 06:20:54 np0005475493 nova_compute[262220]: 2025-10-08 10:20:54.711 2 DEBUG nova.network.os_vif_util [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5d:e9:f4,bridge_name='br-int',has_traffic_filtering=True,id=754d5578-d995-4502-af66-b164dfdf1189,network=Network(84428682-9eff-4658-a105-8c0d1de9c87f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap754d5578-d9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  8 06:20:54 np0005475493 nova_compute[262220]: 2025-10-08 10:20:54.712 2 DEBUG nova.objects.instance [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Lazy-loading 'pci_devices' on Instance uuid 20ffb86b-b5ba-4818-82e4-14a755c48807 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  8 06:20:54 np0005475493 nova_compute[262220]: 2025-10-08 10:20:54.747 2 DEBUG nova.virt.libvirt.driver [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: 20ffb86b-b5ba-4818-82e4-14a755c48807] End _get_guest_xml xml=<domain type="kvm">
Oct  8 06:20:54 np0005475493 nova_compute[262220]:  <uuid>20ffb86b-b5ba-4818-82e4-14a755c48807</uuid>
Oct  8 06:20:54 np0005475493 nova_compute[262220]:  <name>instance-0000000d</name>
Oct  8 06:20:54 np0005475493 nova_compute[262220]:  <memory>131072</memory>
Oct  8 06:20:54 np0005475493 nova_compute[262220]:  <vcpu>1</vcpu>
Oct  8 06:20:54 np0005475493 nova_compute[262220]:  <metadata>
Oct  8 06:20:54 np0005475493 nova_compute[262220]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  8 06:20:54 np0005475493 nova_compute[262220]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  8 06:20:54 np0005475493 nova_compute[262220]:      <nova:name>tempest-TestNetworkBasicOps-server-451384508</nova:name>
Oct  8 06:20:54 np0005475493 nova_compute[262220]:      <nova:creationTime>2025-10-08 10:20:53</nova:creationTime>
Oct  8 06:20:54 np0005475493 nova_compute[262220]:      <nova:flavor name="m1.nano">
Oct  8 06:20:54 np0005475493 nova_compute[262220]:        <nova:memory>128</nova:memory>
Oct  8 06:20:54 np0005475493 nova_compute[262220]:        <nova:disk>1</nova:disk>
Oct  8 06:20:54 np0005475493 nova_compute[262220]:        <nova:swap>0</nova:swap>
Oct  8 06:20:54 np0005475493 nova_compute[262220]:        <nova:ephemeral>0</nova:ephemeral>
Oct  8 06:20:54 np0005475493 nova_compute[262220]:        <nova:vcpus>1</nova:vcpus>
Oct  8 06:20:54 np0005475493 nova_compute[262220]:      </nova:flavor>
Oct  8 06:20:54 np0005475493 nova_compute[262220]:      <nova:owner>
Oct  8 06:20:54 np0005475493 nova_compute[262220]:        <nova:user uuid="d50b19166a7245e390a6e29682191263">tempest-TestNetworkBasicOps-139500885-project-member</nova:user>
Oct  8 06:20:54 np0005475493 nova_compute[262220]:        <nova:project uuid="0edb2c85b88b4b168a4b2e8c5ed4a05c">tempest-TestNetworkBasicOps-139500885</nova:project>
Oct  8 06:20:54 np0005475493 nova_compute[262220]:      </nova:owner>
Oct  8 06:20:54 np0005475493 nova_compute[262220]:      <nova:root type="image" uuid="e5994bac-385d-4cfe-962e-386aa0559983"/>
Oct  8 06:20:54 np0005475493 nova_compute[262220]:      <nova:ports>
Oct  8 06:20:54 np0005475493 nova_compute[262220]:        <nova:port uuid="754d5578-d995-4502-af66-b164dfdf1189">
Oct  8 06:20:54 np0005475493 nova_compute[262220]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct  8 06:20:54 np0005475493 nova_compute[262220]:        </nova:port>
Oct  8 06:20:54 np0005475493 nova_compute[262220]:      </nova:ports>
Oct  8 06:20:54 np0005475493 nova_compute[262220]:    </nova:instance>
Oct  8 06:20:54 np0005475493 nova_compute[262220]:  </metadata>
Oct  8 06:20:54 np0005475493 nova_compute[262220]:  <sysinfo type="smbios">
Oct  8 06:20:54 np0005475493 nova_compute[262220]:    <system>
Oct  8 06:20:54 np0005475493 nova_compute[262220]:      <entry name="manufacturer">RDO</entry>
Oct  8 06:20:54 np0005475493 nova_compute[262220]:      <entry name="product">OpenStack Compute</entry>
Oct  8 06:20:54 np0005475493 nova_compute[262220]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  8 06:20:54 np0005475493 nova_compute[262220]:      <entry name="serial">20ffb86b-b5ba-4818-82e4-14a755c48807</entry>
Oct  8 06:20:54 np0005475493 nova_compute[262220]:      <entry name="uuid">20ffb86b-b5ba-4818-82e4-14a755c48807</entry>
Oct  8 06:20:54 np0005475493 nova_compute[262220]:      <entry name="family">Virtual Machine</entry>
Oct  8 06:20:54 np0005475493 nova_compute[262220]:    </system>
Oct  8 06:20:54 np0005475493 nova_compute[262220]:  </sysinfo>
Oct  8 06:20:54 np0005475493 nova_compute[262220]:  <os>
Oct  8 06:20:54 np0005475493 nova_compute[262220]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  8 06:20:54 np0005475493 nova_compute[262220]:    <boot dev="hd"/>
Oct  8 06:20:54 np0005475493 nova_compute[262220]:    <smbios mode="sysinfo"/>
Oct  8 06:20:54 np0005475493 nova_compute[262220]:  </os>
Oct  8 06:20:54 np0005475493 nova_compute[262220]:  <features>
Oct  8 06:20:54 np0005475493 nova_compute[262220]:    <acpi/>
Oct  8 06:20:54 np0005475493 nova_compute[262220]:    <apic/>
Oct  8 06:20:54 np0005475493 nova_compute[262220]:    <vmcoreinfo/>
Oct  8 06:20:54 np0005475493 nova_compute[262220]:  </features>
Oct  8 06:20:54 np0005475493 nova_compute[262220]:  <clock offset="utc">
Oct  8 06:20:54 np0005475493 nova_compute[262220]:    <timer name="pit" tickpolicy="delay"/>
Oct  8 06:20:54 np0005475493 nova_compute[262220]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  8 06:20:54 np0005475493 nova_compute[262220]:    <timer name="hpet" present="no"/>
Oct  8 06:20:54 np0005475493 nova_compute[262220]:  </clock>
Oct  8 06:20:54 np0005475493 nova_compute[262220]:  <cpu mode="host-model" match="exact">
Oct  8 06:20:54 np0005475493 nova_compute[262220]:    <topology sockets="1" cores="1" threads="1"/>
Oct  8 06:20:54 np0005475493 nova_compute[262220]:  </cpu>
Oct  8 06:20:54 np0005475493 nova_compute[262220]:  <devices>
Oct  8 06:20:54 np0005475493 nova_compute[262220]:    <disk type="network" device="disk">
Oct  8 06:20:54 np0005475493 nova_compute[262220]:      <driver type="raw" cache="none"/>
Oct  8 06:20:54 np0005475493 nova_compute[262220]:      <source protocol="rbd" name="vms/20ffb86b-b5ba-4818-82e4-14a755c48807_disk">
Oct  8 06:20:54 np0005475493 nova_compute[262220]:        <host name="192.168.122.100" port="6789"/>
Oct  8 06:20:54 np0005475493 nova_compute[262220]:        <host name="192.168.122.102" port="6789"/>
Oct  8 06:20:54 np0005475493 nova_compute[262220]:        <host name="192.168.122.101" port="6789"/>
Oct  8 06:20:54 np0005475493 nova_compute[262220]:      </source>
Oct  8 06:20:54 np0005475493 nova_compute[262220]:      <auth username="openstack">
Oct  8 06:20:54 np0005475493 nova_compute[262220]:        <secret type="ceph" uuid="787292cc-8154-50c4-9e00-e9be3e817149"/>
Oct  8 06:20:54 np0005475493 nova_compute[262220]:      </auth>
Oct  8 06:20:54 np0005475493 nova_compute[262220]:      <target dev="vda" bus="virtio"/>
Oct  8 06:20:54 np0005475493 nova_compute[262220]:    </disk>
Oct  8 06:20:54 np0005475493 nova_compute[262220]:    <disk type="network" device="cdrom">
Oct  8 06:20:54 np0005475493 nova_compute[262220]:      <driver type="raw" cache="none"/>
Oct  8 06:20:54 np0005475493 nova_compute[262220]:      <source protocol="rbd" name="vms/20ffb86b-b5ba-4818-82e4-14a755c48807_disk.config">
Oct  8 06:20:54 np0005475493 nova_compute[262220]:        <host name="192.168.122.100" port="6789"/>
Oct  8 06:20:54 np0005475493 nova_compute[262220]:        <host name="192.168.122.102" port="6789"/>
Oct  8 06:20:54 np0005475493 nova_compute[262220]:        <host name="192.168.122.101" port="6789"/>
Oct  8 06:20:54 np0005475493 nova_compute[262220]:      </source>
Oct  8 06:20:54 np0005475493 nova_compute[262220]:      <auth username="openstack">
Oct  8 06:20:54 np0005475493 nova_compute[262220]:        <secret type="ceph" uuid="787292cc-8154-50c4-9e00-e9be3e817149"/>
Oct  8 06:20:54 np0005475493 nova_compute[262220]:      </auth>
Oct  8 06:20:54 np0005475493 nova_compute[262220]:      <target dev="sda" bus="sata"/>
Oct  8 06:20:54 np0005475493 nova_compute[262220]:    </disk>
Oct  8 06:20:54 np0005475493 nova_compute[262220]:    <interface type="ethernet">
Oct  8 06:20:54 np0005475493 nova_compute[262220]:      <mac address="fa:16:3e:5d:e9:f4"/>
Oct  8 06:20:54 np0005475493 nova_compute[262220]:      <model type="virtio"/>
Oct  8 06:20:54 np0005475493 nova_compute[262220]:      <driver name="vhost" rx_queue_size="512"/>
Oct  8 06:20:54 np0005475493 nova_compute[262220]:      <mtu size="1442"/>
Oct  8 06:20:54 np0005475493 nova_compute[262220]:      <target dev="tap754d5578-d9"/>
Oct  8 06:20:54 np0005475493 nova_compute[262220]:    </interface>
Oct  8 06:20:54 np0005475493 nova_compute[262220]:    <serial type="pty">
Oct  8 06:20:54 np0005475493 nova_compute[262220]:      <log file="/var/lib/nova/instances/20ffb86b-b5ba-4818-82e4-14a755c48807/console.log" append="off"/>
Oct  8 06:20:54 np0005475493 nova_compute[262220]:    </serial>
Oct  8 06:20:54 np0005475493 nova_compute[262220]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  8 06:20:54 np0005475493 nova_compute[262220]:    <video>
Oct  8 06:20:54 np0005475493 nova_compute[262220]:      <model type="virtio"/>
Oct  8 06:20:54 np0005475493 nova_compute[262220]:    </video>
Oct  8 06:20:54 np0005475493 nova_compute[262220]:    <input type="tablet" bus="usb"/>
Oct  8 06:20:54 np0005475493 nova_compute[262220]:    <rng model="virtio">
Oct  8 06:20:54 np0005475493 nova_compute[262220]:      <backend model="random">/dev/urandom</backend>
Oct  8 06:20:54 np0005475493 nova_compute[262220]:    </rng>
Oct  8 06:20:54 np0005475493 nova_compute[262220]:    <controller type="pci" model="pcie-root"/>
Oct  8 06:20:54 np0005475493 nova_compute[262220]:    <controller type="pci" model="pcie-root-port"/>
Oct  8 06:20:54 np0005475493 nova_compute[262220]:    <controller type="pci" model="pcie-root-port"/>
Oct  8 06:20:54 np0005475493 nova_compute[262220]:    <controller type="pci" model="pcie-root-port"/>
Oct  8 06:20:54 np0005475493 nova_compute[262220]:    <controller type="pci" model="pcie-root-port"/>
Oct  8 06:20:54 np0005475493 nova_compute[262220]:    <controller type="pci" model="pcie-root-port"/>
Oct  8 06:20:54 np0005475493 nova_compute[262220]:    <controller type="pci" model="pcie-root-port"/>
Oct  8 06:20:54 np0005475493 nova_compute[262220]:    <controller type="pci" model="pcie-root-port"/>
Oct  8 06:20:54 np0005475493 nova_compute[262220]:    <controller type="pci" model="pcie-root-port"/>
Oct  8 06:20:54 np0005475493 nova_compute[262220]:    <controller type="pci" model="pcie-root-port"/>
Oct  8 06:20:54 np0005475493 nova_compute[262220]:    <controller type="pci" model="pcie-root-port"/>
Oct  8 06:20:54 np0005475493 nova_compute[262220]:    <controller type="pci" model="pcie-root-port"/>
Oct  8 06:20:54 np0005475493 nova_compute[262220]:    <controller type="pci" model="pcie-root-port"/>
Oct  8 06:20:54 np0005475493 nova_compute[262220]:    <controller type="pci" model="pcie-root-port"/>
Oct  8 06:20:54 np0005475493 nova_compute[262220]:    <controller type="pci" model="pcie-root-port"/>
Oct  8 06:20:54 np0005475493 nova_compute[262220]:    <controller type="pci" model="pcie-root-port"/>
Oct  8 06:20:54 np0005475493 nova_compute[262220]:    <controller type="pci" model="pcie-root-port"/>
Oct  8 06:20:54 np0005475493 nova_compute[262220]:    <controller type="pci" model="pcie-root-port"/>
Oct  8 06:20:54 np0005475493 nova_compute[262220]:    <controller type="pci" model="pcie-root-port"/>
Oct  8 06:20:54 np0005475493 nova_compute[262220]:    <controller type="pci" model="pcie-root-port"/>
Oct  8 06:20:54 np0005475493 nova_compute[262220]:    <controller type="pci" model="pcie-root-port"/>
Oct  8 06:20:54 np0005475493 nova_compute[262220]:    <controller type="pci" model="pcie-root-port"/>
Oct  8 06:20:54 np0005475493 nova_compute[262220]:    <controller type="pci" model="pcie-root-port"/>
Oct  8 06:20:54 np0005475493 nova_compute[262220]:    <controller type="pci" model="pcie-root-port"/>
Oct  8 06:20:54 np0005475493 nova_compute[262220]:    <controller type="pci" model="pcie-root-port"/>
Oct  8 06:20:54 np0005475493 nova_compute[262220]:    <controller type="usb" index="0"/>
Oct  8 06:20:54 np0005475493 nova_compute[262220]:    <memballoon model="virtio">
Oct  8 06:20:54 np0005475493 nova_compute[262220]:      <stats period="10"/>
Oct  8 06:20:54 np0005475493 nova_compute[262220]:    </memballoon>
Oct  8 06:20:54 np0005475493 nova_compute[262220]:  </devices>
Oct  8 06:20:54 np0005475493 nova_compute[262220]: </domain>
Oct  8 06:20:54 np0005475493 nova_compute[262220]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  8 06:20:54 np0005475493 nova_compute[262220]: 2025-10-08 10:20:54.748 2 DEBUG nova.compute.manager [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: 20ffb86b-b5ba-4818-82e4-14a755c48807] Preparing to wait for external event network-vif-plugged-754d5578-d995-4502-af66-b164dfdf1189 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  8 06:20:54 np0005475493 nova_compute[262220]: 2025-10-08 10:20:54.748 2 DEBUG oslo_concurrency.lockutils [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Acquiring lock "20ffb86b-b5ba-4818-82e4-14a755c48807-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  8 06:20:54 np0005475493 nova_compute[262220]: 2025-10-08 10:20:54.749 2 DEBUG oslo_concurrency.lockutils [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Lock "20ffb86b-b5ba-4818-82e4-14a755c48807-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  8 06:20:54 np0005475493 nova_compute[262220]: 2025-10-08 10:20:54.749 2 DEBUG oslo_concurrency.lockutils [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Lock "20ffb86b-b5ba-4818-82e4-14a755c48807-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  8 06:20:54 np0005475493 nova_compute[262220]: 2025-10-08 10:20:54.750 2 DEBUG nova.virt.libvirt.vif [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-08T10:20:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-451384508',display_name='tempest-TestNetworkBasicOps-server-451384508',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-451384508',id=13,image_ref='e5994bac-385d-4cfe-962e-386aa0559983',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKzAAbR+LebFHZ4MQpbXVINvQrQE4iZi3jhjlRa4bUuBuh7BAgqwE3gXNZho6NGF97w7AAO52PK7tmiXY23liBZwBI0PDfy6ztl7vXddFfJ7MBnkOiMny5dlb5dxWiMeog==',key_name='tempest-TestNetworkBasicOps-1706390229',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0edb2c85b88b4b168a4b2e8c5ed4a05c',ramdisk_id='',reservation_id='r-mat2tuft',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e5994bac-385d-4cfe-962e-386aa0559983',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-139500885',owner_user_name='tempest-TestNetworkBasicOps-139500885-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-08T10:20:49Z,user_data=None,user_id='d50b19166a7245e390a6e29682191263',uuid=20ffb86b-b5ba-4818-82e4-14a755c48807,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "754d5578-d995-4502-af66-b164dfdf1189", "address": "fa:16:3e:5d:e9:f4", "network": {"id": "84428682-9eff-4658-a105-8c0d1de9c87f", "bridge": "br-int", "label": "tempest-network-smoke--1945963860", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "0edb2c85b88b4b168a4b2e8c5ed4a05c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap754d5578-d9", "ovs_interfaceid": "754d5578-d995-4502-af66-b164dfdf1189", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  8 06:20:54 np0005475493 nova_compute[262220]: 2025-10-08 10:20:54.750 2 DEBUG nova.network.os_vif_util [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Converting VIF {"id": "754d5578-d995-4502-af66-b164dfdf1189", "address": "fa:16:3e:5d:e9:f4", "network": {"id": "84428682-9eff-4658-a105-8c0d1de9c87f", "bridge": "br-int", "label": "tempest-network-smoke--1945963860", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0edb2c85b88b4b168a4b2e8c5ed4a05c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap754d5578-d9", "ovs_interfaceid": "754d5578-d995-4502-af66-b164dfdf1189", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  8 06:20:54 np0005475493 nova_compute[262220]: 2025-10-08 10:20:54.750 2 DEBUG nova.network.os_vif_util [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5d:e9:f4,bridge_name='br-int',has_traffic_filtering=True,id=754d5578-d995-4502-af66-b164dfdf1189,network=Network(84428682-9eff-4658-a105-8c0d1de9c87f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap754d5578-d9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  8 06:20:54 np0005475493 nova_compute[262220]: 2025-10-08 10:20:54.751 2 DEBUG os_vif [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5d:e9:f4,bridge_name='br-int',has_traffic_filtering=True,id=754d5578-d995-4502-af66-b164dfdf1189,network=Network(84428682-9eff-4658-a105-8c0d1de9c87f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap754d5578-d9') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  8 06:20:54 np0005475493 nova_compute[262220]: 2025-10-08 10:20:54.751 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:20:54 np0005475493 nova_compute[262220]: 2025-10-08 10:20:54.752 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  8 06:20:54 np0005475493 nova_compute[262220]: 2025-10-08 10:20:54.752 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  8 06:20:54 np0005475493 nova_compute[262220]: 2025-10-08 10:20:54.755 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:20:54 np0005475493 nova_compute[262220]: 2025-10-08 10:20:54.755 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap754d5578-d9, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  8 06:20:54 np0005475493 nova_compute[262220]: 2025-10-08 10:20:54.756 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap754d5578-d9, col_values=(('external_ids', {'iface-id': '754d5578-d995-4502-af66-b164dfdf1189', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:5d:e9:f4', 'vm-uuid': '20ffb86b-b5ba-4818-82e4-14a755c48807'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  8 06:20:54 np0005475493 nova_compute[262220]: 2025-10-08 10:20:54.757 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:20:54 np0005475493 NetworkManager[44872]: <info>  [1759918854.7584] manager: (tap754d5578-d9): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/49)
Oct  8 06:20:54 np0005475493 nova_compute[262220]: 2025-10-08 10:20:54.760 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  8 06:20:54 np0005475493 nova_compute[262220]: 2025-10-08 10:20:54.764 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:20:54 np0005475493 nova_compute[262220]: 2025-10-08 10:20:54.765 2 INFO os_vif [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5d:e9:f4,bridge_name='br-int',has_traffic_filtering=True,id=754d5578-d995-4502-af66-b164dfdf1189,network=Network(84428682-9eff-4658-a105-8c0d1de9c87f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap754d5578-d9')#033[00m
Oct  8 06:20:54 np0005475493 nova_compute[262220]: 2025-10-08 10:20:54.919 2 DEBUG nova.virt.libvirt.driver [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  8 06:20:54 np0005475493 nova_compute[262220]: 2025-10-08 10:20:54.919 2 DEBUG nova.virt.libvirt.driver [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  8 06:20:54 np0005475493 nova_compute[262220]: 2025-10-08 10:20:54.920 2 DEBUG nova.virt.libvirt.driver [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] No VIF found with MAC fa:16:3e:5d:e9:f4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  8 06:20:54 np0005475493 nova_compute[262220]: 2025-10-08 10:20:54.920 2 INFO nova.virt.libvirt.driver [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: 20ffb86b-b5ba-4818-82e4-14a755c48807] Using config drive#033[00m
Oct  8 06:20:54 np0005475493 nova_compute[262220]: 2025-10-08 10:20:54.952 2 DEBUG nova.storage.rbd_utils [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] rbd image 20ffb86b-b5ba-4818-82e4-14a755c48807_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  8 06:20:55 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:20:55 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:20:55 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:20:55.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:20:55 np0005475493 nova_compute[262220]: 2025-10-08 10:20:55.321 2 DEBUG nova.network.neutron [req-9e28075a-3aa8-48c9-9d35-5095a1b2805b req-d89bb926-5361-48c4-b3f2-e665275fa2b6 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: 20ffb86b-b5ba-4818-82e4-14a755c48807] Updated VIF entry in instance network info cache for port 754d5578-d995-4502-af66-b164dfdf1189. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  8 06:20:55 np0005475493 nova_compute[262220]: 2025-10-08 10:20:55.322 2 DEBUG nova.network.neutron [req-9e28075a-3aa8-48c9-9d35-5095a1b2805b req-d89bb926-5361-48c4-b3f2-e665275fa2b6 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: 20ffb86b-b5ba-4818-82e4-14a755c48807] Updating instance_info_cache with network_info: [{"id": "754d5578-d995-4502-af66-b164dfdf1189", "address": "fa:16:3e:5d:e9:f4", "network": {"id": "84428682-9eff-4658-a105-8c0d1de9c87f", "bridge": "br-int", "label": "tempest-network-smoke--1945963860", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0edb2c85b88b4b168a4b2e8c5ed4a05c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap754d5578-d9", "ovs_interfaceid": "754d5578-d995-4502-af66-b164dfdf1189", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  8 06:20:55 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1090: 353 pgs: 353 active+clean; 88 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Oct  8 06:20:55 np0005475493 nova_compute[262220]: 2025-10-08 10:20:55.445 2 DEBUG oslo_concurrency.lockutils [req-9e28075a-3aa8-48c9-9d35-5095a1b2805b req-d89bb926-5361-48c4-b3f2-e665275fa2b6 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Releasing lock "refresh_cache-20ffb86b-b5ba-4818-82e4-14a755c48807" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  8 06:20:55 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:20:55] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
Oct  8 06:20:55 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:20:55] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
Oct  8 06:20:55 np0005475493 nova_compute[262220]: 2025-10-08 10:20:55.872 2 INFO nova.virt.libvirt.driver [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: 20ffb86b-b5ba-4818-82e4-14a755c48807] Creating config drive at /var/lib/nova/instances/20ffb86b-b5ba-4818-82e4-14a755c48807/disk.config#033[00m
Oct  8 06:20:55 np0005475493 nova_compute[262220]: 2025-10-08 10:20:55.877 2 DEBUG oslo_concurrency.processutils [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/20ffb86b-b5ba-4818-82e4-14a755c48807/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp5wh3428j execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  8 06:20:55 np0005475493 nova_compute[262220]: 2025-10-08 10:20:55.907 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:20:56 np0005475493 nova_compute[262220]: 2025-10-08 10:20:56.023 2 DEBUG oslo_concurrency.processutils [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/20ffb86b-b5ba-4818-82e4-14a755c48807/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp5wh3428j" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  8 06:20:56 np0005475493 nova_compute[262220]: 2025-10-08 10:20:56.054 2 DEBUG nova.storage.rbd_utils [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] rbd image 20ffb86b-b5ba-4818-82e4-14a755c48807_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  8 06:20:56 np0005475493 nova_compute[262220]: 2025-10-08 10:20:56.058 2 DEBUG oslo_concurrency.processutils [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/20ffb86b-b5ba-4818-82e4-14a755c48807/disk.config 20ffb86b-b5ba-4818-82e4-14a755c48807_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  8 06:20:56 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:20:56 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:20:56 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:20:56.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:20:56 np0005475493 nova_compute[262220]: 2025-10-08 10:20:56.222 2 DEBUG oslo_concurrency.processutils [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/20ffb86b-b5ba-4818-82e4-14a755c48807/disk.config 20ffb86b-b5ba-4818-82e4-14a755c48807_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.164s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  8 06:20:56 np0005475493 nova_compute[262220]: 2025-10-08 10:20:56.223 2 INFO nova.virt.libvirt.driver [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: 20ffb86b-b5ba-4818-82e4-14a755c48807] Deleting local config drive /var/lib/nova/instances/20ffb86b-b5ba-4818-82e4-14a755c48807/disk.config because it was imported into RBD.#033[00m
Oct  8 06:20:56 np0005475493 NetworkManager[44872]: <info>  [1759918856.2768] manager: (tap754d5578-d9): new Tun device (/org/freedesktop/NetworkManager/Devices/50)
Oct  8 06:20:56 np0005475493 kernel: tap754d5578-d9: entered promiscuous mode
Oct  8 06:20:56 np0005475493 nova_compute[262220]: 2025-10-08 10:20:56.279 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:20:56 np0005475493 ovn_controller[153187]: 2025-10-08T10:20:56Z|00068|binding|INFO|Claiming lport 754d5578-d995-4502-af66-b164dfdf1189 for this chassis.
Oct  8 06:20:56 np0005475493 ovn_controller[153187]: 2025-10-08T10:20:56Z|00069|binding|INFO|754d5578-d995-4502-af66-b164dfdf1189: Claiming fa:16:3e:5d:e9:f4 10.100.0.6
Oct  8 06:20:56 np0005475493 nova_compute[262220]: 2025-10-08 10:20:56.287 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:20:56 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:20:56.293 163175 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5d:e9:f4 10.100.0.6'], port_security=['fa:16:3e:5d:e9:f4 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '20ffb86b-b5ba-4818-82e4-14a755c48807', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-84428682-9eff-4658-a105-8c0d1de9c87f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0edb2c85b88b4b168a4b2e8c5ed4a05c', 'neutron:revision_number': '2', 'neutron:security_group_ids': '16d57876-2c07-4569-9200-1b8e93dece9c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f9ad9bb7-7c7b-464c-bbd0-86ab756be37d, chassis=[<ovs.db.idl.Row object at 0x7f191f105a60>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f191f105a60>], logical_port=754d5578-d995-4502-af66-b164dfdf1189) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  8 06:20:56 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:20:56.295 163175 INFO neutron.agent.ovn.metadata.agent [-] Port 754d5578-d995-4502-af66-b164dfdf1189 in datapath 84428682-9eff-4658-a105-8c0d1de9c87f bound to our chassis#033[00m
Oct  8 06:20:56 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:20:56.296 163175 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 84428682-9eff-4658-a105-8c0d1de9c87f#033[00m
Oct  8 06:20:56 np0005475493 systemd-udevd[283072]: Network interface NamePolicy= disabled on kernel command line.
Oct  8 06:20:56 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:20:56.307 267781 DEBUG oslo.privsep.daemon [-] privsep: reply[63a60383-39fe-4bb7-b6fb-a742f309ed0a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  8 06:20:56 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:20:56.308 163175 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap84428682-91 in ovnmeta-84428682-9eff-4658-a105-8c0d1de9c87f namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  8 06:20:56 np0005475493 systemd-machined[216030]: New machine qemu-4-instance-0000000d.
Oct  8 06:20:56 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:20:56.314 267781 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap84428682-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  8 06:20:56 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:20:56.314 267781 DEBUG oslo.privsep.daemon [-] privsep: reply[50931ffe-53d7-4233-b8d0-6b2274a493d8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  8 06:20:56 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:20:56.315 267781 DEBUG oslo.privsep.daemon [-] privsep: reply[aa252b14-8018-4e61-8977-1cf67ad18958]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  8 06:20:56 np0005475493 NetworkManager[44872]: <info>  [1759918856.3208] device (tap754d5578-d9): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  8 06:20:56 np0005475493 NetworkManager[44872]: <info>  [1759918856.3227] device (tap754d5578-d9): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  8 06:20:56 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:20:56.326 163290 DEBUG oslo.privsep.daemon [-] privsep: reply[3614025f-17f7-4a93-97e8-0455be8fffb6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  8 06:20:56 np0005475493 systemd[1]: Started Virtual Machine qemu-4-instance-0000000d.
Oct  8 06:20:56 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:20:56.354 267781 DEBUG oslo.privsep.daemon [-] privsep: reply[529e2ef0-00fb-4262-b780-c9ae777ba119]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  8 06:20:56 np0005475493 nova_compute[262220]: 2025-10-08 10:20:56.367 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:20:56 np0005475493 ovn_controller[153187]: 2025-10-08T10:20:56Z|00070|binding|INFO|Setting lport 754d5578-d995-4502-af66-b164dfdf1189 ovn-installed in OVS
Oct  8 06:20:56 np0005475493 ovn_controller[153187]: 2025-10-08T10:20:56Z|00071|binding|INFO|Setting lport 754d5578-d995-4502-af66-b164dfdf1189 up in Southbound
Oct  8 06:20:56 np0005475493 nova_compute[262220]: 2025-10-08 10:20:56.372 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:20:56 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:20:56.385 267799 DEBUG oslo.privsep.daemon [-] privsep: reply[48cd43e4-8c3c-42f5-a9fb-b9636cab651b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  8 06:20:56 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:20:56.390 267781 DEBUG oslo.privsep.daemon [-] privsep: reply[20b425f2-5b12-4e3b-b05f-af7de13d5e90]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  8 06:20:56 np0005475493 NetworkManager[44872]: <info>  [1759918856.3924] manager: (tap84428682-90): new Veth device (/org/freedesktop/NetworkManager/Devices/51)
Oct  8 06:20:56 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:20:56.425 267799 DEBUG oslo.privsep.daemon [-] privsep: reply[5688bf14-4dd3-4879-bd2b-ae442ff48cb1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  8 06:20:56 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:20:56.428 267799 DEBUG oslo.privsep.daemon [-] privsep: reply[d0c4eadd-263c-4afb-930c-f19827d0466b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  8 06:20:56 np0005475493 NetworkManager[44872]: <info>  [1759918856.4501] device (tap84428682-90): carrier: link connected
Oct  8 06:20:56 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:20:56.457 267799 DEBUG oslo.privsep.daemon [-] privsep: reply[cd52d35f-ffc2-4b5a-8e15-079da6d9db27]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  8 06:20:56 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:20:56.474 267781 DEBUG oslo.privsep.daemon [-] privsep: reply[e0bde2e7-e843-41e4-8968-b48fe339e51b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap84428682-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e8:de:8d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 26], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 481508, 'reachable_time': 16262, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 283104, 'error': None, 'target': 'ovnmeta-84428682-9eff-4658-a105-8c0d1de9c87f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  8 06:20:56 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:20:56.497 267781 DEBUG oslo.privsep.daemon [-] privsep: reply[955430c4-c030-4a5d-a527-0ca875c6cc3d]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fee8:de8d'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 481508, 'tstamp': 481508}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 283105, 'error': None, 'target': 'ovnmeta-84428682-9eff-4658-a105-8c0d1de9c87f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  8 06:20:56 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:20:56.514 267781 DEBUG oslo.privsep.daemon [-] privsep: reply[7b37b536-1c18-49d0-9139-9d76a6e0215d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap84428682-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e8:de:8d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 196, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 196, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 26], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 481508, 'reachable_time': 16262, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 168, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 168, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 283106, 'error': None, 'target': 'ovnmeta-84428682-9eff-4658-a105-8c0d1de9c87f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  8 06:20:56 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:20:56.552 267781 DEBUG oslo.privsep.daemon [-] privsep: reply[f1944d86-3a22-48c2-8003-8a3df97e2e09]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  8 06:20:56 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:20:56.624 267781 DEBUG oslo.privsep.daemon [-] privsep: reply[a7fb33f3-0d16-4475-b6da-03fd037341db]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  8 06:20:56 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:20:56.626 163175 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap84428682-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  8 06:20:56 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:20:56.626 163175 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  8 06:20:56 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:20:56.627 163175 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap84428682-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  8 06:20:56 np0005475493 nova_compute[262220]: 2025-10-08 10:20:56.629 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:20:56 np0005475493 kernel: tap84428682-90: entered promiscuous mode
Oct  8 06:20:56 np0005475493 NetworkManager[44872]: <info>  [1759918856.6302] manager: (tap84428682-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/52)
Oct  8 06:20:56 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:20:56.636 163175 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap84428682-90, col_values=(('external_ids', {'iface-id': 'aead10e1-bf7c-4d43-bf9e-517a64e3ea62'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  8 06:20:56 np0005475493 nova_compute[262220]: 2025-10-08 10:20:56.637 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:20:56 np0005475493 ovn_controller[153187]: 2025-10-08T10:20:56Z|00072|binding|INFO|Releasing lport aead10e1-bf7c-4d43-bf9e-517a64e3ea62 from this chassis (sb_readonly=0)
Oct  8 06:20:56 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:20:56.640 163175 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/84428682-9eff-4658-a105-8c0d1de9c87f.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/84428682-9eff-4658-a105-8c0d1de9c87f.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  8 06:20:56 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:20:56.642 267781 DEBUG oslo.privsep.daemon [-] privsep: reply[6f97d722-3589-4825-9b9f-22b9fa6e67ec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  8 06:20:56 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:20:56.642 163175 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  8 06:20:56 np0005475493 ovn_metadata_agent[163169]: global
Oct  8 06:20:56 np0005475493 ovn_metadata_agent[163169]:    log         /dev/log local0 debug
Oct  8 06:20:56 np0005475493 ovn_metadata_agent[163169]:    log-tag     haproxy-metadata-proxy-84428682-9eff-4658-a105-8c0d1de9c87f
Oct  8 06:20:56 np0005475493 ovn_metadata_agent[163169]:    user        root
Oct  8 06:20:56 np0005475493 ovn_metadata_agent[163169]:    group       root
Oct  8 06:20:56 np0005475493 ovn_metadata_agent[163169]:    maxconn     1024
Oct  8 06:20:56 np0005475493 ovn_metadata_agent[163169]:    pidfile     /var/lib/neutron/external/pids/84428682-9eff-4658-a105-8c0d1de9c87f.pid.haproxy
Oct  8 06:20:56 np0005475493 ovn_metadata_agent[163169]:    daemon
Oct  8 06:20:56 np0005475493 ovn_metadata_agent[163169]: 
Oct  8 06:20:56 np0005475493 ovn_metadata_agent[163169]: defaults
Oct  8 06:20:56 np0005475493 ovn_metadata_agent[163169]:    log global
Oct  8 06:20:56 np0005475493 ovn_metadata_agent[163169]:    mode http
Oct  8 06:20:56 np0005475493 ovn_metadata_agent[163169]:    option httplog
Oct  8 06:20:56 np0005475493 ovn_metadata_agent[163169]:    option dontlognull
Oct  8 06:20:56 np0005475493 ovn_metadata_agent[163169]:    option http-server-close
Oct  8 06:20:56 np0005475493 ovn_metadata_agent[163169]:    option forwardfor
Oct  8 06:20:56 np0005475493 ovn_metadata_agent[163169]:    retries                 3
Oct  8 06:20:56 np0005475493 ovn_metadata_agent[163169]:    timeout http-request    30s
Oct  8 06:20:56 np0005475493 ovn_metadata_agent[163169]:    timeout connect         30s
Oct  8 06:20:56 np0005475493 ovn_metadata_agent[163169]:    timeout client          32s
Oct  8 06:20:56 np0005475493 ovn_metadata_agent[163169]:    timeout server          32s
Oct  8 06:20:56 np0005475493 ovn_metadata_agent[163169]:    timeout http-keep-alive 30s
Oct  8 06:20:56 np0005475493 ovn_metadata_agent[163169]: 
Oct  8 06:20:56 np0005475493 ovn_metadata_agent[163169]: 
Oct  8 06:20:56 np0005475493 ovn_metadata_agent[163169]: listen listener
Oct  8 06:20:56 np0005475493 ovn_metadata_agent[163169]:    bind 169.254.169.254:80
Oct  8 06:20:56 np0005475493 ovn_metadata_agent[163169]:    server metadata /var/lib/neutron/metadata_proxy
Oct  8 06:20:56 np0005475493 ovn_metadata_agent[163169]:    http-request add-header X-OVN-Network-ID 84428682-9eff-4658-a105-8c0d1de9c87f
Oct  8 06:20:56 np0005475493 ovn_metadata_agent[163169]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  8 06:20:56 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:20:56.643 163175 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-84428682-9eff-4658-a105-8c0d1de9c87f', 'env', 'PROCESS_TAG=haproxy-84428682-9eff-4658-a105-8c0d1de9c87f', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/84428682-9eff-4658-a105-8c0d1de9c87f.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  8 06:20:56 np0005475493 nova_compute[262220]: 2025-10-08 10:20:56.653 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:20:56 np0005475493 nova_compute[262220]: 2025-10-08 10:20:56.660 2 DEBUG nova.compute.manager [req-53ae439b-572f-427f-94c5-1ea4a98196f1 req-f8b07e25-5651-4abc-8e18-5cde1b710aa3 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: 20ffb86b-b5ba-4818-82e4-14a755c48807] Received event network-vif-plugged-754d5578-d995-4502-af66-b164dfdf1189 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  8 06:20:56 np0005475493 nova_compute[262220]: 2025-10-08 10:20:56.660 2 DEBUG oslo_concurrency.lockutils [req-53ae439b-572f-427f-94c5-1ea4a98196f1 req-f8b07e25-5651-4abc-8e18-5cde1b710aa3 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Acquiring lock "20ffb86b-b5ba-4818-82e4-14a755c48807-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  8 06:20:56 np0005475493 nova_compute[262220]: 2025-10-08 10:20:56.661 2 DEBUG oslo_concurrency.lockutils [req-53ae439b-572f-427f-94c5-1ea4a98196f1 req-f8b07e25-5651-4abc-8e18-5cde1b710aa3 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Lock "20ffb86b-b5ba-4818-82e4-14a755c48807-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  8 06:20:56 np0005475493 nova_compute[262220]: 2025-10-08 10:20:56.661 2 DEBUG oslo_concurrency.lockutils [req-53ae439b-572f-427f-94c5-1ea4a98196f1 req-f8b07e25-5651-4abc-8e18-5cde1b710aa3 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Lock "20ffb86b-b5ba-4818-82e4-14a755c48807-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  8 06:20:56 np0005475493 nova_compute[262220]: 2025-10-08 10:20:56.661 2 DEBUG nova.compute.manager [req-53ae439b-572f-427f-94c5-1ea4a98196f1 req-f8b07e25-5651-4abc-8e18-5cde1b710aa3 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: 20ffb86b-b5ba-4818-82e4-14a755c48807] Processing event network-vif-plugged-754d5578-d995-4502-af66-b164dfdf1189 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  8 06:20:56 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:20:56.956 163175 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=14, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '2a:d6:ca', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'b2:8b:1e:40:84:a3'}, ipsec=False) old=SB_Global(nb_cfg=13) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  8 06:20:56 np0005475493 nova_compute[262220]: 2025-10-08 10:20:56.958 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:20:57 np0005475493 podman[283179]: 2025-10-08 10:20:57.03779082 +0000 UTC m=+0.050971948 container create a50a99a7b908636d1a3476dbe2c028af766a990ddb805618b5dbf08be82bf4f8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-84428682-9eff-4658-a105-8c0d1de9c87f, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_managed=true)
Oct  8 06:20:57 np0005475493 systemd[1]: Started libpod-conmon-a50a99a7b908636d1a3476dbe2c028af766a990ddb805618b5dbf08be82bf4f8.scope.
Oct  8 06:20:57 np0005475493 systemd[1]: Started libcrun container.
Oct  8 06:20:57 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9305ceef478b92232a6159096bf3391d562b676524b5a981565d66364f354e43/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  8 06:20:57 np0005475493 podman[283179]: 2025-10-08 10:20:57.012973218 +0000 UTC m=+0.026154376 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct  8 06:20:57 np0005475493 podman[283179]: 2025-10-08 10:20:57.116682607 +0000 UTC m=+0.129863745 container init a50a99a7b908636d1a3476dbe2c028af766a990ddb805618b5dbf08be82bf4f8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-84428682-9eff-4658-a105-8c0d1de9c87f, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS)
Oct  8 06:20:57 np0005475493 podman[283179]: 2025-10-08 10:20:57.126530076 +0000 UTC m=+0.139711214 container start a50a99a7b908636d1a3476dbe2c028af766a990ddb805618b5dbf08be82bf4f8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-84428682-9eff-4658-a105-8c0d1de9c87f, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Oct  8 06:20:57 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:20:57 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:20:57 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:20:57.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:20:57 np0005475493 neutron-haproxy-ovnmeta-84428682-9eff-4658-a105-8c0d1de9c87f[283195]: [NOTICE]   (283199) : New worker (283201) forked
Oct  8 06:20:57 np0005475493 neutron-haproxy-ovnmeta-84428682-9eff-4658-a105-8c0d1de9c87f[283195]: [NOTICE]   (283199) : Loading success.
Oct  8 06:20:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:20:57.193 163175 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  8 06:20:57 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:20:57.195Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:20:57 np0005475493 nova_compute[262220]: 2025-10-08 10:20:57.213 2 DEBUG nova.compute.manager [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: 20ffb86b-b5ba-4818-82e4-14a755c48807] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  8 06:20:57 np0005475493 nova_compute[262220]: 2025-10-08 10:20:57.215 2 DEBUG nova.virt.driver [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] Emitting event <LifecycleEvent: 1759918857.212883, 20ffb86b-b5ba-4818-82e4-14a755c48807 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  8 06:20:57 np0005475493 nova_compute[262220]: 2025-10-08 10:20:57.215 2 INFO nova.compute.manager [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] [instance: 20ffb86b-b5ba-4818-82e4-14a755c48807] VM Started (Lifecycle Event)#033[00m
Oct  8 06:20:57 np0005475493 nova_compute[262220]: 2025-10-08 10:20:57.231 2 DEBUG nova.virt.libvirt.driver [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: 20ffb86b-b5ba-4818-82e4-14a755c48807] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  8 06:20:57 np0005475493 nova_compute[262220]: 2025-10-08 10:20:57.235 2 INFO nova.virt.libvirt.driver [-] [instance: 20ffb86b-b5ba-4818-82e4-14a755c48807] Instance spawned successfully.#033[00m
Oct  8 06:20:57 np0005475493 nova_compute[262220]: 2025-10-08 10:20:57.235 2 DEBUG nova.virt.libvirt.driver [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: 20ffb86b-b5ba-4818-82e4-14a755c48807] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  8 06:20:57 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1091: 353 pgs: 353 active+clean; 88 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Oct  8 06:20:57 np0005475493 nova_compute[262220]: 2025-10-08 10:20:57.360 2 DEBUG nova.compute.manager [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] [instance: 20ffb86b-b5ba-4818-82e4-14a755c48807] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  8 06:20:57 np0005475493 nova_compute[262220]: 2025-10-08 10:20:57.363 2 DEBUG nova.compute.manager [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] [instance: 20ffb86b-b5ba-4818-82e4-14a755c48807] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  8 06:20:57 np0005475493 nova_compute[262220]: 2025-10-08 10:20:57.381 2 DEBUG nova.virt.libvirt.driver [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: 20ffb86b-b5ba-4818-82e4-14a755c48807] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  8 06:20:57 np0005475493 nova_compute[262220]: 2025-10-08 10:20:57.381 2 DEBUG nova.virt.libvirt.driver [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: 20ffb86b-b5ba-4818-82e4-14a755c48807] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  8 06:20:57 np0005475493 nova_compute[262220]: 2025-10-08 10:20:57.382 2 DEBUG nova.virt.libvirt.driver [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: 20ffb86b-b5ba-4818-82e4-14a755c48807] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  8 06:20:57 np0005475493 nova_compute[262220]: 2025-10-08 10:20:57.383 2 DEBUG nova.virt.libvirt.driver [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: 20ffb86b-b5ba-4818-82e4-14a755c48807] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  8 06:20:57 np0005475493 nova_compute[262220]: 2025-10-08 10:20:57.383 2 DEBUG nova.virt.libvirt.driver [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: 20ffb86b-b5ba-4818-82e4-14a755c48807] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  8 06:20:57 np0005475493 nova_compute[262220]: 2025-10-08 10:20:57.384 2 DEBUG nova.virt.libvirt.driver [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: 20ffb86b-b5ba-4818-82e4-14a755c48807] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  8 06:20:57 np0005475493 nova_compute[262220]: 2025-10-08 10:20:57.405 2 INFO nova.compute.manager [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] [instance: 20ffb86b-b5ba-4818-82e4-14a755c48807] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  8 06:20:57 np0005475493 nova_compute[262220]: 2025-10-08 10:20:57.405 2 DEBUG nova.virt.driver [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] Emitting event <LifecycleEvent: 1759918857.2152803, 20ffb86b-b5ba-4818-82e4-14a755c48807 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  8 06:20:57 np0005475493 nova_compute[262220]: 2025-10-08 10:20:57.405 2 INFO nova.compute.manager [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] [instance: 20ffb86b-b5ba-4818-82e4-14a755c48807] VM Paused (Lifecycle Event)#033[00m
Oct  8 06:20:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:20:57.418 163175 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  8 06:20:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:20:57.419 163175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  8 06:20:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:20:57.420 163175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  8 06:20:57 np0005475493 nova_compute[262220]: 2025-10-08 10:20:57.641 2 DEBUG nova.compute.manager [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] [instance: 20ffb86b-b5ba-4818-82e4-14a755c48807] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  8 06:20:57 np0005475493 nova_compute[262220]: 2025-10-08 10:20:57.644 2 DEBUG nova.virt.driver [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] Emitting event <LifecycleEvent: 1759918857.218107, 20ffb86b-b5ba-4818-82e4-14a755c48807 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  8 06:20:57 np0005475493 nova_compute[262220]: 2025-10-08 10:20:57.644 2 INFO nova.compute.manager [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] [instance: 20ffb86b-b5ba-4818-82e4-14a755c48807] VM Resumed (Lifecycle Event)#033[00m
Oct  8 06:20:57 np0005475493 nova_compute[262220]: 2025-10-08 10:20:57.763 2 INFO nova.compute.manager [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: 20ffb86b-b5ba-4818-82e4-14a755c48807] Took 8.47 seconds to spawn the instance on the hypervisor.#033[00m
Oct  8 06:20:57 np0005475493 nova_compute[262220]: 2025-10-08 10:20:57.764 2 DEBUG nova.compute.manager [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: 20ffb86b-b5ba-4818-82e4-14a755c48807] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  8 06:20:57 np0005475493 nova_compute[262220]: 2025-10-08 10:20:57.770 2 DEBUG nova.compute.manager [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] [instance: 20ffb86b-b5ba-4818-82e4-14a755c48807] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  8 06:20:57 np0005475493 nova_compute[262220]: 2025-10-08 10:20:57.773 2 DEBUG nova.compute.manager [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] [instance: 20ffb86b-b5ba-4818-82e4-14a755c48807] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  8 06:20:57 np0005475493 nova_compute[262220]: 2025-10-08 10:20:57.834 2 INFO nova.compute.manager [None req-b93b2443-db0e-410e-bfa6-e75f4c06d8c3 - - - - - -] [instance: 20ffb86b-b5ba-4818-82e4-14a755c48807] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  8 06:20:57 np0005475493 nova_compute[262220]: 2025-10-08 10:20:57.862 2 INFO nova.compute.manager [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: 20ffb86b-b5ba-4818-82e4-14a755c48807] Took 9.52 seconds to build instance.#033[00m
Oct  8 06:20:57 np0005475493 nova_compute[262220]: 2025-10-08 10:20:57.897 2 DEBUG oslo_concurrency.lockutils [None req-977647e5-4eb4-4bf3-840c-4572cd88b674 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Lock "20ffb86b-b5ba-4818-82e4-14a755c48807" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.607s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  8 06:20:57 np0005475493 podman[283211]: 2025-10-08 10:20:57.901519501 +0000 UTC m=+0.054784821 container health_status 1ece418ccdf19a8019e8be8e665c938dc3c333817a211973536e2416f095c311 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct  8 06:20:57 np0005475493 podman[283212]: 2025-10-08 10:20:57.927245452 +0000 UTC m=+0.077221165 container health_status 96c17e36c3e57b043a764fc4d64e20610b8683e4881cc1857643eca7aeb37784 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct  8 06:20:58 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:20:58 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:20:58 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:20:58.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:20:58 np0005475493 nova_compute[262220]: 2025-10-08 10:20:58.481 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:20:58 np0005475493 nova_compute[262220]: 2025-10-08 10:20:58.769 2 DEBUG nova.compute.manager [req-4cfb66e6-a728-4c9c-bd5c-3ebc72e2f00e req-009ecf50-ebc7-4bbd-beb0-0aa66a0f0acf 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: 20ffb86b-b5ba-4818-82e4-14a755c48807] Received event network-vif-plugged-754d5578-d995-4502-af66-b164dfdf1189 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  8 06:20:58 np0005475493 nova_compute[262220]: 2025-10-08 10:20:58.770 2 DEBUG oslo_concurrency.lockutils [req-4cfb66e6-a728-4c9c-bd5c-3ebc72e2f00e req-009ecf50-ebc7-4bbd-beb0-0aa66a0f0acf 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Acquiring lock "20ffb86b-b5ba-4818-82e4-14a755c48807-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  8 06:20:58 np0005475493 nova_compute[262220]: 2025-10-08 10:20:58.770 2 DEBUG oslo_concurrency.lockutils [req-4cfb66e6-a728-4c9c-bd5c-3ebc72e2f00e req-009ecf50-ebc7-4bbd-beb0-0aa66a0f0acf 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Lock "20ffb86b-b5ba-4818-82e4-14a755c48807-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  8 06:20:58 np0005475493 nova_compute[262220]: 2025-10-08 10:20:58.770 2 DEBUG oslo_concurrency.lockutils [req-4cfb66e6-a728-4c9c-bd5c-3ebc72e2f00e req-009ecf50-ebc7-4bbd-beb0-0aa66a0f0acf 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Lock "20ffb86b-b5ba-4818-82e4-14a755c48807-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  8 06:20:58 np0005475493 nova_compute[262220]: 2025-10-08 10:20:58.771 2 DEBUG nova.compute.manager [req-4cfb66e6-a728-4c9c-bd5c-3ebc72e2f00e req-009ecf50-ebc7-4bbd-beb0-0aa66a0f0acf 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: 20ffb86b-b5ba-4818-82e4-14a755c48807] No waiting events found dispatching network-vif-plugged-754d5578-d995-4502-af66-b164dfdf1189 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  8 06:20:58 np0005475493 nova_compute[262220]: 2025-10-08 10:20:58.771 2 WARNING nova.compute.manager [req-4cfb66e6-a728-4c9c-bd5c-3ebc72e2f00e req-009ecf50-ebc7-4bbd-beb0-0aa66a0f0acf 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: 20ffb86b-b5ba-4818-82e4-14a755c48807] Received unexpected event network-vif-plugged-754d5578-d995-4502-af66-b164dfdf1189 for instance with vm_state active and task_state None.#033[00m
Oct  8 06:20:58 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:20:58.851Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:20:59 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:20:58 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  8 06:20:59 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:20:58 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  8 06:20:59 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:20:58 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 06:20:59 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:20:59 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  8 06:20:59 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:20:59 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:20:59 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:20:59.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:20:59 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1092: 353 pgs: 353 active+clean; 88 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 93 op/s
Oct  8 06:20:59 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:20:59 np0005475493 nova_compute[262220]: 2025-10-08 10:20:59.758 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:21:00 np0005475493 ovn_controller[153187]: 2025-10-08T10:21:00Z|00073|binding|INFO|Releasing lport aead10e1-bf7c-4d43-bf9e-517a64e3ea62 from this chassis (sb_readonly=0)
Oct  8 06:21:00 np0005475493 NetworkManager[44872]: <info>  [1759918860.1332] manager: (patch-provnet-5eda3e1f-dd4f-4c7d-b5fb-d8c0d9996d6d-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/53)
Oct  8 06:21:00 np0005475493 NetworkManager[44872]: <info>  [1759918860.1349] manager: (patch-br-int-to-provnet-5eda3e1f-dd4f-4c7d-b5fb-d8c0d9996d6d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/54)
Oct  8 06:21:00 np0005475493 nova_compute[262220]: 2025-10-08 10:21:00.132 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:21:00 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:21:00 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 06:21:00 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:21:00.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 06:21:00 np0005475493 ovn_controller[153187]: 2025-10-08T10:21:00Z|00074|binding|INFO|Releasing lport aead10e1-bf7c-4d43-bf9e-517a64e3ea62 from this chassis (sb_readonly=0)
Oct  8 06:21:00 np0005475493 nova_compute[262220]: 2025-10-08 10:21:00.173 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:21:00 np0005475493 nova_compute[262220]: 2025-10-08 10:21:00.179 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:21:00 np0005475493 nova_compute[262220]: 2025-10-08 10:21:00.590 2 DEBUG nova.compute.manager [req-1fe53c74-2f6c-448b-b020-5e239b5b1313 req-57aae472-04d8-4c7c-b84f-64ef066fcc8c 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: 20ffb86b-b5ba-4818-82e4-14a755c48807] Received event network-changed-754d5578-d995-4502-af66-b164dfdf1189 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  8 06:21:00 np0005475493 nova_compute[262220]: 2025-10-08 10:21:00.590 2 DEBUG nova.compute.manager [req-1fe53c74-2f6c-448b-b020-5e239b5b1313 req-57aae472-04d8-4c7c-b84f-64ef066fcc8c 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: 20ffb86b-b5ba-4818-82e4-14a755c48807] Refreshing instance network info cache due to event network-changed-754d5578-d995-4502-af66-b164dfdf1189. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  8 06:21:00 np0005475493 nova_compute[262220]: 2025-10-08 10:21:00.590 2 DEBUG oslo_concurrency.lockutils [req-1fe53c74-2f6c-448b-b020-5e239b5b1313 req-57aae472-04d8-4c7c-b84f-64ef066fcc8c 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Acquiring lock "refresh_cache-20ffb86b-b5ba-4818-82e4-14a755c48807" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  8 06:21:00 np0005475493 nova_compute[262220]: 2025-10-08 10:21:00.590 2 DEBUG oslo_concurrency.lockutils [req-1fe53c74-2f6c-448b-b020-5e239b5b1313 req-57aae472-04d8-4c7c-b84f-64ef066fcc8c 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Acquired lock "refresh_cache-20ffb86b-b5ba-4818-82e4-14a755c48807" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  8 06:21:00 np0005475493 nova_compute[262220]: 2025-10-08 10:21:00.590 2 DEBUG nova.network.neutron [req-1fe53c74-2f6c-448b-b020-5e239b5b1313 req-57aae472-04d8-4c7c-b84f-64ef066fcc8c 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: 20ffb86b-b5ba-4818-82e4-14a755c48807] Refreshing network info cache for port 754d5578-d995-4502-af66-b164dfdf1189 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  8 06:21:01 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:21:01 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:21:01 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:21:01.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:21:01 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1093: 353 pgs: 353 active+clean; 88 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 92 op/s
Oct  8 06:21:01 np0005475493 nova_compute[262220]: 2025-10-08 10:21:01.518 2 DEBUG nova.network.neutron [req-1fe53c74-2f6c-448b-b020-5e239b5b1313 req-57aae472-04d8-4c7c-b84f-64ef066fcc8c 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: 20ffb86b-b5ba-4818-82e4-14a755c48807] Updated VIF entry in instance network info cache for port 754d5578-d995-4502-af66-b164dfdf1189. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  8 06:21:01 np0005475493 nova_compute[262220]: 2025-10-08 10:21:01.519 2 DEBUG nova.network.neutron [req-1fe53c74-2f6c-448b-b020-5e239b5b1313 req-57aae472-04d8-4c7c-b84f-64ef066fcc8c 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: 20ffb86b-b5ba-4818-82e4-14a755c48807] Updating instance_info_cache with network_info: [{"id": "754d5578-d995-4502-af66-b164dfdf1189", "address": "fa:16:3e:5d:e9:f4", "network": {"id": "84428682-9eff-4658-a105-8c0d1de9c87f", "bridge": "br-int", "label": "tempest-network-smoke--1945963860", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0edb2c85b88b4b168a4b2e8c5ed4a05c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap754d5578-d9", "ovs_interfaceid": "754d5578-d995-4502-af66-b164dfdf1189", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  8 06:21:01 np0005475493 nova_compute[262220]: 2025-10-08 10:21:01.543 2 DEBUG oslo_concurrency.lockutils [req-1fe53c74-2f6c-448b-b020-5e239b5b1313 req-57aae472-04d8-4c7c-b84f-64ef066fcc8c 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Releasing lock "refresh_cache-20ffb86b-b5ba-4818-82e4-14a755c48807" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  8 06:21:02 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:21:02 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 06:21:02 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:21:02.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 06:21:02 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  8 06:21:02 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  8 06:21:02 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  8 06:21:02 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  8 06:21:02 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct  8 06:21:02 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  8 06:21:02 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1094: 353 pgs: 353 active+clean; 88 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 96 op/s
Oct  8 06:21:02 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct  8 06:21:02 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:21:02 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct  8 06:21:02 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:21:02 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct  8 06:21:02 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  8 06:21:02 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct  8 06:21:02 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  8 06:21:02 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  8 06:21:02 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  8 06:21:03 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:21:03 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:21:03 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:21:03.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:21:03 np0005475493 nova_compute[262220]: 2025-10-08 10:21:03.484 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:21:03 np0005475493 podman[283426]: 2025-10-08 10:21:03.488297452 +0000 UTC m=+0.044530509 container create 9716dd35eca77001f82761ed54bc7ccbde983384b6274fc41cd8392db1cb3ee3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_panini, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  8 06:21:03 np0005475493 systemd[1]: Started libpod-conmon-9716dd35eca77001f82761ed54bc7ccbde983384b6274fc41cd8392db1cb3ee3.scope.
Oct  8 06:21:03 np0005475493 systemd[1]: Started libcrun container.
Oct  8 06:21:03 np0005475493 podman[283426]: 2025-10-08 10:21:03.470524388 +0000 UTC m=+0.026757455 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 06:21:03 np0005475493 podman[283426]: 2025-10-08 10:21:03.581515742 +0000 UTC m=+0.137748809 container init 9716dd35eca77001f82761ed54bc7ccbde983384b6274fc41cd8392db1cb3ee3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_panini, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  8 06:21:03 np0005475493 podman[283426]: 2025-10-08 10:21:03.59011294 +0000 UTC m=+0.146345987 container start 9716dd35eca77001f82761ed54bc7ccbde983384b6274fc41cd8392db1cb3ee3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_panini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  8 06:21:03 np0005475493 podman[283426]: 2025-10-08 10:21:03.592734285 +0000 UTC m=+0.148967352 container attach 9716dd35eca77001f82761ed54bc7ccbde983384b6274fc41cd8392db1cb3ee3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_panini, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Oct  8 06:21:03 np0005475493 frosty_panini[283443]: 167 167
Oct  8 06:21:03 np0005475493 systemd[1]: libpod-9716dd35eca77001f82761ed54bc7ccbde983384b6274fc41cd8392db1cb3ee3.scope: Deactivated successfully.
Oct  8 06:21:03 np0005475493 conmon[283443]: conmon 9716dd35eca77001f827 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9716dd35eca77001f82761ed54bc7ccbde983384b6274fc41cd8392db1cb3ee3.scope/container/memory.events
Oct  8 06:21:03 np0005475493 podman[283426]: 2025-10-08 10:21:03.596496125 +0000 UTC m=+0.152729192 container died 9716dd35eca77001f82761ed54bc7ccbde983384b6274fc41cd8392db1cb3ee3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_panini, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Oct  8 06:21:03 np0005475493 systemd[1]: var-lib-containers-storage-overlay-e980b9c50f3672dda04709a039eba3c84d331e2b9531596fe1ee1313016df19a-merged.mount: Deactivated successfully.
Oct  8 06:21:03 np0005475493 podman[283426]: 2025-10-08 10:21:03.636864469 +0000 UTC m=+0.193097516 container remove 9716dd35eca77001f82761ed54bc7ccbde983384b6274fc41cd8392db1cb3ee3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_panini, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct  8 06:21:03 np0005475493 systemd[1]: libpod-conmon-9716dd35eca77001f82761ed54bc7ccbde983384b6274fc41cd8392db1cb3ee3.scope: Deactivated successfully.
Oct  8 06:21:03 np0005475493 podman[283466]: 2025-10-08 10:21:03.805144743 +0000 UTC m=+0.046332617 container create fb16413f0e95f990284ce8b427f8e9082190fe18117984bf52afd49372cf2d0a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_edison, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Oct  8 06:21:03 np0005475493 systemd[1]: Started libpod-conmon-fb16413f0e95f990284ce8b427f8e9082190fe18117984bf52afd49372cf2d0a.scope.
Oct  8 06:21:03 np0005475493 podman[283466]: 2025-10-08 10:21:03.78461843 +0000 UTC m=+0.025806324 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 06:21:03 np0005475493 systemd[1]: Started libcrun container.
Oct  8 06:21:03 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f432acd06e93ef527bc9db81a0b9f6934c3eb69ff83a81eb171c9b43a2e065fb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  8 06:21:03 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f432acd06e93ef527bc9db81a0b9f6934c3eb69ff83a81eb171c9b43a2e065fb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 06:21:03 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f432acd06e93ef527bc9db81a0b9f6934c3eb69ff83a81eb171c9b43a2e065fb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 06:21:03 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f432acd06e93ef527bc9db81a0b9f6934c3eb69ff83a81eb171c9b43a2e065fb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  8 06:21:03 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f432acd06e93ef527bc9db81a0b9f6934c3eb69ff83a81eb171c9b43a2e065fb/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  8 06:21:03 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  8 06:21:03 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:21:03 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:21:03 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  8 06:21:03 np0005475493 podman[283466]: 2025-10-08 10:21:03.937239759 +0000 UTC m=+0.178427713 container init fb16413f0e95f990284ce8b427f8e9082190fe18117984bf52afd49372cf2d0a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_edison, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Oct  8 06:21:03 np0005475493 podman[283466]: 2025-10-08 10:21:03.943759979 +0000 UTC m=+0.184947893 container start fb16413f0e95f990284ce8b427f8e9082190fe18117984bf52afd49372cf2d0a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_edison, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  8 06:21:03 np0005475493 podman[283466]: 2025-10-08 10:21:03.949288197 +0000 UTC m=+0.190476101 container attach fb16413f0e95f990284ce8b427f8e9082190fe18117984bf52afd49372cf2d0a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_edison, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Oct  8 06:21:04 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:21:03 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  8 06:21:04 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:21:03 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  8 06:21:04 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:21:03 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 06:21:04 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:21:04 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  8 06:21:04 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:21:04 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:21:04 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:21:04.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:21:04 np0005475493 charming_edison[283483]: --> passed data devices: 0 physical, 1 LVM
Oct  8 06:21:04 np0005475493 charming_edison[283483]: --> All data devices are unavailable
Oct  8 06:21:04 np0005475493 systemd[1]: libpod-fb16413f0e95f990284ce8b427f8e9082190fe18117984bf52afd49372cf2d0a.scope: Deactivated successfully.
Oct  8 06:21:04 np0005475493 podman[283466]: 2025-10-08 10:21:04.304271921 +0000 UTC m=+0.545459805 container died fb16413f0e95f990284ce8b427f8e9082190fe18117984bf52afd49372cf2d0a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_edison, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  8 06:21:04 np0005475493 systemd[1]: var-lib-containers-storage-overlay-f432acd06e93ef527bc9db81a0b9f6934c3eb69ff83a81eb171c9b43a2e065fb-merged.mount: Deactivated successfully.
Oct  8 06:21:04 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:21:04 np0005475493 podman[283466]: 2025-10-08 10:21:04.359948579 +0000 UTC m=+0.601136463 container remove fb16413f0e95f990284ce8b427f8e9082190fe18117984bf52afd49372cf2d0a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_edison, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct  8 06:21:04 np0005475493 systemd[1]: libpod-conmon-fb16413f0e95f990284ce8b427f8e9082190fe18117984bf52afd49372cf2d0a.scope: Deactivated successfully.
Oct  8 06:21:04 np0005475493 nova_compute[262220]: 2025-10-08 10:21:04.843 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:21:04 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1095: 353 pgs: 353 active+clean; 88 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 105 op/s
Oct  8 06:21:05 np0005475493 podman[283599]: 2025-10-08 10:21:05.053286587 +0000 UTC m=+0.049377985 container create eeca578d42a035890efb3f4f255ba8bd325ba05e0b254c100bb3728a3e9107c3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_wilson, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  8 06:21:05 np0005475493 systemd[1]: Started libpod-conmon-eeca578d42a035890efb3f4f255ba8bd325ba05e0b254c100bb3728a3e9107c3.scope.
Oct  8 06:21:05 np0005475493 systemd[1]: Started libcrun container.
Oct  8 06:21:05 np0005475493 podman[283599]: 2025-10-08 10:21:05.032376112 +0000 UTC m=+0.028467540 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 06:21:05 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:21:05 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 06:21:05 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:21:05.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 06:21:05 np0005475493 podman[283599]: 2025-10-08 10:21:05.154672051 +0000 UTC m=+0.150763469 container init eeca578d42a035890efb3f4f255ba8bd325ba05e0b254c100bb3728a3e9107c3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_wilson, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  8 06:21:05 np0005475493 podman[283599]: 2025-10-08 10:21:05.161699767 +0000 UTC m=+0.157791165 container start eeca578d42a035890efb3f4f255ba8bd325ba05e0b254c100bb3728a3e9107c3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_wilson, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct  8 06:21:05 np0005475493 podman[283599]: 2025-10-08 10:21:05.165644115 +0000 UTC m=+0.161735513 container attach eeca578d42a035890efb3f4f255ba8bd325ba05e0b254c100bb3728a3e9107c3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_wilson, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  8 06:21:05 np0005475493 competent_wilson[283616]: 167 167
Oct  8 06:21:05 np0005475493 systemd[1]: libpod-eeca578d42a035890efb3f4f255ba8bd325ba05e0b254c100bb3728a3e9107c3.scope: Deactivated successfully.
Oct  8 06:21:05 np0005475493 conmon[283616]: conmon eeca578d42a035890efb <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-eeca578d42a035890efb3f4f255ba8bd325ba05e0b254c100bb3728a3e9107c3.scope/container/memory.events
Oct  8 06:21:05 np0005475493 podman[283599]: 2025-10-08 10:21:05.169378696 +0000 UTC m=+0.165470114 container died eeca578d42a035890efb3f4f255ba8bd325ba05e0b254c100bb3728a3e9107c3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_wilson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Oct  8 06:21:05 np0005475493 systemd[1]: var-lib-containers-storage-overlay-5867df47bc60c7a3eeef65aa44f38d516d1cea97906dd30fa2fe7c3f95dee430-merged.mount: Deactivated successfully.
Oct  8 06:21:05 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:21:05.195 163175 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=26869918-b723-425c-a2e1-0d697f3d0fec, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '14'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  8 06:21:05 np0005475493 podman[283599]: 2025-10-08 10:21:05.20360849 +0000 UTC m=+0.199699888 container remove eeca578d42a035890efb3f4f255ba8bd325ba05e0b254c100bb3728a3e9107c3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_wilson, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  8 06:21:05 np0005475493 systemd[1]: libpod-conmon-eeca578d42a035890efb3f4f255ba8bd325ba05e0b254c100bb3728a3e9107c3.scope: Deactivated successfully.
Oct  8 06:21:05 np0005475493 podman[283640]: 2025-10-08 10:21:05.380305836 +0000 UTC m=+0.057611251 container create 19e02c3df2032a8fad63005f0fd3120601df069c5d449e407051fa3f5df3292c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_bhabha, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct  8 06:21:05 np0005475493 systemd[1]: Started libpod-conmon-19e02c3df2032a8fad63005f0fd3120601df069c5d449e407051fa3f5df3292c.scope.
Oct  8 06:21:05 np0005475493 podman[283640]: 2025-10-08 10:21:05.353627845 +0000 UTC m=+0.030933330 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 06:21:05 np0005475493 systemd[1]: Started libcrun container.
Oct  8 06:21:05 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5fd00f409b760c558a096ba3488869e7bd83e691b9bfdc9b49a1815a59b54ec6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  8 06:21:05 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5fd00f409b760c558a096ba3488869e7bd83e691b9bfdc9b49a1815a59b54ec6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 06:21:05 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5fd00f409b760c558a096ba3488869e7bd83e691b9bfdc9b49a1815a59b54ec6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 06:21:05 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5fd00f409b760c558a096ba3488869e7bd83e691b9bfdc9b49a1815a59b54ec6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  8 06:21:05 np0005475493 podman[283640]: 2025-10-08 10:21:05.484239742 +0000 UTC m=+0.161545157 container init 19e02c3df2032a8fad63005f0fd3120601df069c5d449e407051fa3f5df3292c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_bhabha, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  8 06:21:05 np0005475493 podman[283640]: 2025-10-08 10:21:05.493196882 +0000 UTC m=+0.170502287 container start 19e02c3df2032a8fad63005f0fd3120601df069c5d449e407051fa3f5df3292c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_bhabha, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct  8 06:21:05 np0005475493 podman[283640]: 2025-10-08 10:21:05.496684525 +0000 UTC m=+0.173989950 container attach 19e02c3df2032a8fad63005f0fd3120601df069c5d449e407051fa3f5df3292c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_bhabha, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct  8 06:21:05 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:21:05] "GET /metrics HTTP/1.1" 200 48473 "" "Prometheus/2.51.0"
Oct  8 06:21:05 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:21:05] "GET /metrics HTTP/1.1" 200 48473 "" "Prometheus/2.51.0"
Oct  8 06:21:05 np0005475493 interesting_bhabha[283657]: {
Oct  8 06:21:05 np0005475493 interesting_bhabha[283657]:    "1": [
Oct  8 06:21:05 np0005475493 interesting_bhabha[283657]:        {
Oct  8 06:21:05 np0005475493 interesting_bhabha[283657]:            "devices": [
Oct  8 06:21:05 np0005475493 interesting_bhabha[283657]:                "/dev/loop3"
Oct  8 06:21:05 np0005475493 interesting_bhabha[283657]:            ],
Oct  8 06:21:05 np0005475493 interesting_bhabha[283657]:            "lv_name": "ceph_lv0",
Oct  8 06:21:05 np0005475493 interesting_bhabha[283657]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  8 06:21:05 np0005475493 interesting_bhabha[283657]:            "lv_size": "21470642176",
Oct  8 06:21:05 np0005475493 interesting_bhabha[283657]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=787292cc-8154-50c4-9e00-e9be3e817149,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=85fe3e7b-5e0f-4a19-934c-310215b2e933,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct  8 06:21:05 np0005475493 interesting_bhabha[283657]:            "lv_uuid": "znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj",
Oct  8 06:21:05 np0005475493 interesting_bhabha[283657]:            "name": "ceph_lv0",
Oct  8 06:21:05 np0005475493 interesting_bhabha[283657]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  8 06:21:05 np0005475493 interesting_bhabha[283657]:            "tags": {
Oct  8 06:21:05 np0005475493 interesting_bhabha[283657]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  8 06:21:05 np0005475493 interesting_bhabha[283657]:                "ceph.block_uuid": "znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj",
Oct  8 06:21:05 np0005475493 interesting_bhabha[283657]:                "ceph.cephx_lockbox_secret": "",
Oct  8 06:21:05 np0005475493 interesting_bhabha[283657]:                "ceph.cluster_fsid": "787292cc-8154-50c4-9e00-e9be3e817149",
Oct  8 06:21:05 np0005475493 interesting_bhabha[283657]:                "ceph.cluster_name": "ceph",
Oct  8 06:21:05 np0005475493 interesting_bhabha[283657]:                "ceph.crush_device_class": "",
Oct  8 06:21:05 np0005475493 interesting_bhabha[283657]:                "ceph.encrypted": "0",
Oct  8 06:21:05 np0005475493 interesting_bhabha[283657]:                "ceph.osd_fsid": "85fe3e7b-5e0f-4a19-934c-310215b2e933",
Oct  8 06:21:05 np0005475493 interesting_bhabha[283657]:                "ceph.osd_id": "1",
Oct  8 06:21:05 np0005475493 interesting_bhabha[283657]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  8 06:21:05 np0005475493 interesting_bhabha[283657]:                "ceph.type": "block",
Oct  8 06:21:05 np0005475493 interesting_bhabha[283657]:                "ceph.vdo": "0",
Oct  8 06:21:05 np0005475493 interesting_bhabha[283657]:                "ceph.with_tpm": "0"
Oct  8 06:21:05 np0005475493 interesting_bhabha[283657]:            },
Oct  8 06:21:05 np0005475493 interesting_bhabha[283657]:            "type": "block",
Oct  8 06:21:05 np0005475493 interesting_bhabha[283657]:            "vg_name": "ceph_vg0"
Oct  8 06:21:05 np0005475493 interesting_bhabha[283657]:        }
Oct  8 06:21:05 np0005475493 interesting_bhabha[283657]:    ]
Oct  8 06:21:05 np0005475493 interesting_bhabha[283657]: }
Oct  8 06:21:05 np0005475493 systemd[1]: libpod-19e02c3df2032a8fad63005f0fd3120601df069c5d449e407051fa3f5df3292c.scope: Deactivated successfully.
Oct  8 06:21:05 np0005475493 podman[283640]: 2025-10-08 10:21:05.808027158 +0000 UTC m=+0.485332563 container died 19e02c3df2032a8fad63005f0fd3120601df069c5d449e407051fa3f5df3292c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_bhabha, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct  8 06:21:05 np0005475493 systemd[1]: var-lib-containers-storage-overlay-5fd00f409b760c558a096ba3488869e7bd83e691b9bfdc9b49a1815a59b54ec6-merged.mount: Deactivated successfully.
Oct  8 06:21:05 np0005475493 podman[283640]: 2025-10-08 10:21:05.863332074 +0000 UTC m=+0.540637519 container remove 19e02c3df2032a8fad63005f0fd3120601df069c5d449e407051fa3f5df3292c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_bhabha, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Oct  8 06:21:05 np0005475493 systemd[1]: libpod-conmon-19e02c3df2032a8fad63005f0fd3120601df069c5d449e407051fa3f5df3292c.scope: Deactivated successfully.
Oct  8 06:21:06 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:21:06 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:21:06 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:21:06.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:21:06 np0005475493 podman[283773]: 2025-10-08 10:21:06.4472694 +0000 UTC m=+0.048988563 container create 7f77a41fdb9d517846dd70c9fa5ef203b9f4dbfcb9fbb5a8e3e05773f368cb02 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_hermann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  8 06:21:06 np0005475493 systemd[1]: Started libpod-conmon-7f77a41fdb9d517846dd70c9fa5ef203b9f4dbfcb9fbb5a8e3e05773f368cb02.scope.
Oct  8 06:21:06 np0005475493 systemd[1]: Started libcrun container.
Oct  8 06:21:06 np0005475493 podman[283773]: 2025-10-08 10:21:06.426547161 +0000 UTC m=+0.028266354 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 06:21:06 np0005475493 podman[283773]: 2025-10-08 10:21:06.522310462 +0000 UTC m=+0.124029615 container init 7f77a41fdb9d517846dd70c9fa5ef203b9f4dbfcb9fbb5a8e3e05773f368cb02 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_hermann, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Oct  8 06:21:06 np0005475493 podman[283773]: 2025-10-08 10:21:06.528859324 +0000 UTC m=+0.130578487 container start 7f77a41fdb9d517846dd70c9fa5ef203b9f4dbfcb9fbb5a8e3e05773f368cb02 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_hermann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  8 06:21:06 np0005475493 podman[283773]: 2025-10-08 10:21:06.532607646 +0000 UTC m=+0.134326829 container attach 7f77a41fdb9d517846dd70c9fa5ef203b9f4dbfcb9fbb5a8e3e05773f368cb02 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_hermann, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  8 06:21:06 np0005475493 dreamy_hermann[283790]: 167 167
Oct  8 06:21:06 np0005475493 systemd[1]: libpod-7f77a41fdb9d517846dd70c9fa5ef203b9f4dbfcb9fbb5a8e3e05773f368cb02.scope: Deactivated successfully.
Oct  8 06:21:06 np0005475493 podman[283773]: 2025-10-08 10:21:06.53401091 +0000 UTC m=+0.135730063 container died 7f77a41fdb9d517846dd70c9fa5ef203b9f4dbfcb9fbb5a8e3e05773f368cb02 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_hermann, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Oct  8 06:21:06 np0005475493 systemd[1]: var-lib-containers-storage-overlay-7dafc3cc8d41f2cfe9db45b0847c94394823fd6beb55eef00a0a26fe20186932-merged.mount: Deactivated successfully.
Oct  8 06:21:06 np0005475493 podman[283773]: 2025-10-08 10:21:06.569993622 +0000 UTC m=+0.171712775 container remove 7f77a41fdb9d517846dd70c9fa5ef203b9f4dbfcb9fbb5a8e3e05773f368cb02 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_hermann, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  8 06:21:06 np0005475493 systemd[1]: libpod-conmon-7f77a41fdb9d517846dd70c9fa5ef203b9f4dbfcb9fbb5a8e3e05773f368cb02.scope: Deactivated successfully.
Oct  8 06:21:06 np0005475493 podman[283813]: 2025-10-08 10:21:06.755067719 +0000 UTC m=+0.049290743 container create 66a90f0e25d32cb1eeec06992ba93c38779d11432fc80a1cbe7ec295b8f2df76 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_lamarr, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  8 06:21:06 np0005475493 systemd[1]: Started libpod-conmon-66a90f0e25d32cb1eeec06992ba93c38779d11432fc80a1cbe7ec295b8f2df76.scope.
Oct  8 06:21:06 np0005475493 podman[283813]: 2025-10-08 10:21:06.736596052 +0000 UTC m=+0.030819096 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 06:21:06 np0005475493 systemd[1]: Started libcrun container.
Oct  8 06:21:06 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c279a2d3aa1a6345b1bf8153ea09a05564686780077d829b485f2446c26aad65/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  8 06:21:06 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c279a2d3aa1a6345b1bf8153ea09a05564686780077d829b485f2446c26aad65/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 06:21:06 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c279a2d3aa1a6345b1bf8153ea09a05564686780077d829b485f2446c26aad65/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 06:21:06 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c279a2d3aa1a6345b1bf8153ea09a05564686780077d829b485f2446c26aad65/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  8 06:21:06 np0005475493 podman[283813]: 2025-10-08 10:21:06.867346464 +0000 UTC m=+0.161569558 container init 66a90f0e25d32cb1eeec06992ba93c38779d11432fc80a1cbe7ec295b8f2df76 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_lamarr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Oct  8 06:21:06 np0005475493 podman[283813]: 2025-10-08 10:21:06.873691309 +0000 UTC m=+0.167914323 container start 66a90f0e25d32cb1eeec06992ba93c38779d11432fc80a1cbe7ec295b8f2df76 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_lamarr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Oct  8 06:21:06 np0005475493 podman[283813]: 2025-10-08 10:21:06.877205723 +0000 UTC m=+0.171428827 container attach 66a90f0e25d32cb1eeec06992ba93c38779d11432fc80a1cbe7ec295b8f2df76 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_lamarr, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Oct  8 06:21:06 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1096: 353 pgs: 353 active+clean; 88 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 13 KiB/s wr, 77 op/s
Oct  8 06:21:07 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:21:07 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:21:07 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:21:07.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:21:07 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:21:07.195Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct  8 06:21:07 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:21:07.195Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct  8 06:21:07 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:21:07.197Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct  8 06:21:07 np0005475493 lvm[283905]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct  8 06:21:07 np0005475493 lvm[283905]: VG ceph_vg0 finished
Oct  8 06:21:07 np0005475493 charming_lamarr[283830]: {}
Oct  8 06:21:07 np0005475493 systemd[1]: libpod-66a90f0e25d32cb1eeec06992ba93c38779d11432fc80a1cbe7ec295b8f2df76.scope: Deactivated successfully.
Oct  8 06:21:07 np0005475493 podman[283813]: 2025-10-08 10:21:07.657454068 +0000 UTC m=+0.951677082 container died 66a90f0e25d32cb1eeec06992ba93c38779d11432fc80a1cbe7ec295b8f2df76 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_lamarr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct  8 06:21:07 np0005475493 systemd[1]: libpod-66a90f0e25d32cb1eeec06992ba93c38779d11432fc80a1cbe7ec295b8f2df76.scope: Consumed 1.114s CPU time.
Oct  8 06:21:07 np0005475493 systemd[1]: var-lib-containers-storage-overlay-c279a2d3aa1a6345b1bf8153ea09a05564686780077d829b485f2446c26aad65-merged.mount: Deactivated successfully.
Oct  8 06:21:07 np0005475493 podman[283813]: 2025-10-08 10:21:07.70613871 +0000 UTC m=+1.000361724 container remove 66a90f0e25d32cb1eeec06992ba93c38779d11432fc80a1cbe7ec295b8f2df76 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_lamarr, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct  8 06:21:07 np0005475493 systemd[1]: libpod-conmon-66a90f0e25d32cb1eeec06992ba93c38779d11432fc80a1cbe7ec295b8f2df76.scope: Deactivated successfully.
Oct  8 06:21:07 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  8 06:21:07 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:21:07 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  8 06:21:07 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:21:07 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:21:07 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:21:08 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:21:08 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:21:08 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:21:08.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:21:08 np0005475493 nova_compute[262220]: 2025-10-08 10:21:08.529 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:21:08 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:21:08.851Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct  8 06:21:08 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:21:08.852Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:21:08 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1097: 353 pgs: 353 active+clean; 109 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 121 op/s
Oct  8 06:21:09 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:21:08 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  8 06:21:09 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:21:08 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  8 06:21:09 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:21:08 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 06:21:09 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:21:09 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  8 06:21:09 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:21:09 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:21:09 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:21:09.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:21:09 np0005475493 ovn_controller[153187]: 2025-10-08T10:21:09Z|00012|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:5d:e9:f4 10.100.0.6
Oct  8 06:21:09 np0005475493 ovn_controller[153187]: 2025-10-08T10:21:09Z|00013|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:5d:e9:f4 10.100.0.6
Oct  8 06:21:09 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:21:09 np0005475493 nova_compute[262220]: 2025-10-08 10:21:09.846 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:21:10 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:21:10 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:21:10 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:21:10.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:21:10 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1098: 353 pgs: 353 active+clean; 109 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 524 KiB/s rd, 2.1 MiB/s wr, 53 op/s
Oct  8 06:21:11 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:21:11 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:21:11 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:21:11.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:21:12 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:21:12 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:21:12 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:21:12.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:21:12 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1099: 353 pgs: 353 active+clean; 109 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 524 KiB/s rd, 2.1 MiB/s wr, 53 op/s
Oct  8 06:21:12 np0005475493 podman[283974]: 2025-10-08 10:21:12.934991874 +0000 UTC m=+0.086415361 container health_status 2ac58bf9d2c7421f24478941b0f23af12cc30298ac84467db16bf52ecb1157c3 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251001)
Oct  8 06:21:13 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:21:13 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:21:13 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:21:13.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:21:13 np0005475493 nova_compute[262220]: 2025-10-08 10:21:13.532 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:21:14 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:21:13 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  8 06:21:14 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:21:13 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  8 06:21:14 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:21:13 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 06:21:14 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:21:14 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  8 06:21:14 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:21:14 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:21:14 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:21:14.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:21:14 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:21:14 np0005475493 nova_compute[262220]: 2025-10-08 10:21:14.848 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:21:14 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1100: 353 pgs: 353 active+clean; 121 MiB data, 314 MiB used, 60 GiB / 60 GiB avail; 605 KiB/s rd, 2.1 MiB/s wr, 73 op/s
Oct  8 06:21:15 np0005475493 nova_compute[262220]: 2025-10-08 10:21:15.045 2 INFO nova.compute.manager [None req-b7e65dd7-0c37-4fb1-a4e5-af46f3e28783 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: 20ffb86b-b5ba-4818-82e4-14a755c48807] Get console output#033[00m
Oct  8 06:21:15 np0005475493 nova_compute[262220]: 2025-10-08 10:21:15.050 631 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Oct  8 06:21:15 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:21:15 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:21:15 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:21:15.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:21:15 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:21:15] "GET /metrics HTTP/1.1" 200 48473 "" "Prometheus/2.51.0"
Oct  8 06:21:15 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:21:15] "GET /metrics HTTP/1.1" 200 48473 "" "Prometheus/2.51.0"
Oct  8 06:21:15 np0005475493 ovn_controller[153187]: 2025-10-08T10:21:15Z|00075|binding|INFO|Releasing lport aead10e1-bf7c-4d43-bf9e-517a64e3ea62 from this chassis (sb_readonly=0)
Oct  8 06:21:15 np0005475493 nova_compute[262220]: 2025-10-08 10:21:15.820 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:21:15 np0005475493 ovn_controller[153187]: 2025-10-08T10:21:15Z|00076|binding|INFO|Releasing lport aead10e1-bf7c-4d43-bf9e-517a64e3ea62 from this chassis (sb_readonly=0)
Oct  8 06:21:15 np0005475493 nova_compute[262220]: 2025-10-08 10:21:15.883 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:21:16 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:21:16 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:21:16 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:21:16.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:21:16 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1101: 353 pgs: 353 active+clean; 121 MiB data, 314 MiB used, 60 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct  8 06:21:17 np0005475493 nova_compute[262220]: 2025-10-08 10:21:17.091 2 INFO nova.compute.manager [None req-fd9c196e-c764-4c81-9d2a-89372caff073 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: 20ffb86b-b5ba-4818-82e4-14a755c48807] Get console output#033[00m
Oct  8 06:21:17 np0005475493 nova_compute[262220]: 2025-10-08 10:21:17.099 631 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Oct  8 06:21:17 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:21:17 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:21:17 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:21:17.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:21:17 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:21:17.198Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:21:17 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  8 06:21:17 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  8 06:21:17 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 06:21:17 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 06:21:17 np0005475493 nova_compute[262220]: 2025-10-08 10:21:17.974 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:21:17 np0005475493 NetworkManager[44872]: <info>  [1759918877.9771] manager: (patch-provnet-5eda3e1f-dd4f-4c7d-b5fb-d8c0d9996d6d-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/55)
Oct  8 06:21:17 np0005475493 NetworkManager[44872]: <info>  [1759918877.9789] manager: (patch-br-int-to-provnet-5eda3e1f-dd4f-4c7d-b5fb-d8c0d9996d6d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/56)
Oct  8 06:21:18 np0005475493 nova_compute[262220]: 2025-10-08 10:21:18.032 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:21:18 np0005475493 ovn_controller[153187]: 2025-10-08T10:21:18Z|00077|binding|INFO|Releasing lport aead10e1-bf7c-4d43-bf9e-517a64e3ea62 from this chassis (sb_readonly=0)
Oct  8 06:21:18 np0005475493 ovn_controller[153187]: 2025-10-08T10:21:18Z|00078|binding|INFO|Releasing lport aead10e1-bf7c-4d43-bf9e-517a64e3ea62 from this chassis (sb_readonly=0)
Oct  8 06:21:18 np0005475493 nova_compute[262220]: 2025-10-08 10:21:18.040 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:21:18 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 06:21:18 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 06:21:18 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 06:21:18 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 06:21:18 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:21:18 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:21:18 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:21:18.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:21:18 np0005475493 nova_compute[262220]: 2025-10-08 10:21:18.238 2 INFO nova.compute.manager [None req-77a15bed-ce4b-4893-be27-147d1f7ae8fd d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: 20ffb86b-b5ba-4818-82e4-14a755c48807] Get console output#033[00m
Oct  8 06:21:18 np0005475493 nova_compute[262220]: 2025-10-08 10:21:18.244 631 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Oct  8 06:21:18 np0005475493 nova_compute[262220]: 2025-10-08 10:21:18.575 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:21:18 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:21:18.852Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:21:18 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1102: 353 pgs: 353 active+clean; 121 MiB data, 314 MiB used, 60 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Oct  8 06:21:19 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:21:18 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  8 06:21:19 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:21:18 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  8 06:21:19 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:21:18 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 06:21:19 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:21:19 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  8 06:21:19 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:21:19 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:21:19 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:21:19.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:21:19 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:21:19 np0005475493 nova_compute[262220]: 2025-10-08 10:21:19.844 2 DEBUG nova.compute.manager [req-82538725-4911-4429-a23d-4de6ec61ad28 req-1affc0b0-140a-41a5-aa07-eeb5cace1008 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: 20ffb86b-b5ba-4818-82e4-14a755c48807] Received event network-changed-754d5578-d995-4502-af66-b164dfdf1189 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  8 06:21:19 np0005475493 nova_compute[262220]: 2025-10-08 10:21:19.845 2 DEBUG nova.compute.manager [req-82538725-4911-4429-a23d-4de6ec61ad28 req-1affc0b0-140a-41a5-aa07-eeb5cace1008 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: 20ffb86b-b5ba-4818-82e4-14a755c48807] Refreshing instance network info cache due to event network-changed-754d5578-d995-4502-af66-b164dfdf1189. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  8 06:21:19 np0005475493 nova_compute[262220]: 2025-10-08 10:21:19.845 2 DEBUG oslo_concurrency.lockutils [req-82538725-4911-4429-a23d-4de6ec61ad28 req-1affc0b0-140a-41a5-aa07-eeb5cace1008 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Acquiring lock "refresh_cache-20ffb86b-b5ba-4818-82e4-14a755c48807" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  8 06:21:19 np0005475493 nova_compute[262220]: 2025-10-08 10:21:19.845 2 DEBUG oslo_concurrency.lockutils [req-82538725-4911-4429-a23d-4de6ec61ad28 req-1affc0b0-140a-41a5-aa07-eeb5cace1008 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Acquired lock "refresh_cache-20ffb86b-b5ba-4818-82e4-14a755c48807" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  8 06:21:19 np0005475493 nova_compute[262220]: 2025-10-08 10:21:19.845 2 DEBUG nova.network.neutron [req-82538725-4911-4429-a23d-4de6ec61ad28 req-1affc0b0-140a-41a5-aa07-eeb5cace1008 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: 20ffb86b-b5ba-4818-82e4-14a755c48807] Refreshing network info cache for port 754d5578-d995-4502-af66-b164dfdf1189 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  8 06:21:19 np0005475493 nova_compute[262220]: 2025-10-08 10:21:19.851 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:21:20 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:21:20 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 06:21:20 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:21:20.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 06:21:20 np0005475493 nova_compute[262220]: 2025-10-08 10:21:20.484 2 DEBUG oslo_concurrency.lockutils [None req-4fca9653-4a5c-48cd-850c-bb69fac83642 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Acquiring lock "20ffb86b-b5ba-4818-82e4-14a755c48807" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  8 06:21:20 np0005475493 nova_compute[262220]: 2025-10-08 10:21:20.484 2 DEBUG oslo_concurrency.lockutils [None req-4fca9653-4a5c-48cd-850c-bb69fac83642 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Lock "20ffb86b-b5ba-4818-82e4-14a755c48807" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  8 06:21:20 np0005475493 nova_compute[262220]: 2025-10-08 10:21:20.485 2 DEBUG oslo_concurrency.lockutils [None req-4fca9653-4a5c-48cd-850c-bb69fac83642 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Acquiring lock "20ffb86b-b5ba-4818-82e4-14a755c48807-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  8 06:21:20 np0005475493 nova_compute[262220]: 2025-10-08 10:21:20.485 2 DEBUG oslo_concurrency.lockutils [None req-4fca9653-4a5c-48cd-850c-bb69fac83642 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Lock "20ffb86b-b5ba-4818-82e4-14a755c48807-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  8 06:21:20 np0005475493 nova_compute[262220]: 2025-10-08 10:21:20.486 2 DEBUG oslo_concurrency.lockutils [None req-4fca9653-4a5c-48cd-850c-bb69fac83642 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Lock "20ffb86b-b5ba-4818-82e4-14a755c48807-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  8 06:21:20 np0005475493 nova_compute[262220]: 2025-10-08 10:21:20.488 2 INFO nova.compute.manager [None req-4fca9653-4a5c-48cd-850c-bb69fac83642 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: 20ffb86b-b5ba-4818-82e4-14a755c48807] Terminating instance#033[00m
Oct  8 06:21:20 np0005475493 nova_compute[262220]: 2025-10-08 10:21:20.490 2 DEBUG nova.compute.manager [None req-4fca9653-4a5c-48cd-850c-bb69fac83642 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: 20ffb86b-b5ba-4818-82e4-14a755c48807] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  8 06:21:20 np0005475493 kernel: tap754d5578-d9 (unregistering): left promiscuous mode
Oct  8 06:21:20 np0005475493 NetworkManager[44872]: <info>  [1759918880.5524] device (tap754d5578-d9): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  8 06:21:20 np0005475493 ovn_controller[153187]: 2025-10-08T10:21:20Z|00079|binding|INFO|Releasing lport 754d5578-d995-4502-af66-b164dfdf1189 from this chassis (sb_readonly=0)
Oct  8 06:21:20 np0005475493 ovn_controller[153187]: 2025-10-08T10:21:20Z|00080|binding|INFO|Setting lport 754d5578-d995-4502-af66-b164dfdf1189 down in Southbound
Oct  8 06:21:20 np0005475493 ovn_controller[153187]: 2025-10-08T10:21:20Z|00081|binding|INFO|Removing iface tap754d5578-d9 ovn-installed in OVS
Oct  8 06:21:20 np0005475493 nova_compute[262220]: 2025-10-08 10:21:20.568 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:21:20 np0005475493 nova_compute[262220]: 2025-10-08 10:21:20.569 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:21:20 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:21:20.581 163175 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5d:e9:f4 10.100.0.6'], port_security=['fa:16:3e:5d:e9:f4 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '20ffb86b-b5ba-4818-82e4-14a755c48807', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-84428682-9eff-4658-a105-8c0d1de9c87f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0edb2c85b88b4b168a4b2e8c5ed4a05c', 'neutron:revision_number': '4', 'neutron:security_group_ids': '16d57876-2c07-4569-9200-1b8e93dece9c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f9ad9bb7-7c7b-464c-bbd0-86ab756be37d, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f191f105a60>], logical_port=754d5578-d995-4502-af66-b164dfdf1189) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f191f105a60>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  8 06:21:20 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:21:20.590 163175 INFO neutron.agent.ovn.metadata.agent [-] Port 754d5578-d995-4502-af66-b164dfdf1189 in datapath 84428682-9eff-4658-a105-8c0d1de9c87f unbound from our chassis#033[00m
Oct  8 06:21:20 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:21:20.592 163175 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 84428682-9eff-4658-a105-8c0d1de9c87f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  8 06:21:20 np0005475493 nova_compute[262220]: 2025-10-08 10:21:20.594 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:21:20 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:21:20.593 267781 DEBUG oslo.privsep.daemon [-] privsep: reply[dc83fafa-39ce-4111-9187-1cf4ff2f1949]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  8 06:21:20 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:21:20.595 163175 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-84428682-9eff-4658-a105-8c0d1de9c87f namespace which is not needed anymore#033[00m
Oct  8 06:21:20 np0005475493 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d0000000d.scope: Deactivated successfully.
Oct  8 06:21:20 np0005475493 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d0000000d.scope: Consumed 12.962s CPU time.
Oct  8 06:21:20 np0005475493 systemd-machined[216030]: Machine qemu-4-instance-0000000d terminated.
Oct  8 06:21:20 np0005475493 neutron-haproxy-ovnmeta-84428682-9eff-4658-a105-8c0d1de9c87f[283195]: [NOTICE]   (283199) : haproxy version is 2.8.14-c23fe91
Oct  8 06:21:20 np0005475493 neutron-haproxy-ovnmeta-84428682-9eff-4658-a105-8c0d1de9c87f[283195]: [NOTICE]   (283199) : path to executable is /usr/sbin/haproxy
Oct  8 06:21:20 np0005475493 neutron-haproxy-ovnmeta-84428682-9eff-4658-a105-8c0d1de9c87f[283195]: [WARNING]  (283199) : Exiting Master process...
Oct  8 06:21:20 np0005475493 neutron-haproxy-ovnmeta-84428682-9eff-4658-a105-8c0d1de9c87f[283195]: [ALERT]    (283199) : Current worker (283201) exited with code 143 (Terminated)
Oct  8 06:21:20 np0005475493 neutron-haproxy-ovnmeta-84428682-9eff-4658-a105-8c0d1de9c87f[283195]: [WARNING]  (283199) : All workers exited. Exiting... (0)
Oct  8 06:21:20 np0005475493 systemd[1]: libpod-a50a99a7b908636d1a3476dbe2c028af766a990ddb805618b5dbf08be82bf4f8.scope: Deactivated successfully.
Oct  8 06:21:20 np0005475493 nova_compute[262220]: 2025-10-08 10:21:20.727 2 INFO nova.virt.libvirt.driver [-] [instance: 20ffb86b-b5ba-4818-82e4-14a755c48807] Instance destroyed successfully.#033[00m
Oct  8 06:21:20 np0005475493 nova_compute[262220]: 2025-10-08 10:21:20.728 2 DEBUG nova.objects.instance [None req-4fca9653-4a5c-48cd-850c-bb69fac83642 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Lazy-loading 'resources' on Instance uuid 20ffb86b-b5ba-4818-82e4-14a755c48807 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  8 06:21:20 np0005475493 podman[284028]: 2025-10-08 10:21:20.729860975 +0000 UTC m=+0.052523067 container died a50a99a7b908636d1a3476dbe2c028af766a990ddb805618b5dbf08be82bf4f8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-84428682-9eff-4658-a105-8c0d1de9c87f, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct  8 06:21:20 np0005475493 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a50a99a7b908636d1a3476dbe2c028af766a990ddb805618b5dbf08be82bf4f8-userdata-shm.mount: Deactivated successfully.
Oct  8 06:21:20 np0005475493 systemd[1]: var-lib-containers-storage-overlay-9305ceef478b92232a6159096bf3391d562b676524b5a981565d66364f354e43-merged.mount: Deactivated successfully.
Oct  8 06:21:20 np0005475493 podman[284028]: 2025-10-08 10:21:20.772418879 +0000 UTC m=+0.095080971 container cleanup a50a99a7b908636d1a3476dbe2c028af766a990ddb805618b5dbf08be82bf4f8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-84428682-9eff-4658-a105-8c0d1de9c87f, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  8 06:21:20 np0005475493 systemd[1]: libpod-conmon-a50a99a7b908636d1a3476dbe2c028af766a990ddb805618b5dbf08be82bf4f8.scope: Deactivated successfully.
Oct  8 06:21:20 np0005475493 nova_compute[262220]: 2025-10-08 10:21:20.795 2 DEBUG nova.virt.libvirt.vif [None req-4fca9653-4a5c-48cd-850c-bb69fac83642 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-08T10:20:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-451384508',display_name='tempest-TestNetworkBasicOps-server-451384508',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-451384508',id=13,image_ref='e5994bac-385d-4cfe-962e-386aa0559983',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKzAAbR+LebFHZ4MQpbXVINvQrQE4iZi3jhjlRa4bUuBuh7BAgqwE3gXNZho6NGF97w7AAO52PK7tmiXY23liBZwBI0PDfy6ztl7vXddFfJ7MBnkOiMny5dlb5dxWiMeog==',key_name='tempest-TestNetworkBasicOps-1706390229',keypairs=<?>,launch_index=0,launched_at=2025-10-08T10:20:57Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='0edb2c85b88b4b168a4b2e8c5ed4a05c',ramdisk_id='',reservation_id='r-mat2tuft',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e5994bac-385d-4cfe-962e-386aa0559983',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-139500885',owner_user_name='tempest-TestNetworkBasicOps-139500885-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-08T10:20:57Z,user_data=None,user_id='d50b19166a7245e390a6e29682191263',uuid=20ffb86b-b5ba-4818-82e4-14a755c48807,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "754d5578-d995-4502-af66-b164dfdf1189", "address": "fa:16:3e:5d:e9:f4", "network": {"id": "84428682-9eff-4658-a105-8c0d1de9c87f", "bridge": "br-int", "label": "tempest-network-smoke--1945963860", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0edb2c85b88b4b168a4b2e8c5ed4a05c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap754d5578-d9", "ovs_interfaceid": "754d5578-d995-4502-af66-b164dfdf1189", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  8 06:21:20 np0005475493 nova_compute[262220]: 2025-10-08 10:21:20.796 2 DEBUG nova.network.os_vif_util [None req-4fca9653-4a5c-48cd-850c-bb69fac83642 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Converting VIF {"id": "754d5578-d995-4502-af66-b164dfdf1189", "address": "fa:16:3e:5d:e9:f4", "network": {"id": "84428682-9eff-4658-a105-8c0d1de9c87f", "bridge": "br-int", "label": "tempest-network-smoke--1945963860", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0edb2c85b88b4b168a4b2e8c5ed4a05c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap754d5578-d9", "ovs_interfaceid": "754d5578-d995-4502-af66-b164dfdf1189", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  8 06:21:20 np0005475493 nova_compute[262220]: 2025-10-08 10:21:20.797 2 DEBUG nova.network.os_vif_util [None req-4fca9653-4a5c-48cd-850c-bb69fac83642 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:5d:e9:f4,bridge_name='br-int',has_traffic_filtering=True,id=754d5578-d995-4502-af66-b164dfdf1189,network=Network(84428682-9eff-4658-a105-8c0d1de9c87f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap754d5578-d9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  8 06:21:20 np0005475493 nova_compute[262220]: 2025-10-08 10:21:20.797 2 DEBUG os_vif [None req-4fca9653-4a5c-48cd-850c-bb69fac83642 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:5d:e9:f4,bridge_name='br-int',has_traffic_filtering=True,id=754d5578-d995-4502-af66-b164dfdf1189,network=Network(84428682-9eff-4658-a105-8c0d1de9c87f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap754d5578-d9') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  8 06:21:20 np0005475493 nova_compute[262220]: 2025-10-08 10:21:20.799 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:21:20 np0005475493 nova_compute[262220]: 2025-10-08 10:21:20.799 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap754d5578-d9, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  8 06:21:20 np0005475493 nova_compute[262220]: 2025-10-08 10:21:20.835 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:21:20 np0005475493 nova_compute[262220]: 2025-10-08 10:21:20.838 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  8 06:21:20 np0005475493 nova_compute[262220]: 2025-10-08 10:21:20.841 2 INFO os_vif [None req-4fca9653-4a5c-48cd-850c-bb69fac83642 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:5d:e9:f4,bridge_name='br-int',has_traffic_filtering=True,id=754d5578-d995-4502-af66-b164dfdf1189,network=Network(84428682-9eff-4658-a105-8c0d1de9c87f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap754d5578-d9')#033[00m
Oct  8 06:21:20 np0005475493 podman[284067]: 2025-10-08 10:21:20.871723536 +0000 UTC m=+0.076696527 container remove a50a99a7b908636d1a3476dbe2c028af766a990ddb805618b5dbf08be82bf4f8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-84428682-9eff-4658-a105-8c0d1de9c87f, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct  8 06:21:20 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:21:20.878 267781 DEBUG oslo.privsep.daemon [-] privsep: reply[2985efaa-3322-4f6e-927f-4b9206a4ac1f]: (4, ('Wed Oct  8 10:21:20 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-84428682-9eff-4658-a105-8c0d1de9c87f (a50a99a7b908636d1a3476dbe2c028af766a990ddb805618b5dbf08be82bf4f8)\na50a99a7b908636d1a3476dbe2c028af766a990ddb805618b5dbf08be82bf4f8\nWed Oct  8 10:21:20 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-84428682-9eff-4658-a105-8c0d1de9c87f (a50a99a7b908636d1a3476dbe2c028af766a990ddb805618b5dbf08be82bf4f8)\na50a99a7b908636d1a3476dbe2c028af766a990ddb805618b5dbf08be82bf4f8\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  8 06:21:20 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:21:20.880 267781 DEBUG oslo.privsep.daemon [-] privsep: reply[949a224b-f569-4a98-a0ce-46811f5c817a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  8 06:21:20 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:21:20.881 163175 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap84428682-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  8 06:21:20 np0005475493 nova_compute[262220]: 2025-10-08 10:21:20.882 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:21:20 np0005475493 kernel: tap84428682-90: left promiscuous mode
Oct  8 06:21:20 np0005475493 nova_compute[262220]: 2025-10-08 10:21:20.903 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:21:20 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:21:20.907 267781 DEBUG oslo.privsep.daemon [-] privsep: reply[823d980a-a297-4a29-9408-ebcc6a29f35a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  8 06:21:20 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1103: 353 pgs: 353 active+clean; 121 MiB data, 314 MiB used, 60 GiB / 60 GiB avail; 99 KiB/s rd, 108 KiB/s wr, 22 op/s
Oct  8 06:21:20 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:21:20.941 267781 DEBUG oslo.privsep.daemon [-] privsep: reply[015ff961-673d-4d62-8ca2-a67a867775da]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  8 06:21:20 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:21:20.943 267781 DEBUG oslo.privsep.daemon [-] privsep: reply[2c7a51f7-6166-4e22-b84f-94c0f983a8e8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  8 06:21:20 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:21:20.971 267781 DEBUG oslo.privsep.daemon [-] privsep: reply[8485fdde-aec7-4c74-8c40-45d6d8d4bdf5]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 481501, 'reachable_time': 38049, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 284100, 'error': None, 'target': 'ovnmeta-84428682-9eff-4658-a105-8c0d1de9c87f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  8 06:21:20 np0005475493 systemd[1]: run-netns-ovnmeta\x2d84428682\x2d9eff\x2d4658\x2da105\x2d8c0d1de9c87f.mount: Deactivated successfully.
Oct  8 06:21:20 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:21:20.979 163290 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-84428682-9eff-4658-a105-8c0d1de9c87f deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  8 06:21:20 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:21:20.979 163290 DEBUG oslo.privsep.daemon [-] privsep: reply[29c487aa-7264-43c5-aec3-f38386bd890a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  8 06:21:21 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:21:21 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:21:21 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:21:21.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:21:21 np0005475493 nova_compute[262220]: 2025-10-08 10:21:21.340 2 INFO nova.virt.libvirt.driver [None req-4fca9653-4a5c-48cd-850c-bb69fac83642 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: 20ffb86b-b5ba-4818-82e4-14a755c48807] Deleting instance files /var/lib/nova/instances/20ffb86b-b5ba-4818-82e4-14a755c48807_del#033[00m
Oct  8 06:21:21 np0005475493 nova_compute[262220]: 2025-10-08 10:21:21.341 2 INFO nova.virt.libvirt.driver [None req-4fca9653-4a5c-48cd-850c-bb69fac83642 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: 20ffb86b-b5ba-4818-82e4-14a755c48807] Deletion of /var/lib/nova/instances/20ffb86b-b5ba-4818-82e4-14a755c48807_del complete#033[00m
Oct  8 06:21:21 np0005475493 nova_compute[262220]: 2025-10-08 10:21:21.514 2 INFO nova.compute.manager [None req-4fca9653-4a5c-48cd-850c-bb69fac83642 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] [instance: 20ffb86b-b5ba-4818-82e4-14a755c48807] Took 1.02 seconds to destroy the instance on the hypervisor.#033[00m
Oct  8 06:21:21 np0005475493 nova_compute[262220]: 2025-10-08 10:21:21.515 2 DEBUG oslo.service.loopingcall [None req-4fca9653-4a5c-48cd-850c-bb69fac83642 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  8 06:21:21 np0005475493 nova_compute[262220]: 2025-10-08 10:21:21.515 2 DEBUG nova.compute.manager [-] [instance: 20ffb86b-b5ba-4818-82e4-14a755c48807] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  8 06:21:21 np0005475493 nova_compute[262220]: 2025-10-08 10:21:21.516 2 DEBUG nova.network.neutron [-] [instance: 20ffb86b-b5ba-4818-82e4-14a755c48807] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  8 06:21:21 np0005475493 nova_compute[262220]: 2025-10-08 10:21:21.675 2 DEBUG nova.network.neutron [req-82538725-4911-4429-a23d-4de6ec61ad28 req-1affc0b0-140a-41a5-aa07-eeb5cace1008 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: 20ffb86b-b5ba-4818-82e4-14a755c48807] Updated VIF entry in instance network info cache for port 754d5578-d995-4502-af66-b164dfdf1189. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  8 06:21:21 np0005475493 nova_compute[262220]: 2025-10-08 10:21:21.676 2 DEBUG nova.network.neutron [req-82538725-4911-4429-a23d-4de6ec61ad28 req-1affc0b0-140a-41a5-aa07-eeb5cace1008 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: 20ffb86b-b5ba-4818-82e4-14a755c48807] Updating instance_info_cache with network_info: [{"id": "754d5578-d995-4502-af66-b164dfdf1189", "address": "fa:16:3e:5d:e9:f4", "network": {"id": "84428682-9eff-4658-a105-8c0d1de9c87f", "bridge": "br-int", "label": "tempest-network-smoke--1945963860", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0edb2c85b88b4b168a4b2e8c5ed4a05c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap754d5578-d9", "ovs_interfaceid": "754d5578-d995-4502-af66-b164dfdf1189", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  8 06:21:21 np0005475493 nova_compute[262220]: 2025-10-08 10:21:21.708 2 DEBUG oslo_concurrency.lockutils [req-82538725-4911-4429-a23d-4de6ec61ad28 req-1affc0b0-140a-41a5-aa07-eeb5cace1008 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Releasing lock "refresh_cache-20ffb86b-b5ba-4818-82e4-14a755c48807" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  8 06:21:21 np0005475493 nova_compute[262220]: 2025-10-08 10:21:21.964 2 DEBUG nova.compute.manager [req-2754b5d5-13ee-405a-a32d-8e5f05b0f719 req-6ca85ba5-355a-4e33-b765-b94479442c16 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: 20ffb86b-b5ba-4818-82e4-14a755c48807] Received event network-vif-unplugged-754d5578-d995-4502-af66-b164dfdf1189 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  8 06:21:21 np0005475493 nova_compute[262220]: 2025-10-08 10:21:21.965 2 DEBUG oslo_concurrency.lockutils [req-2754b5d5-13ee-405a-a32d-8e5f05b0f719 req-6ca85ba5-355a-4e33-b765-b94479442c16 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Acquiring lock "20ffb86b-b5ba-4818-82e4-14a755c48807-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  8 06:21:21 np0005475493 nova_compute[262220]: 2025-10-08 10:21:21.965 2 DEBUG oslo_concurrency.lockutils [req-2754b5d5-13ee-405a-a32d-8e5f05b0f719 req-6ca85ba5-355a-4e33-b765-b94479442c16 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Lock "20ffb86b-b5ba-4818-82e4-14a755c48807-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  8 06:21:21 np0005475493 nova_compute[262220]: 2025-10-08 10:21:21.966 2 DEBUG oslo_concurrency.lockutils [req-2754b5d5-13ee-405a-a32d-8e5f05b0f719 req-6ca85ba5-355a-4e33-b765-b94479442c16 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Lock "20ffb86b-b5ba-4818-82e4-14a755c48807-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  8 06:21:21 np0005475493 nova_compute[262220]: 2025-10-08 10:21:21.966 2 DEBUG nova.compute.manager [req-2754b5d5-13ee-405a-a32d-8e5f05b0f719 req-6ca85ba5-355a-4e33-b765-b94479442c16 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: 20ffb86b-b5ba-4818-82e4-14a755c48807] No waiting events found dispatching network-vif-unplugged-754d5578-d995-4502-af66-b164dfdf1189 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  8 06:21:21 np0005475493 nova_compute[262220]: 2025-10-08 10:21:21.967 2 DEBUG nova.compute.manager [req-2754b5d5-13ee-405a-a32d-8e5f05b0f719 req-6ca85ba5-355a-4e33-b765-b94479442c16 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: 20ffb86b-b5ba-4818-82e4-14a755c48807] Received event network-vif-unplugged-754d5578-d995-4502-af66-b164dfdf1189 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  8 06:21:21 np0005475493 nova_compute[262220]: 2025-10-08 10:21:21.967 2 DEBUG nova.compute.manager [req-2754b5d5-13ee-405a-a32d-8e5f05b0f719 req-6ca85ba5-355a-4e33-b765-b94479442c16 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: 20ffb86b-b5ba-4818-82e4-14a755c48807] Received event network-vif-plugged-754d5578-d995-4502-af66-b164dfdf1189 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  8 06:21:21 np0005475493 nova_compute[262220]: 2025-10-08 10:21:21.968 2 DEBUG oslo_concurrency.lockutils [req-2754b5d5-13ee-405a-a32d-8e5f05b0f719 req-6ca85ba5-355a-4e33-b765-b94479442c16 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Acquiring lock "20ffb86b-b5ba-4818-82e4-14a755c48807-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  8 06:21:21 np0005475493 nova_compute[262220]: 2025-10-08 10:21:21.968 2 DEBUG oslo_concurrency.lockutils [req-2754b5d5-13ee-405a-a32d-8e5f05b0f719 req-6ca85ba5-355a-4e33-b765-b94479442c16 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Lock "20ffb86b-b5ba-4818-82e4-14a755c48807-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  8 06:21:21 np0005475493 nova_compute[262220]: 2025-10-08 10:21:21.969 2 DEBUG oslo_concurrency.lockutils [req-2754b5d5-13ee-405a-a32d-8e5f05b0f719 req-6ca85ba5-355a-4e33-b765-b94479442c16 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] Lock "20ffb86b-b5ba-4818-82e4-14a755c48807-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  8 06:21:21 np0005475493 nova_compute[262220]: 2025-10-08 10:21:21.969 2 DEBUG nova.compute.manager [req-2754b5d5-13ee-405a-a32d-8e5f05b0f719 req-6ca85ba5-355a-4e33-b765-b94479442c16 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: 20ffb86b-b5ba-4818-82e4-14a755c48807] No waiting events found dispatching network-vif-plugged-754d5578-d995-4502-af66-b164dfdf1189 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  8 06:21:21 np0005475493 nova_compute[262220]: 2025-10-08 10:21:21.969 2 WARNING nova.compute.manager [req-2754b5d5-13ee-405a-a32d-8e5f05b0f719 req-6ca85ba5-355a-4e33-b765-b94479442c16 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: 20ffb86b-b5ba-4818-82e4-14a755c48807] Received unexpected event network-vif-plugged-754d5578-d995-4502-af66-b164dfdf1189 for instance with vm_state active and task_state deleting.#033[00m
Oct  8 06:21:22 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:21:22 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:21:22 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:21:22.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:21:22 np0005475493 nova_compute[262220]: 2025-10-08 10:21:22.397 2 DEBUG nova.network.neutron [-] [instance: 20ffb86b-b5ba-4818-82e4-14a755c48807] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  8 06:21:22 np0005475493 nova_compute[262220]: 2025-10-08 10:21:22.414 2 DEBUG nova.compute.manager [req-07b4d789-46eb-412f-b2de-6bfe7c2cf29c req-93df61ff-6998-4b81-a0ff-b19e38ed2d1a 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: 20ffb86b-b5ba-4818-82e4-14a755c48807] Received event network-vif-deleted-754d5578-d995-4502-af66-b164dfdf1189 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  8 06:21:22 np0005475493 nova_compute[262220]: 2025-10-08 10:21:22.414 2 INFO nova.compute.manager [req-07b4d789-46eb-412f-b2de-6bfe7c2cf29c req-93df61ff-6998-4b81-a0ff-b19e38ed2d1a 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: 20ffb86b-b5ba-4818-82e4-14a755c48807] Neutron deleted interface 754d5578-d995-4502-af66-b164dfdf1189; detaching it from the instance and deleting it from the info cache#033[00m
Oct  8 06:21:22 np0005475493 nova_compute[262220]: 2025-10-08 10:21:22.415 2 DEBUG nova.network.neutron [req-07b4d789-46eb-412f-b2de-6bfe7c2cf29c req-93df61ff-6998-4b81-a0ff-b19e38ed2d1a 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: 20ffb86b-b5ba-4818-82e4-14a755c48807] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  8 06:21:22 np0005475493 nova_compute[262220]: 2025-10-08 10:21:22.424 2 INFO nova.compute.manager [-] [instance: 20ffb86b-b5ba-4818-82e4-14a755c48807] Took 0.91 seconds to deallocate network for instance.#033[00m
Oct  8 06:21:22 np0005475493 nova_compute[262220]: 2025-10-08 10:21:22.436 2 DEBUG nova.compute.manager [req-07b4d789-46eb-412f-b2de-6bfe7c2cf29c req-93df61ff-6998-4b81-a0ff-b19e38ed2d1a 1c180164f02a49118dbba7cc39b8a4e8 b66e18396ea447c4a1bc7d937fc7b459 - - default default] [instance: 20ffb86b-b5ba-4818-82e4-14a755c48807] Detach interface failed, port_id=754d5578-d995-4502-af66-b164dfdf1189, reason: Instance 20ffb86b-b5ba-4818-82e4-14a755c48807 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Oct  8 06:21:22 np0005475493 nova_compute[262220]: 2025-10-08 10:21:22.507 2 DEBUG oslo_concurrency.lockutils [None req-4fca9653-4a5c-48cd-850c-bb69fac83642 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  8 06:21:22 np0005475493 nova_compute[262220]: 2025-10-08 10:21:22.508 2 DEBUG oslo_concurrency.lockutils [None req-4fca9653-4a5c-48cd-850c-bb69fac83642 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  8 06:21:22 np0005475493 nova_compute[262220]: 2025-10-08 10:21:22.573 2 DEBUG oslo_concurrency.processutils [None req-4fca9653-4a5c-48cd-850c-bb69fac83642 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  8 06:21:22 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1104: 353 pgs: 353 active+clean; 121 MiB data, 314 MiB used, 60 GiB / 60 GiB avail; 99 KiB/s rd, 108 KiB/s wr, 22 op/s
Oct  8 06:21:23 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct  8 06:21:23 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2926904480' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  8 06:21:23 np0005475493 nova_compute[262220]: 2025-10-08 10:21:23.074 2 DEBUG oslo_concurrency.processutils [None req-4fca9653-4a5c-48cd-850c-bb69fac83642 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.501s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  8 06:21:23 np0005475493 nova_compute[262220]: 2025-10-08 10:21:23.080 2 DEBUG nova.compute.provider_tree [None req-4fca9653-4a5c-48cd-850c-bb69fac83642 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Inventory has not changed in ProviderTree for provider: 62e4b021-d3ae-43f9-883d-805e2c7d21a2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  8 06:21:23 np0005475493 nova_compute[262220]: 2025-10-08 10:21:23.113 2 DEBUG nova.scheduler.client.report [None req-4fca9653-4a5c-48cd-850c-bb69fac83642 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Inventory has not changed for provider 62e4b021-d3ae-43f9-883d-805e2c7d21a2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  8 06:21:23 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:21:23 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:21:23 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:21:23.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:21:23 np0005475493 nova_compute[262220]: 2025-10-08 10:21:23.246 2 DEBUG oslo_concurrency.lockutils [None req-4fca9653-4a5c-48cd-850c-bb69fac83642 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.738s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  8 06:21:23 np0005475493 nova_compute[262220]: 2025-10-08 10:21:23.323 2 INFO nova.scheduler.client.report [None req-4fca9653-4a5c-48cd-850c-bb69fac83642 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Deleted allocations for instance 20ffb86b-b5ba-4818-82e4-14a755c48807#033[00m
Oct  8 06:21:23 np0005475493 nova_compute[262220]: 2025-10-08 10:21:23.430 2 DEBUG oslo_concurrency.lockutils [None req-4fca9653-4a5c-48cd-850c-bb69fac83642 d50b19166a7245e390a6e29682191263 0edb2c85b88b4b168a4b2e8c5ed4a05c - - default default] Lock "20ffb86b-b5ba-4818-82e4-14a755c48807" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.946s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  8 06:21:23 np0005475493 nova_compute[262220]: 2025-10-08 10:21:23.577 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:21:23 np0005475493 podman[284127]: 2025-10-08 10:21:23.947521546 +0000 UTC m=+0.103944928 container health_status 750e81e9592502f2fa35d047c8ae9bd75eb38ed415148db558454f5281397e7f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3)
Oct  8 06:21:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:21:23 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  8 06:21:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:21:23 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  8 06:21:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:21:23 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 06:21:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:21:24 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  8 06:21:24 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:21:24 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:21:24 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:21:24.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:21:24 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:21:24 np0005475493 ceph-mon[73572]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #66. Immutable memtables: 0.
Oct  8 06:21:24 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:21:24.351462) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  8 06:21:24 np0005475493 ceph-mon[73572]: rocksdb: [db/flush_job.cc:856] [default] [JOB 35] Flushing memtable with next log file: 66
Oct  8 06:21:24 np0005475493 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759918884351535, "job": 35, "event": "flush_started", "num_memtables": 1, "num_entries": 2126, "num_deletes": 251, "total_data_size": 4146520, "memory_usage": 4215872, "flush_reason": "Manual Compaction"}
Oct  8 06:21:24 np0005475493 ceph-mon[73572]: rocksdb: [db/flush_job.cc:885] [default] [JOB 35] Level-0 flush table #67: started
Oct  8 06:21:24 np0005475493 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759918884373212, "cf_name": "default", "job": 35, "event": "table_file_creation", "file_number": 67, "file_size": 4005822, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 29538, "largest_seqno": 31663, "table_properties": {"data_size": 3996283, "index_size": 5969, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2501, "raw_key_size": 19940, "raw_average_key_size": 20, "raw_value_size": 3977210, "raw_average_value_size": 4083, "num_data_blocks": 257, "num_entries": 974, "num_filter_entries": 974, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759918684, "oldest_key_time": 1759918684, "file_creation_time": 1759918884, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fe81d9b-468a-4413-adf1-4e4bd83383d4", "db_session_id": "KN4HYS7VUCE6V85JIQOU", "orig_file_number": 67, "seqno_to_time_mapping": "N/A"}}
Oct  8 06:21:24 np0005475493 ceph-mon[73572]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 35] Flush lasted 21804 microseconds, and 8534 cpu microseconds.
Oct  8 06:21:24 np0005475493 ceph-mon[73572]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  8 06:21:24 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:21:24.373268) [db/flush_job.cc:967] [default] [JOB 35] Level-0 flush table #67: 4005822 bytes OK
Oct  8 06:21:24 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:21:24.373295) [db/memtable_list.cc:519] [default] Level-0 commit table #67 started
Oct  8 06:21:24 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:21:24.374867) [db/memtable_list.cc:722] [default] Level-0 commit table #67: memtable #1 done
Oct  8 06:21:24 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:21:24.374882) EVENT_LOG_v1 {"time_micros": 1759918884374877, "job": 35, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  8 06:21:24 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:21:24.374901) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  8 06:21:24 np0005475493 ceph-mon[73572]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 35] Try to delete WAL files size 4137891, prev total WAL file size 4137891, number of live WAL files 2.
Oct  8 06:21:24 np0005475493 ceph-mon[73572]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000063.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  8 06:21:24 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:21:24.376086) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032353130' seq:72057594037927935, type:22 .. '7061786F730032373632' seq:0, type:0; will stop at (end)
Oct  8 06:21:24 np0005475493 ceph-mon[73572]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 36] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  8 06:21:24 np0005475493 ceph-mon[73572]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 35 Base level 0, inputs: [67(3911KB)], [65(11MB)]
Oct  8 06:21:24 np0005475493 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759918884376145, "job": 36, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [67], "files_L6": [65], "score": -1, "input_data_size": 16135283, "oldest_snapshot_seqno": -1}
Oct  8 06:21:24 np0005475493 ceph-mon[73572]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 36] Generated table #68: 6209 keys, 13992151 bytes, temperature: kUnknown
Oct  8 06:21:24 np0005475493 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759918884457766, "cf_name": "default", "job": 36, "event": "table_file_creation", "file_number": 68, "file_size": 13992151, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13951366, "index_size": 24163, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15557, "raw_key_size": 159114, "raw_average_key_size": 25, "raw_value_size": 13840335, "raw_average_value_size": 2229, "num_data_blocks": 969, "num_entries": 6209, "num_filter_entries": 6209, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759916581, "oldest_key_time": 0, "file_creation_time": 1759918884, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fe81d9b-468a-4413-adf1-4e4bd83383d4", "db_session_id": "KN4HYS7VUCE6V85JIQOU", "orig_file_number": 68, "seqno_to_time_mapping": "N/A"}}
Oct  8 06:21:24 np0005475493 ceph-mon[73572]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  8 06:21:24 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:21:24.458242) [db/compaction/compaction_job.cc:1663] [default] [JOB 36] Compacted 1@0 + 1@6 files to L6 => 13992151 bytes
Oct  8 06:21:24 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:21:24.459685) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 197.4 rd, 171.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.8, 11.6 +0.0 blob) out(13.3 +0.0 blob), read-write-amplify(7.5) write-amplify(3.5) OK, records in: 6730, records dropped: 521 output_compression: NoCompression
Oct  8 06:21:24 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:21:24.459726) EVENT_LOG_v1 {"time_micros": 1759918884459708, "job": 36, "event": "compaction_finished", "compaction_time_micros": 81738, "compaction_time_cpu_micros": 37283, "output_level": 6, "num_output_files": 1, "total_output_size": 13992151, "num_input_records": 6730, "num_output_records": 6209, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  8 06:21:24 np0005475493 ceph-mon[73572]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000067.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  8 06:21:24 np0005475493 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759918884460869, "job": 36, "event": "table_file_deletion", "file_number": 67}
Oct  8 06:21:24 np0005475493 ceph-mon[73572]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000065.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  8 06:21:24 np0005475493 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759918884463907, "job": 36, "event": "table_file_deletion", "file_number": 65}
Oct  8 06:21:24 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:21:24.375952) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  8 06:21:24 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:21:24.463940) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  8 06:21:24 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:21:24.463944) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  8 06:21:24 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:21:24.463961) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  8 06:21:24 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:21:24.463962) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  8 06:21:24 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:21:24.463964) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  8 06:21:24 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1105: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 119 KiB/s rd, 111 KiB/s wr, 51 op/s
Oct  8 06:21:25 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:21:25 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:21:25 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:21:25.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:21:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:21:25] "GET /metrics HTTP/1.1" 200 48475 "" "Prometheus/2.51.0"
Oct  8 06:21:25 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:21:25] "GET /metrics HTTP/1.1" 200 48475 "" "Prometheus/2.51.0"
Oct  8 06:21:25 np0005475493 nova_compute[262220]: 2025-10-08 10:21:25.836 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:21:26 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:21:26 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:21:26 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:21:26.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:21:26 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1106: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 15 KiB/s wr, 29 op/s
Oct  8 06:21:27 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:21:27 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:21:27 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:21:27.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:21:27 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:21:27.199Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:21:28 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:21:28 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:21:28 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:21:28.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:21:28 np0005475493 nova_compute[262220]: 2025-10-08 10:21:28.579 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:21:28 np0005475493 nova_compute[262220]: 2025-10-08 10:21:28.772 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:21:28 np0005475493 nova_compute[262220]: 2025-10-08 10:21:28.807 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:21:28 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:21:28.853Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:21:28 np0005475493 podman[284159]: 2025-10-08 10:21:28.916975794 +0000 UTC m=+0.080689056 container health_status 1ece418ccdf19a8019e8be8e665c938dc3c333817a211973536e2416f095c311 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=multipathd, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, container_name=multipathd, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct  8 06:21:28 np0005475493 podman[284160]: 2025-10-08 10:21:28.930528302 +0000 UTC m=+0.081072079 container health_status 96c17e36c3e57b043a764fc4d64e20610b8683e4881cc1857643eca7aeb37784 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct  8 06:21:28 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1107: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 15 KiB/s wr, 30 op/s
Oct  8 06:21:29 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:21:29 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  8 06:21:29 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:21:29 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  8 06:21:29 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:21:29 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 06:21:29 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:21:29 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  8 06:21:29 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:21:29 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:21:29 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:21:29.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:21:29 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:21:30 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:21:30 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:21:30 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:21:30.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:21:30 np0005475493 nova_compute[262220]: 2025-10-08 10:21:30.839 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:21:30 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1108: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 3.2 KiB/s wr, 29 op/s
Oct  8 06:21:31 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:21:31 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:21:31 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:21:31.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:21:32 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:21:32 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:21:32 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:21:32.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:21:32 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  8 06:21:32 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  8 06:21:32 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1109: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 3.2 KiB/s wr, 29 op/s
Oct  8 06:21:33 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:21:33 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:21:33 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:21:33.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:21:33 np0005475493 nova_compute[262220]: 2025-10-08 10:21:33.581 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:21:34 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:21:34 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  8 06:21:34 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:21:34 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  8 06:21:34 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:21:34 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 06:21:34 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:21:34 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  8 06:21:34 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:21:34 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:21:34 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:21:34.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:21:34 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:21:34 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1110: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 3.2 KiB/s wr, 29 op/s
Oct  8 06:21:35 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:21:35 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:21:35 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:21:35.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:21:35 np0005475493 nova_compute[262220]: 2025-10-08 10:21:35.724 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759918880.724088, 20ffb86b-b5ba-4818-82e4-14a755c48807 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  8 06:21:35 np0005475493 nova_compute[262220]: 2025-10-08 10:21:35.725 2 INFO nova.compute.manager [-] [instance: 20ffb86b-b5ba-4818-82e4-14a755c48807] VM Stopped (Lifecycle Event)#033[00m
Oct  8 06:21:35 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:21:35] "GET /metrics HTTP/1.1" 200 48451 "" "Prometheus/2.51.0"
Oct  8 06:21:35 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:21:35] "GET /metrics HTTP/1.1" 200 48451 "" "Prometheus/2.51.0"
Oct  8 06:21:35 np0005475493 nova_compute[262220]: 2025-10-08 10:21:35.746 2 DEBUG nova.compute.manager [None req-a5e69c99-7c2a-4b47-a998-55e8a3203fa1 - - - - - -] [instance: 20ffb86b-b5ba-4818-82e4-14a755c48807] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  8 06:21:35 np0005475493 nova_compute[262220]: 2025-10-08 10:21:35.840 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:21:36 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:21:36 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:21:36 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:21:36.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:21:36 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1111: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  8 06:21:37 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:21:37 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:21:37 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:21:37.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:21:37 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:21:37.200Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:21:38 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:21:38 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:21:38 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:21:38.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:21:38 np0005475493 nova_compute[262220]: 2025-10-08 10:21:38.582 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:21:38 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:21:38.853Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:21:38 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1112: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  8 06:21:39 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:21:38 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  8 06:21:39 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:21:38 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  8 06:21:39 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:21:38 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 06:21:39 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:21:39 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  8 06:21:39 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:21:39 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:21:39 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:21:39.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:21:39 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:21:40 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:21:40 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:21:40 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:21:40.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:21:40 np0005475493 nova_compute[262220]: 2025-10-08 10:21:40.842 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:21:40 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1113: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  8 06:21:41 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:21:41 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:21:41 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:21:41.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:21:42 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:21:42 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:21:42 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:21:42.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:21:42 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1114: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  8 06:21:43 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:21:43 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:21:43 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:21:43.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:21:43 np0005475493 nova_compute[262220]: 2025-10-08 10:21:43.584 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:21:43 np0005475493 podman[284239]: 2025-10-08 10:21:43.889389723 +0000 UTC m=+0.052680922 container health_status 2ac58bf9d2c7421f24478941b0f23af12cc30298ac84467db16bf52ecb1157c3 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, managed_by=edpm_ansible)
Oct  8 06:21:44 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:21:43 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  8 06:21:44 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:21:43 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  8 06:21:44 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:21:43 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 06:21:44 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:21:44 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  8 06:21:44 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:21:44 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:21:44 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:21:44.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:21:44 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:21:44 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1115: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  8 06:21:45 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:21:45 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:21:45 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:21:45.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:21:45 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:21:45] "GET /metrics HTTP/1.1" 200 48451 "" "Prometheus/2.51.0"
Oct  8 06:21:45 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:21:45] "GET /metrics HTTP/1.1" 200 48451 "" "Prometheus/2.51.0"
Oct  8 06:21:45 np0005475493 nova_compute[262220]: 2025-10-08 10:21:45.844 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:21:46 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:21:46 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:21:46 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:21:46.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:21:46 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1116: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  8 06:21:47 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:21:47 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:21:47 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:21:47.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:21:47 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:21:47.201Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct  8 06:21:47 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:21:47.201Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct  8 06:21:47 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:21:47.202Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:21:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] Optimize plan auto_2025-10-08_10:21:47
Oct  8 06:21:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  8 06:21:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] do_upmap
Oct  8 06:21:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] pools ['default.rgw.log', 'images', '.nfs', '.rgw.root', 'cephfs.cephfs.data', 'volumes', 'vms', 'backups', '.mgr', 'cephfs.cephfs.meta', 'default.rgw.meta', 'default.rgw.control']
Oct  8 06:21:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] prepared 0/10 upmap changes
Oct  8 06:21:47 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  8 06:21:47 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  8 06:21:47 np0005475493 nova_compute[262220]: 2025-10-08 10:21:47.886 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:21:47 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 06:21:47 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 06:21:48 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 06:21:48 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 06:21:48 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 06:21:48 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 06:21:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] _maybe_adjust
Oct  8 06:21:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:21:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  8 06:21:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:21:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  8 06:21:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:21:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 06:21:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:21:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 06:21:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:21:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Oct  8 06:21:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:21:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  8 06:21:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:21:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 06:21:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:21:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct  8 06:21:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:21:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  8 06:21:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:21:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 06:21:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:21:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  8 06:21:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:21:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  8 06:21:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  8 06:21:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  8 06:21:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  8 06:21:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  8 06:21:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  8 06:21:48 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:21:48 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:21:48 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:21:48.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:21:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  8 06:21:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  8 06:21:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  8 06:21:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  8 06:21:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  8 06:21:48 np0005475493 nova_compute[262220]: 2025-10-08 10:21:48.585 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:21:48 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:21:48.854Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct  8 06:21:48 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:21:48.855Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:21:48 np0005475493 nova_compute[262220]: 2025-10-08 10:21:48.887 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:21:48 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1117: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  8 06:21:49 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:21:48 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  8 06:21:49 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:21:48 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  8 06:21:49 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:21:48 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 06:21:49 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:21:49 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  8 06:21:49 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:21:49 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:21:49 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:21:49.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:21:49 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:21:50 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:21:50 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:21:50 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:21:50.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:21:50 np0005475493 nova_compute[262220]: 2025-10-08 10:21:50.845 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:21:50 np0005475493 nova_compute[262220]: 2025-10-08 10:21:50.886 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:21:50 np0005475493 nova_compute[262220]: 2025-10-08 10:21:50.886 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  8 06:21:50 np0005475493 nova_compute[262220]: 2025-10-08 10:21:50.886 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  8 06:21:50 np0005475493 nova_compute[262220]: 2025-10-08 10:21:50.900 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  8 06:21:50 np0005475493 nova_compute[262220]: 2025-10-08 10:21:50.901 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:21:50 np0005475493 nova_compute[262220]: 2025-10-08 10:21:50.901 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:21:50 np0005475493 nova_compute[262220]: 2025-10-08 10:21:50.925 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  8 06:21:50 np0005475493 nova_compute[262220]: 2025-10-08 10:21:50.926 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  8 06:21:50 np0005475493 nova_compute[262220]: 2025-10-08 10:21:50.926 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  8 06:21:50 np0005475493 nova_compute[262220]: 2025-10-08 10:21:50.926 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  8 06:21:50 np0005475493 nova_compute[262220]: 2025-10-08 10:21:50.927 2 DEBUG oslo_concurrency.processutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  8 06:21:50 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1118: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  8 06:21:51 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:21:51 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:21:51 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:21:51.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:21:51 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct  8 06:21:51 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1213713613' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  8 06:21:51 np0005475493 nova_compute[262220]: 2025-10-08 10:21:51.441 2 DEBUG oslo_concurrency.processutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.514s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  8 06:21:51 np0005475493 nova_compute[262220]: 2025-10-08 10:21:51.626 2 WARNING nova.virt.libvirt.driver [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  8 06:21:51 np0005475493 nova_compute[262220]: 2025-10-08 10:21:51.627 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4562MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  8 06:21:51 np0005475493 nova_compute[262220]: 2025-10-08 10:21:51.628 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  8 06:21:51 np0005475493 nova_compute[262220]: 2025-10-08 10:21:51.628 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  8 06:21:51 np0005475493 nova_compute[262220]: 2025-10-08 10:21:51.684 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  8 06:21:51 np0005475493 nova_compute[262220]: 2025-10-08 10:21:51.684 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  8 06:21:51 np0005475493 nova_compute[262220]: 2025-10-08 10:21:51.704 2 DEBUG oslo_concurrency.processutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  8 06:21:52 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct  8 06:21:52 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1981282094' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  8 06:21:52 np0005475493 nova_compute[262220]: 2025-10-08 10:21:52.175 2 DEBUG oslo_concurrency.processutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  8 06:21:52 np0005475493 nova_compute[262220]: 2025-10-08 10:21:52.182 2 DEBUG nova.compute.provider_tree [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Inventory has not changed in ProviderTree for provider: 62e4b021-d3ae-43f9-883d-805e2c7d21a2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  8 06:21:52 np0005475493 nova_compute[262220]: 2025-10-08 10:21:52.219 2 DEBUG nova.scheduler.client.report [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Inventory has not changed for provider 62e4b021-d3ae-43f9-883d-805e2c7d21a2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  8 06:21:52 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:21:52 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:21:52 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:21:52.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:21:52 np0005475493 nova_compute[262220]: 2025-10-08 10:21:52.257 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  8 06:21:52 np0005475493 nova_compute[262220]: 2025-10-08 10:21:52.257 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.629s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  8 06:21:52 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1119: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  8 06:21:53 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:21:53 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:21:53 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:21:53.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:21:53 np0005475493 nova_compute[262220]: 2025-10-08 10:21:53.253 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:21:53 np0005475493 nova_compute[262220]: 2025-10-08 10:21:53.254 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:21:53 np0005475493 nova_compute[262220]: 2025-10-08 10:21:53.587 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:21:53 np0005475493 nova_compute[262220]: 2025-10-08 10:21:53.886 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:21:53 np0005475493 nova_compute[262220]: 2025-10-08 10:21:53.886 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  8 06:21:54 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:21:53 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  8 06:21:54 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:21:53 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  8 06:21:54 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:21:53 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 06:21:54 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:21:54 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  8 06:21:54 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:21:54 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:21:54 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:21:54.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:21:54 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:21:54 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1120: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  8 06:21:54 np0005475493 podman[284342]: 2025-10-08 10:21:54.974164289 +0000 UTC m=+0.129308836 container health_status 750e81e9592502f2fa35d047c8ae9bd75eb38ed415148db558454f5281397e7f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  8 06:21:55 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:21:55 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:21:55 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:21:55.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:21:55 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:21:55] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Oct  8 06:21:55 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:21:55] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Oct  8 06:21:55 np0005475493 nova_compute[262220]: 2025-10-08 10:21:55.847 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:21:56 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:21:56 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:21:56 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:21:56.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:21:56 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1121: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  8 06:21:57 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:21:57 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:21:57 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:21:57.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:21:57 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:21:57.202Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:21:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:21:57.420 163175 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  8 06:21:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:21:57.421 163175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  8 06:21:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:21:57.421 163175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  8 06:21:57 np0005475493 nova_compute[262220]: 2025-10-08 10:21:57.887 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:21:58 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:21:58 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 06:21:58 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:21:58.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 06:21:58 np0005475493 nova_compute[262220]: 2025-10-08 10:21:58.589 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:21:58 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:21:58.856Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:21:58 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1122: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  8 06:21:59 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:21:59 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  8 06:21:59 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:21:59 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  8 06:21:59 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:21:59 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 06:21:59 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:21:59 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  8 06:21:59 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:21:59 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:21:59 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:21:59.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:21:59 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:21:59 np0005475493 ovn_controller[153187]: 2025-10-08T10:21:59Z|00082|memory_trim|INFO|Detected inactivity (last active 30005 ms ago): trimming memory
Oct  8 06:21:59 np0005475493 podman[284374]: 2025-10-08 10:21:59.897923692 +0000 UTC m=+0.055281736 container health_status 96c17e36c3e57b043a764fc4d64e20610b8683e4881cc1857643eca7aeb37784 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_metadata_agent)
Oct  8 06:21:59 np0005475493 podman[284373]: 2025-10-08 10:21:59.915891722 +0000 UTC m=+0.077009787 container health_status 1ece418ccdf19a8019e8be8e665c938dc3c333817a211973536e2416f095c311 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible)
Oct  8 06:22:00 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:22:00 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:22:00 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:22:00.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:22:00 np0005475493 nova_compute[262220]: 2025-10-08 10:22:00.849 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:22:00 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1123: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  8 06:22:01 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:22:01 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 06:22:01 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:22:01.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 06:22:02 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:22:02 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:22:02 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:22:02.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:22:02 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  8 06:22:02 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  8 06:22:02 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1124: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  8 06:22:03 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:22:03 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:22:03 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:22:03.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:22:03 np0005475493 nova_compute[262220]: 2025-10-08 10:22:03.591 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:22:04 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:22:03 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  8 06:22:04 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:22:03 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  8 06:22:04 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:22:03 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 06:22:04 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:22:04 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  8 06:22:04 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:22:04 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:22:04 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:22:04.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:22:04 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:22:04 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1125: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  8 06:22:05 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:22:05 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:22:05 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:22:05.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:22:05 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:22:05] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Oct  8 06:22:05 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:22:05] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Oct  8 06:22:05 np0005475493 nova_compute[262220]: 2025-10-08 10:22:05.851 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:22:06 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:22:06 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:22:06 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:22:06.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:22:06 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1126: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  8 06:22:07 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:22:07.203Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:22:07 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:22:07 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:22:07 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:22:07.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:22:08 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:22:08 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 06:22:08 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:22:08.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 06:22:08 np0005475493 nova_compute[262220]: 2025-10-08 10:22:08.592 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:22:08 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:22:08.857Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:22:08 np0005475493 podman[284544]: 2025-10-08 10:22:08.896237155 +0000 UTC m=+0.085581725 container exec 01c666addd8584f02eb7d29040d97e45d8cb7f834fd134029c1f7cca550aeb9d (image=quay.io/ceph/ceph:v19, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Oct  8 06:22:08 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1127: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  8 06:22:09 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:22:08 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  8 06:22:09 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:22:08 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  8 06:22:09 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:22:08 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 06:22:09 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:22:09 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  8 06:22:09 np0005475493 podman[284544]: 2025-10-08 10:22:09.023521875 +0000 UTC m=+0.212866475 container exec_died 01c666addd8584f02eb7d29040d97e45d8cb7f834fd134029c1f7cca550aeb9d (image=quay.io/ceph/ceph:v19, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct  8 06:22:09 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:22:09 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:22:09 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:22:09.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:22:09 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:22:09 np0005475493 podman[284682]: 2025-10-08 10:22:09.883007638 +0000 UTC m=+0.183320260 container exec 4eb6b712d09e2820826e6f961f0aba1bff09f13eee1664dbb03edca7615a708b (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  8 06:22:10 np0005475493 podman[284682]: 2025-10-08 10:22:10.066889406 +0000 UTC m=+0.367201918 container exec_died 4eb6b712d09e2820826e6f961f0aba1bff09f13eee1664dbb03edca7615a708b (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  8 06:22:10 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:22:10 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:22:10 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:22:10.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:22:10 np0005475493 podman[284781]: 2025-10-08 10:22:10.660621698 +0000 UTC m=+0.120803702 container exec 90486abb955ec1d9472e9211269572dd99696faaed865d52f07cc20a187b4c4b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Oct  8 06:22:10 np0005475493 podman[284801]: 2025-10-08 10:22:10.748248408 +0000 UTC m=+0.060477684 container exec_died 90486abb955ec1d9472e9211269572dd99696faaed865d52f07cc20a187b4c4b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Oct  8 06:22:10 np0005475493 podman[284781]: 2025-10-08 10:22:10.775609631 +0000 UTC m=+0.235791645 container exec_died 90486abb955ec1d9472e9211269572dd99696faaed865d52f07cc20a187b4c4b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  8 06:22:10 np0005475493 nova_compute[262220]: 2025-10-08 10:22:10.852 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:22:10 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1128: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  8 06:22:11 np0005475493 podman[284845]: 2025-10-08 10:22:11.048022008 +0000 UTC m=+0.059707590 container exec 1c113ffcb0d41c6337c256a4892f13e82bdea6ca8062b10324516aa631301ea5 (image=quay.io/ceph/haproxy:2.3, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-haproxy-nfs-cephfs-compute-0-cwhopp)
Oct  8 06:22:11 np0005475493 podman[284845]: 2025-10-08 10:22:11.060501161 +0000 UTC m=+0.072186733 container exec_died 1c113ffcb0d41c6337c256a4892f13e82bdea6ca8062b10324516aa631301ea5 (image=quay.io/ceph/haproxy:2.3, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-haproxy-nfs-cephfs-compute-0-cwhopp)
Oct  8 06:22:11 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:22:11 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:22:11 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:22:11.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:22:11 np0005475493 podman[284915]: 2025-10-08 10:22:11.296287514 +0000 UTC m=+0.056762503 container exec 5814788fb4b63196d097e0c60082efdca22825a3ba337ddec5c99170a1bfd89d (image=quay.io/ceph/keepalived:2.2.4, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-keepalived-nfs-cephfs-compute-0-ekerbw, io.k8s.display-name=Keepalived on RHEL 9, vcs-type=git, name=keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, description=keepalived for Ceph, io.openshift.tags=Ceph keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vendor=Red Hat, Inc., com.redhat.component=keepalived-container, version=2.2.4, io.buildah.version=1.28.2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, distribution-scope=public, summary=Provides keepalived on RHEL 9 for Ceph., release=1793, architecture=x86_64, build-date=2023-02-22T09:23:20, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=)
Oct  8 06:22:11 np0005475493 podman[284915]: 2025-10-08 10:22:11.308410486 +0000 UTC m=+0.068885445 container exec_died 5814788fb4b63196d097e0c60082efdca22825a3ba337ddec5c99170a1bfd89d (image=quay.io/ceph/keepalived:2.2.4, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-keepalived-nfs-cephfs-compute-0-ekerbw, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=keepalived for Ceph, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.component=keepalived-container, io.k8s.display-name=Keepalived on RHEL 9, version=2.2.4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, release=1793, distribution-scope=public, build-date=2023-02-22T09:23:20, summary=Provides keepalived on RHEL 9 for Ceph., vcs-type=git, io.buildah.version=1.28.2, name=keepalived, architecture=x86_64, io.openshift.tags=Ceph keepalived, io.openshift.expose-services=, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Oct  8 06:22:11 np0005475493 podman[284982]: 2025-10-08 10:22:11.55565834 +0000 UTC m=+0.071397947 container exec feb968c21a5fda3cd2e7b86379d5576dbe3dadfd4cb62b92318e18ddbaaafb75 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  8 06:22:11 np0005475493 podman[284982]: 2025-10-08 10:22:11.600457847 +0000 UTC m=+0.116197414 container exec_died feb968c21a5fda3cd2e7b86379d5576dbe3dadfd4cb62b92318e18ddbaaafb75 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  8 06:22:11 np0005475493 podman[285059]: 2025-10-08 10:22:11.851537594 +0000 UTC m=+0.056888918 container exec 73efcf403aa6fcb4b04e1ef714b7bf4169b576e4a4c1b34cdc0b458f62db65fd (image=quay.io/ceph/grafana:10.4.0, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Oct  8 06:22:12 np0005475493 podman[285059]: 2025-10-08 10:22:12.090921703 +0000 UTC m=+0.296273017 container exec_died 73efcf403aa6fcb4b04e1ef714b7bf4169b576e4a4c1b34cdc0b458f62db65fd (image=quay.io/ceph/grafana:10.4.0, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Oct  8 06:22:12 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:22:12 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:22:12 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:22:12.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:22:12 np0005475493 podman[285171]: 2025-10-08 10:22:12.620786613 +0000 UTC m=+0.075179238 container exec 50d7285a7766fc915688c3cc737e186f9ad2230d7b0b7e759880375ad996bb6c (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  8 06:22:12 np0005475493 podman[285171]: 2025-10-08 10:22:12.67734604 +0000 UTC m=+0.131738655 container exec_died 50d7285a7766fc915688c3cc737e186f9ad2230d7b0b7e759880375ad996bb6c (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-787292cc-8154-50c4-9e00-e9be3e817149-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  8 06:22:12 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  8 06:22:12 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:22:12 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  8 06:22:12 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:22:12 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1129: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  8 06:22:13 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:22:13 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:22:13 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:22:13.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:22:13 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  8 06:22:13 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  8 06:22:13 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct  8 06:22:13 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  8 06:22:13 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1130: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Oct  8 06:22:13 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct  8 06:22:13 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:22:13 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct  8 06:22:13 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:22:13 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct  8 06:22:13 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  8 06:22:13 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct  8 06:22:13 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  8 06:22:13 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  8 06:22:13 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  8 06:22:13 np0005475493 nova_compute[262220]: 2025-10-08 10:22:13.595 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:22:13 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:22:13 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:22:13 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  8 06:22:13 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:22:13 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:22:13 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  8 06:22:14 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:22:13 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  8 06:22:14 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:22:13 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  8 06:22:14 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:22:13 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 06:22:14 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:22:14 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  8 06:22:14 np0005475493 podman[285388]: 2025-10-08 10:22:14.082676919 +0000 UTC m=+0.044578520 container create c82757e803798211c98c21719bd2d937575665442bfd5837668cde015253c2bc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_wilbur, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  8 06:22:14 np0005475493 systemd[1]: Started libpod-conmon-c82757e803798211c98c21719bd2d937575665442bfd5837668cde015253c2bc.scope.
Oct  8 06:22:14 np0005475493 systemd[1]: Started libcrun container.
Oct  8 06:22:14 np0005475493 podman[285388]: 2025-10-08 10:22:14.062178577 +0000 UTC m=+0.024080278 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 06:22:14 np0005475493 podman[285388]: 2025-10-08 10:22:14.16572296 +0000 UTC m=+0.127624591 container init c82757e803798211c98c21719bd2d937575665442bfd5837668cde015253c2bc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_wilbur, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid)
Oct  8 06:22:14 np0005475493 podman[285388]: 2025-10-08 10:22:14.172874272 +0000 UTC m=+0.134775873 container start c82757e803798211c98c21719bd2d937575665442bfd5837668cde015253c2bc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_wilbur, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Oct  8 06:22:14 np0005475493 podman[285388]: 2025-10-08 10:22:14.176128657 +0000 UTC m=+0.138030308 container attach c82757e803798211c98c21719bd2d937575665442bfd5837668cde015253c2bc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_wilbur, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  8 06:22:14 np0005475493 condescending_wilbur[285405]: 167 167
Oct  8 06:22:14 np0005475493 systemd[1]: libpod-c82757e803798211c98c21719bd2d937575665442bfd5837668cde015253c2bc.scope: Deactivated successfully.
Oct  8 06:22:14 np0005475493 podman[285388]: 2025-10-08 10:22:14.179174606 +0000 UTC m=+0.141076227 container died c82757e803798211c98c21719bd2d937575665442bfd5837668cde015253c2bc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_wilbur, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct  8 06:22:14 np0005475493 systemd[1]: var-lib-containers-storage-overlay-79ebbc1869eb2ac9bde5ee4bf799a0a9762b51a18f26b4932d4622cee9c70d41-merged.mount: Deactivated successfully.
Oct  8 06:22:14 np0005475493 podman[285388]: 2025-10-08 10:22:14.227446854 +0000 UTC m=+0.189348475 container remove c82757e803798211c98c21719bd2d937575665442bfd5837668cde015253c2bc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_wilbur, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True)
Oct  8 06:22:14 np0005475493 systemd[1]: libpod-conmon-c82757e803798211c98c21719bd2d937575665442bfd5837668cde015253c2bc.scope: Deactivated successfully.
Oct  8 06:22:14 np0005475493 podman[285404]: 2025-10-08 10:22:14.23939193 +0000 UTC m=+0.100706913 container health_status 2ac58bf9d2c7421f24478941b0f23af12cc30298ac84467db16bf52ecb1157c3 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Oct  8 06:22:14 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:22:14 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 06:22:14 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:22:14.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 06:22:14 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:22:14 np0005475493 podman[285447]: 2025-10-08 10:22:14.432755714 +0000 UTC m=+0.054312326 container create 721d92f262b1092f9e309c1463a731c3b38107a2f490871c58e54c7e451f6d4c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_hypatia, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Oct  8 06:22:14 np0005475493 systemd[1]: Started libpod-conmon-721d92f262b1092f9e309c1463a731c3b38107a2f490871c58e54c7e451f6d4c.scope.
Oct  8 06:22:14 np0005475493 podman[285447]: 2025-10-08 10:22:14.411155476 +0000 UTC m=+0.032712178 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 06:22:14 np0005475493 systemd[1]: Started libcrun container.
Oct  8 06:22:14 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/615359048ccfdd7ce81456af79e3f618ae9fbfbeca5dbbbbf3d96e99805b6c0b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  8 06:22:14 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/615359048ccfdd7ce81456af79e3f618ae9fbfbeca5dbbbbf3d96e99805b6c0b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 06:22:14 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/615359048ccfdd7ce81456af79e3f618ae9fbfbeca5dbbbbf3d96e99805b6c0b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 06:22:14 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/615359048ccfdd7ce81456af79e3f618ae9fbfbeca5dbbbbf3d96e99805b6c0b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  8 06:22:14 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/615359048ccfdd7ce81456af79e3f618ae9fbfbeca5dbbbbf3d96e99805b6c0b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  8 06:22:14 np0005475493 podman[285447]: 2025-10-08 10:22:14.55680698 +0000 UTC m=+0.178363652 container init 721d92f262b1092f9e309c1463a731c3b38107a2f490871c58e54c7e451f6d4c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_hypatia, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Oct  8 06:22:14 np0005475493 podman[285447]: 2025-10-08 10:22:14.566402129 +0000 UTC m=+0.187958761 container start 721d92f262b1092f9e309c1463a731c3b38107a2f490871c58e54c7e451f6d4c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_hypatia, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  8 06:22:14 np0005475493 podman[285447]: 2025-10-08 10:22:14.5704641 +0000 UTC m=+0.192020722 container attach 721d92f262b1092f9e309c1463a731c3b38107a2f490871c58e54c7e451f6d4c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_hypatia, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid)
Oct  8 06:22:14 np0005475493 quizzical_hypatia[285463]: --> passed data devices: 0 physical, 1 LVM
Oct  8 06:22:14 np0005475493 quizzical_hypatia[285463]: --> All data devices are unavailable
Oct  8 06:22:14 np0005475493 systemd[1]: libpod-721d92f262b1092f9e309c1463a731c3b38107a2f490871c58e54c7e451f6d4c.scope: Deactivated successfully.
Oct  8 06:22:14 np0005475493 podman[285447]: 2025-10-08 10:22:14.973300628 +0000 UTC m=+0.594857260 container died 721d92f262b1092f9e309c1463a731c3b38107a2f490871c58e54c7e451f6d4c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_hypatia, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  8 06:22:14 np0005475493 systemd[1]: var-lib-containers-storage-overlay-615359048ccfdd7ce81456af79e3f618ae9fbfbeca5dbbbbf3d96e99805b6c0b-merged.mount: Deactivated successfully.
Oct  8 06:22:15 np0005475493 podman[285447]: 2025-10-08 10:22:15.012515255 +0000 UTC m=+0.634071877 container remove 721d92f262b1092f9e309c1463a731c3b38107a2f490871c58e54c7e451f6d4c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_hypatia, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True)
Oct  8 06:22:15 np0005475493 systemd[1]: libpod-conmon-721d92f262b1092f9e309c1463a731c3b38107a2f490871c58e54c7e451f6d4c.scope: Deactivated successfully.
Oct  8 06:22:15 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:22:15 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:22:15 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:22:15.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:22:15 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1131: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Oct  8 06:22:15 np0005475493 podman[285583]: 2025-10-08 10:22:15.732817704 +0000 UTC m=+0.042797143 container create a285891f0ed4efc999687f655ec86f539953f42f7e586f1f1310e7054505670d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_maxwell, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Oct  8 06:22:15 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:22:15] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Oct  8 06:22:15 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:22:15] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Oct  8 06:22:15 np0005475493 systemd[1]: Started libpod-conmon-a285891f0ed4efc999687f655ec86f539953f42f7e586f1f1310e7054505670d.scope.
Oct  8 06:22:15 np0005475493 systemd[1]: Started libcrun container.
Oct  8 06:22:15 np0005475493 podman[285583]: 2025-10-08 10:22:15.805399398 +0000 UTC m=+0.115378927 container init a285891f0ed4efc999687f655ec86f539953f42f7e586f1f1310e7054505670d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_maxwell, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Oct  8 06:22:15 np0005475493 podman[285583]: 2025-10-08 10:22:15.71814231 +0000 UTC m=+0.028121769 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 06:22:15 np0005475493 podman[285583]: 2025-10-08 10:22:15.822885531 +0000 UTC m=+0.132864970 container start a285891f0ed4efc999687f655ec86f539953f42f7e586f1f1310e7054505670d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_maxwell, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  8 06:22:15 np0005475493 podman[285583]: 2025-10-08 10:22:15.826375214 +0000 UTC m=+0.136354653 container attach a285891f0ed4efc999687f655ec86f539953f42f7e586f1f1310e7054505670d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_maxwell, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  8 06:22:15 np0005475493 optimistic_maxwell[285599]: 167 167
Oct  8 06:22:15 np0005475493 systemd[1]: libpod-a285891f0ed4efc999687f655ec86f539953f42f7e586f1f1310e7054505670d.scope: Deactivated successfully.
Oct  8 06:22:15 np0005475493 podman[285583]: 2025-10-08 10:22:15.830701345 +0000 UTC m=+0.140680774 container died a285891f0ed4efc999687f655ec86f539953f42f7e586f1f1310e7054505670d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_maxwell, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Oct  8 06:22:15 np0005475493 nova_compute[262220]: 2025-10-08 10:22:15.855 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:22:15 np0005475493 systemd[1]: var-lib-containers-storage-overlay-a41d9a9408d9c17a89805be3cf7463349bc94fb893768610c4016c2d600d56f5-merged.mount: Deactivated successfully.
Oct  8 06:22:15 np0005475493 podman[285583]: 2025-10-08 10:22:15.871494421 +0000 UTC m=+0.181473860 container remove a285891f0ed4efc999687f655ec86f539953f42f7e586f1f1310e7054505670d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_maxwell, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  8 06:22:15 np0005475493 systemd[1]: libpod-conmon-a285891f0ed4efc999687f655ec86f539953f42f7e586f1f1310e7054505670d.scope: Deactivated successfully.
Oct  8 06:22:16 np0005475493 podman[285623]: 2025-10-08 10:22:16.076659147 +0000 UTC m=+0.046018078 container create dc25fa6968b5ea66e7ee28f243368114eeb8e2e641bbfbd86ec17b34b8abcf4a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_engelbart, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  8 06:22:16 np0005475493 systemd[1]: Started libpod-conmon-dc25fa6968b5ea66e7ee28f243368114eeb8e2e641bbfbd86ec17b34b8abcf4a.scope.
Oct  8 06:22:16 np0005475493 systemd[1]: Started libcrun container.
Oct  8 06:22:16 np0005475493 podman[285623]: 2025-10-08 10:22:16.057373404 +0000 UTC m=+0.026732355 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 06:22:16 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f23a78ac9ec7bf53311f9974dbcfe3d15f0c0879b4a5dd65e5bf98e321555fe/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  8 06:22:16 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f23a78ac9ec7bf53311f9974dbcfe3d15f0c0879b4a5dd65e5bf98e321555fe/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 06:22:16 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f23a78ac9ec7bf53311f9974dbcfe3d15f0c0879b4a5dd65e5bf98e321555fe/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 06:22:16 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f23a78ac9ec7bf53311f9974dbcfe3d15f0c0879b4a5dd65e5bf98e321555fe/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  8 06:22:16 np0005475493 podman[285623]: 2025-10-08 10:22:16.169089521 +0000 UTC m=+0.138448492 container init dc25fa6968b5ea66e7ee28f243368114eeb8e2e641bbfbd86ec17b34b8abcf4a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_engelbart, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Oct  8 06:22:16 np0005475493 podman[285623]: 2025-10-08 10:22:16.179331422 +0000 UTC m=+0.148690373 container start dc25fa6968b5ea66e7ee28f243368114eeb8e2e641bbfbd86ec17b34b8abcf4a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_engelbart, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Oct  8 06:22:16 np0005475493 podman[285623]: 2025-10-08 10:22:16.182583647 +0000 UTC m=+0.151942578 container attach dc25fa6968b5ea66e7ee28f243368114eeb8e2e641bbfbd86ec17b34b8abcf4a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_engelbart, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  8 06:22:16 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:22:16 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:22:16 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:22:16.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:22:16 np0005475493 friendly_engelbart[285639]: {
Oct  8 06:22:16 np0005475493 friendly_engelbart[285639]:    "1": [
Oct  8 06:22:16 np0005475493 friendly_engelbart[285639]:        {
Oct  8 06:22:16 np0005475493 friendly_engelbart[285639]:            "devices": [
Oct  8 06:22:16 np0005475493 friendly_engelbart[285639]:                "/dev/loop3"
Oct  8 06:22:16 np0005475493 friendly_engelbart[285639]:            ],
Oct  8 06:22:16 np0005475493 friendly_engelbart[285639]:            "lv_name": "ceph_lv0",
Oct  8 06:22:16 np0005475493 friendly_engelbart[285639]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  8 06:22:16 np0005475493 friendly_engelbart[285639]:            "lv_size": "21470642176",
Oct  8 06:22:16 np0005475493 friendly_engelbart[285639]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=787292cc-8154-50c4-9e00-e9be3e817149,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=85fe3e7b-5e0f-4a19-934c-310215b2e933,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct  8 06:22:16 np0005475493 friendly_engelbart[285639]:            "lv_uuid": "znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj",
Oct  8 06:22:16 np0005475493 friendly_engelbart[285639]:            "name": "ceph_lv0",
Oct  8 06:22:16 np0005475493 friendly_engelbart[285639]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  8 06:22:16 np0005475493 friendly_engelbart[285639]:            "tags": {
Oct  8 06:22:16 np0005475493 friendly_engelbart[285639]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  8 06:22:16 np0005475493 friendly_engelbart[285639]:                "ceph.block_uuid": "znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj",
Oct  8 06:22:16 np0005475493 friendly_engelbart[285639]:                "ceph.cephx_lockbox_secret": "",
Oct  8 06:22:16 np0005475493 friendly_engelbart[285639]:                "ceph.cluster_fsid": "787292cc-8154-50c4-9e00-e9be3e817149",
Oct  8 06:22:16 np0005475493 friendly_engelbart[285639]:                "ceph.cluster_name": "ceph",
Oct  8 06:22:16 np0005475493 friendly_engelbart[285639]:                "ceph.crush_device_class": "",
Oct  8 06:22:16 np0005475493 friendly_engelbart[285639]:                "ceph.encrypted": "0",
Oct  8 06:22:16 np0005475493 friendly_engelbart[285639]:                "ceph.osd_fsid": "85fe3e7b-5e0f-4a19-934c-310215b2e933",
Oct  8 06:22:16 np0005475493 friendly_engelbart[285639]:                "ceph.osd_id": "1",
Oct  8 06:22:16 np0005475493 friendly_engelbart[285639]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  8 06:22:16 np0005475493 friendly_engelbart[285639]:                "ceph.type": "block",
Oct  8 06:22:16 np0005475493 friendly_engelbart[285639]:                "ceph.vdo": "0",
Oct  8 06:22:16 np0005475493 friendly_engelbart[285639]:                "ceph.with_tpm": "0"
Oct  8 06:22:16 np0005475493 friendly_engelbart[285639]:            },
Oct  8 06:22:16 np0005475493 friendly_engelbart[285639]:            "type": "block",
Oct  8 06:22:16 np0005475493 friendly_engelbart[285639]:            "vg_name": "ceph_vg0"
Oct  8 06:22:16 np0005475493 friendly_engelbart[285639]:        }
Oct  8 06:22:16 np0005475493 friendly_engelbart[285639]:    ]
Oct  8 06:22:16 np0005475493 friendly_engelbart[285639]: }
Oct  8 06:22:16 np0005475493 systemd[1]: libpod-dc25fa6968b5ea66e7ee28f243368114eeb8e2e641bbfbd86ec17b34b8abcf4a.scope: Deactivated successfully.
Oct  8 06:22:16 np0005475493 podman[285623]: 2025-10-08 10:22:16.493820817 +0000 UTC m=+0.463179748 container died dc25fa6968b5ea66e7ee28f243368114eeb8e2e641bbfbd86ec17b34b8abcf4a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_engelbart, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Oct  8 06:22:16 np0005475493 systemd[1]: var-lib-containers-storage-overlay-0f23a78ac9ec7bf53311f9974dbcfe3d15f0c0879b4a5dd65e5bf98e321555fe-merged.mount: Deactivated successfully.
Oct  8 06:22:16 np0005475493 podman[285623]: 2025-10-08 10:22:16.533159037 +0000 UTC m=+0.502517968 container remove dc25fa6968b5ea66e7ee28f243368114eeb8e2e641bbfbd86ec17b34b8abcf4a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_engelbart, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct  8 06:22:16 np0005475493 systemd[1]: libpod-conmon-dc25fa6968b5ea66e7ee28f243368114eeb8e2e641bbfbd86ec17b34b8abcf4a.scope: Deactivated successfully.
Oct  8 06:22:17 np0005475493 podman[285751]: 2025-10-08 10:22:17.131816148 +0000 UTC m=+0.045286163 container create 95b9b0940bc116f2e3949f4642de8c870a7fb1a3e6f63902d450c72dcc70a93a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_kare, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  8 06:22:17 np0005475493 systemd[1]: Started libpod-conmon-95b9b0940bc116f2e3949f4642de8c870a7fb1a3e6f63902d450c72dcc70a93a.scope.
Oct  8 06:22:17 np0005475493 systemd[1]: Started libcrun container.
Oct  8 06:22:17 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:22:17.203Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct  8 06:22:17 np0005475493 podman[285751]: 2025-10-08 10:22:17.112474734 +0000 UTC m=+0.025944769 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 06:22:17 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:22:17.203Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct  8 06:22:17 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:22:17.207Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct  8 06:22:17 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:22:17 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:22:17 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:22:17.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:22:17 np0005475493 podman[285751]: 2025-10-08 10:22:17.228940084 +0000 UTC m=+0.142410189 container init 95b9b0940bc116f2e3949f4642de8c870a7fb1a3e6f63902d450c72dcc70a93a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_kare, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  8 06:22:17 np0005475493 podman[285751]: 2025-10-08 10:22:17.23655047 +0000 UTC m=+0.150020475 container start 95b9b0940bc116f2e3949f4642de8c870a7fb1a3e6f63902d450c72dcc70a93a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_kare, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  8 06:22:17 np0005475493 podman[285751]: 2025-10-08 10:22:17.240383744 +0000 UTC m=+0.153853849 container attach 95b9b0940bc116f2e3949f4642de8c870a7fb1a3e6f63902d450c72dcc70a93a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_kare, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid)
Oct  8 06:22:17 np0005475493 zen_kare[285767]: 167 167
Oct  8 06:22:17 np0005475493 systemd[1]: libpod-95b9b0940bc116f2e3949f4642de8c870a7fb1a3e6f63902d450c72dcc70a93a.scope: Deactivated successfully.
Oct  8 06:22:17 np0005475493 podman[285751]: 2025-10-08 10:22:17.243802695 +0000 UTC m=+0.157272740 container died 95b9b0940bc116f2e3949f4642de8c870a7fb1a3e6f63902d450c72dcc70a93a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_kare, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Oct  8 06:22:17 np0005475493 systemd[1]: var-lib-containers-storage-overlay-3af8dc3807c1fe1dcb5b9069f803bfc480d8c3a11bcda3e63d4174f57cd1a1bb-merged.mount: Deactivated successfully.
Oct  8 06:22:17 np0005475493 podman[285751]: 2025-10-08 10:22:17.280262512 +0000 UTC m=+0.193732517 container remove 95b9b0940bc116f2e3949f4642de8c870a7fb1a3e6f63902d450c72dcc70a93a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_kare, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct  8 06:22:17 np0005475493 systemd[1]: libpod-conmon-95b9b0940bc116f2e3949f4642de8c870a7fb1a3e6f63902d450c72dcc70a93a.scope: Deactivated successfully.
Oct  8 06:22:17 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1132: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Oct  8 06:22:17 np0005475493 podman[285792]: 2025-10-08 10:22:17.514402422 +0000 UTC m=+0.068967148 container create ef758e3a06958bf7ce8321a54b5a6f648faccc3d2f35d2e4796060fe7a0f6dca (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_babbage, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Oct  8 06:22:17 np0005475493 podman[285792]: 2025-10-08 10:22:17.482257615 +0000 UTC m=+0.036822431 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 06:22:17 np0005475493 systemd[1]: Started libpod-conmon-ef758e3a06958bf7ce8321a54b5a6f648faccc3d2f35d2e4796060fe7a0f6dca.scope.
Oct  8 06:22:17 np0005475493 systemd[1]: Started libcrun container.
Oct  8 06:22:17 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/61b6f9d91a5a6d4b064abb155d2bda1388360b25a5b8179e1fc86c141a26fdb4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  8 06:22:17 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/61b6f9d91a5a6d4b064abb155d2bda1388360b25a5b8179e1fc86c141a26fdb4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 06:22:17 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/61b6f9d91a5a6d4b064abb155d2bda1388360b25a5b8179e1fc86c141a26fdb4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 06:22:17 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/61b6f9d91a5a6d4b064abb155d2bda1388360b25a5b8179e1fc86c141a26fdb4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  8 06:22:17 np0005475493 podman[285792]: 2025-10-08 10:22:17.628832097 +0000 UTC m=+0.183396803 container init ef758e3a06958bf7ce8321a54b5a6f648faccc3d2f35d2e4796060fe7a0f6dca (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_babbage, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct  8 06:22:17 np0005475493 podman[285792]: 2025-10-08 10:22:17.636863346 +0000 UTC m=+0.191428062 container start ef758e3a06958bf7ce8321a54b5a6f648faccc3d2f35d2e4796060fe7a0f6dca (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_babbage, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  8 06:22:17 np0005475493 podman[285792]: 2025-10-08 10:22:17.64069 +0000 UTC m=+0.195254726 container attach ef758e3a06958bf7ce8321a54b5a6f648faccc3d2f35d2e4796060fe7a0f6dca (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_babbage, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  8 06:22:17 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  8 06:22:17 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  8 06:22:17 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 06:22:17 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 06:22:18 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 06:22:18 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 06:22:18 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 06:22:18 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 06:22:18 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:22:18 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:22:18 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:22:18.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:22:18 np0005475493 lvm[285884]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct  8 06:22:18 np0005475493 lvm[285884]: VG ceph_vg0 finished
Oct  8 06:22:18 np0005475493 compassionate_babbage[285809]: {}
Oct  8 06:22:18 np0005475493 systemd[1]: libpod-ef758e3a06958bf7ce8321a54b5a6f648faccc3d2f35d2e4796060fe7a0f6dca.scope: Deactivated successfully.
Oct  8 06:22:18 np0005475493 systemd[1]: libpod-ef758e3a06958bf7ce8321a54b5a6f648faccc3d2f35d2e4796060fe7a0f6dca.scope: Consumed 1.283s CPU time.
Oct  8 06:22:18 np0005475493 podman[285887]: 2025-10-08 10:22:18.477431929 +0000 UTC m=+0.026966571 container died ef758e3a06958bf7ce8321a54b5a6f648faccc3d2f35d2e4796060fe7a0f6dca (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_babbage, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct  8 06:22:18 np0005475493 systemd[1]: var-lib-containers-storage-overlay-61b6f9d91a5a6d4b064abb155d2bda1388360b25a5b8179e1fc86c141a26fdb4-merged.mount: Deactivated successfully.
Oct  8 06:22:18 np0005475493 podman[285887]: 2025-10-08 10:22:18.52732542 +0000 UTC m=+0.076860072 container remove ef758e3a06958bf7ce8321a54b5a6f648faccc3d2f35d2e4796060fe7a0f6dca (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_babbage, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  8 06:22:18 np0005475493 systemd[1]: libpod-conmon-ef758e3a06958bf7ce8321a54b5a6f648faccc3d2f35d2e4796060fe7a0f6dca.scope: Deactivated successfully.
Oct  8 06:22:18 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  8 06:22:18 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:22:18 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  8 06:22:18 np0005475493 nova_compute[262220]: 2025-10-08 10:22:18.596 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:22:18 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:22:18 np0005475493 systemd-logind[798]: New session 58 of user zuul.
Oct  8 06:22:18 np0005475493 systemd[1]: Started Session 58 of User zuul.
Oct  8 06:22:18 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:22:18.857Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct  8 06:22:18 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:22:18.857Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct  8 06:22:18 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:22:18.858Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:22:19 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:22:18 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  8 06:22:19 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:22:18 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  8 06:22:19 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:22:18 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 06:22:19 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:22:19 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  8 06:22:19 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:22:19 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:22:19 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:22:19.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:22:19 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:22:19 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1133: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Oct  8 06:22:19 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:22:19 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:22:20 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:22:20 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:22:20 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:22:20.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:22:20 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Oct  8 06:22:20 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3627942894' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  8 06:22:20 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Oct  8 06:22:20 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3627942894' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  8 06:22:20 np0005475493 nova_compute[262220]: 2025-10-08 10:22:20.857 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:22:21 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:22:21 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:22:21 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:22:21.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:22:21 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.16275 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Oct  8 06:22:21 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.26111 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Oct  8 06:22:21 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1134: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Oct  8 06:22:21 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.25912 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Oct  8 06:22:21 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.16287 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct  8 06:22:21 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.26123 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct  8 06:22:22 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.25921 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct  8 06:22:22 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status"} v 0)
Oct  8 06:22:22 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2372707832' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct  8 06:22:22 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:22:22 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:22:22 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:22:22.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:22:23 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:22:23 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:22:23 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:22:23.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:22:23 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1135: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Oct  8 06:22:23 np0005475493 nova_compute[262220]: 2025-10-08 10:22:23.601 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:22:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:22:23 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  8 06:22:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:22:23 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  8 06:22:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:22:23 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 06:22:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:22:24 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  8 06:22:24 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:22:24 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:22:24 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:22:24.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:22:24 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:22:25 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:22:25 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:22:25 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:22:25.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:22:25 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1136: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  8 06:22:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:22:25] "GET /metrics HTTP/1.1" 200 48453 "" "Prometheus/2.51.0"
Oct  8 06:22:25 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:22:25] "GET /metrics HTTP/1.1" 200 48453 "" "Prometheus/2.51.0"
Oct  8 06:22:25 np0005475493 nova_compute[262220]: 2025-10-08 10:22:25.859 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:22:25 np0005475493 podman[286268]: 2025-10-08 10:22:25.920563494 +0000 UTC m=+0.079067334 container health_status 750e81e9592502f2fa35d047c8ae9bd75eb38ed415148db558454f5281397e7f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct  8 06:22:26 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:22:26 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:22:26 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:22:26.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:22:27 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:22:27.208Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:22:27 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:22:27 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:22:27 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:22:27.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:22:27 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1137: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  8 06:22:27 np0005475493 ovs-vsctl[286326]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Oct  8 06:22:28 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:22:28 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:22:28 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:22:28.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:22:28 np0005475493 virtqemud[261885]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Oct  8 06:22:28 np0005475493 virtqemud[261885]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Oct  8 06:22:28 np0005475493 virtqemud[261885]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Oct  8 06:22:28 np0005475493 nova_compute[262220]: 2025-10-08 10:22:28.601 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:22:28 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:22:28.859Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct  8 06:22:28 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:22:28.859Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:22:29 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:22:28 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  8 06:22:29 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:22:28 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  8 06:22:29 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:22:28 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 06:22:29 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:22:29 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  8 06:22:29 np0005475493 ceph-mds[95385]: mds.cephfs.compute-0.lphril asok_command: cache status {prefix=cache status} (starting...)
Oct  8 06:22:29 np0005475493 ceph-mds[95385]: mds.cephfs.compute-0.lphril Can't run that command on an inactive MDS!
Oct  8 06:22:29 np0005475493 ceph-mds[95385]: mds.cephfs.compute-0.lphril asok_command: client ls {prefix=client ls} (starting...)
Oct  8 06:22:29 np0005475493 ceph-mds[95385]: mds.cephfs.compute-0.lphril Can't run that command on an inactive MDS!
Oct  8 06:22:29 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:22:29 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:22:29 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:22:29.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:22:29 np0005475493 lvm[286684]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct  8 06:22:29 np0005475493 lvm[286684]: VG ceph_vg0 finished
Oct  8 06:22:29 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:22:29 np0005475493 kernel: block sr0: the capability attribute has been deprecated.
Oct  8 06:22:29 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1138: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  8 06:22:29 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.26138 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Oct  8 06:22:29 np0005475493 ceph-mds[95385]: mds.cephfs.compute-0.lphril asok_command: damage ls {prefix=damage ls} (starting...)
Oct  8 06:22:29 np0005475493 ceph-mds[95385]: mds.cephfs.compute-0.lphril Can't run that command on an inactive MDS!
Oct  8 06:22:29 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.16308 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Oct  8 06:22:29 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "report"} v 0)
Oct  8 06:22:29 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Oct  8 06:22:30 np0005475493 ceph-mds[95385]: mds.cephfs.compute-0.lphril asok_command: dump loads {prefix=dump loads} (starting...)
Oct  8 06:22:30 np0005475493 ceph-mds[95385]: mds.cephfs.compute-0.lphril Can't run that command on an inactive MDS!
Oct  8 06:22:30 np0005475493 podman[286844]: 2025-10-08 10:22:30.02317628 +0000 UTC m=+0.075606933 container health_status 96c17e36c3e57b043a764fc4d64e20610b8683e4881cc1857643eca7aeb37784 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  8 06:22:30 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "report"} v 0)
Oct  8 06:22:30 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3452932124' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Oct  8 06:22:30 np0005475493 podman[286845]: 2025-10-08 10:22:30.042704401 +0000 UTC m=+0.095218916 container health_status 1ece418ccdf19a8019e8be8e665c938dc3c333817a211973536e2416f095c311 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=multipathd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3)
Oct  8 06:22:30 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.26150 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Oct  8 06:22:30 np0005475493 ceph-mds[95385]: mds.cephfs.compute-0.lphril asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Oct  8 06:22:30 np0005475493 ceph-mds[95385]: mds.cephfs.compute-0.lphril Can't run that command on an inactive MDS!
Oct  8 06:22:30 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.16323 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Oct  8 06:22:30 np0005475493 ceph-mds[95385]: mds.cephfs.compute-0.lphril asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Oct  8 06:22:30 np0005475493 ceph-mds[95385]: mds.cephfs.compute-0.lphril Can't run that command on an inactive MDS!
Oct  8 06:22:30 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:22:30 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:22:30 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:22:30.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:22:30 np0005475493 ceph-mds[95385]: mds.cephfs.compute-0.lphril asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Oct  8 06:22:30 np0005475493 ceph-mds[95385]: mds.cephfs.compute-0.lphril Can't run that command on an inactive MDS!
Oct  8 06:22:30 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  8 06:22:30 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1219825122' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  8 06:22:30 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.26165 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Oct  8 06:22:30 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.25957 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Oct  8 06:22:30 np0005475493 ceph-mds[95385]: mds.cephfs.compute-0.lphril asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Oct  8 06:22:30 np0005475493 ceph-mds[95385]: mds.cephfs.compute-0.lphril Can't run that command on an inactive MDS!
Oct  8 06:22:30 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.16341 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Oct  8 06:22:30 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "report"} v 0)
Oct  8 06:22:30 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Oct  8 06:22:30 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config log"} v 0)
Oct  8 06:22:30 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4093340126' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Oct  8 06:22:30 np0005475493 ceph-mds[95385]: mds.cephfs.compute-0.lphril asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Oct  8 06:22:30 np0005475493 ceph-mds[95385]: mds.cephfs.compute-0.lphril Can't run that command on an inactive MDS!
Oct  8 06:22:30 np0005475493 nova_compute[262220]: 2025-10-08 10:22:30.865 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:22:30 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.25978 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Oct  8 06:22:30 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.26183 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Oct  8 06:22:31 np0005475493 ceph-mds[95385]: mds.cephfs.compute-0.lphril asok_command: get subtrees {prefix=get subtrees} (starting...)
Oct  8 06:22:31 np0005475493 ceph-mds[95385]: mds.cephfs.compute-0.lphril Can't run that command on an inactive MDS!
Oct  8 06:22:31 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.16356 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Oct  8 06:22:31 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:22:31 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:22:31 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:22:31.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:22:31 np0005475493 ceph-mds[95385]: mds.cephfs.compute-0.lphril asok_command: ops {prefix=ops} (starting...)
Oct  8 06:22:31 np0005475493 ceph-mds[95385]: mds.cephfs.compute-0.lphril Can't run that command on an inactive MDS!
Oct  8 06:22:31 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config-key dump"} v 0)
Oct  8 06:22:31 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1177024407' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Oct  8 06:22:31 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.26201 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Oct  8 06:22:31 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1139: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  8 06:22:31 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.26216 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct  8 06:22:31 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.16377 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct  8 06:22:31 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0)
Oct  8 06:22:31 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2799919786' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Oct  8 06:22:31 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.26008 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Oct  8 06:22:31 np0005475493 ceph-mds[95385]: mds.cephfs.compute-0.lphril asok_command: session ls {prefix=session ls} (starting...)
Oct  8 06:22:31 np0005475493 ceph-mds[95385]: mds.cephfs.compute-0.lphril Can't run that command on an inactive MDS!
Oct  8 06:22:32 np0005475493 ceph-mds[95385]: mds.cephfs.compute-0.lphril asok_command: status {prefix=status} (starting...)
Oct  8 06:22:32 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.16392 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Oct  8 06:22:32 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.26237 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Oct  8 06:22:32 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:22:32 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:22:32 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:22:32.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:22:32 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.26264 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct  8 06:22:32 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "features"} v 0)
Oct  8 06:22:32 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/932372394' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Oct  8 06:22:32 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "features"} v 0)
Oct  8 06:22:32 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Oct  8 06:22:32 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Oct  8 06:22:32 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1683114755' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct  8 06:22:32 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  8 06:22:32 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  8 06:22:32 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.26065 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Oct  8 06:22:33 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0)
Oct  8 06:22:33 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1703494652' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Oct  8 06:22:33 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Oct  8 06:22:33 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/238833166' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct  8 06:22:33 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:22:33 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:22:33 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:22:33.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:22:33 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.16434 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Oct  8 06:22:33 np0005475493 ceph-mgr[73869]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Oct  8 06:22:33 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T10:22:33.382+0000 7fa108681640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Oct  8 06:22:33 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "features"} v 0)
Oct  8 06:22:33 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Oct  8 06:22:33 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.26303 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Oct  8 06:22:33 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T10:22:33.440+0000 7fa108681640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Oct  8 06:22:33 np0005475493 ceph-mgr[73869]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Oct  8 06:22:33 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1140: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  8 06:22:33 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0)
Oct  8 06:22:33 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3246441215' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct  8 06:22:33 np0005475493 nova_compute[262220]: 2025-10-08 10:22:33.602 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:22:33 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0)
Oct  8 06:22:33 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3831085452' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Oct  8 06:22:33 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr stat"} v 0)
Oct  8 06:22:33 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1629341640' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Oct  8 06:22:34 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:22:33 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  8 06:22:34 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:22:33 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  8 06:22:34 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:22:33 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 06:22:34 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:22:34 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  8 06:22:34 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.26122 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Oct  8 06:22:34 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T10:22:34.267+0000 7fa108681640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Oct  8 06:22:34 np0005475493 ceph-mgr[73869]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Oct  8 06:22:34 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:22:34 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:22:34 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:22:34.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:22:34 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr versions"} v 0)
Oct  8 06:22:34 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1192288068' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct  8 06:22:34 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:22:34 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0)
Oct  8 06:22:34 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2790329669' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Oct  8 06:22:34 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.26360 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Oct  8 06:22:34 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.16476 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Oct  8 06:22:34 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr dump"} v 0)
Oct  8 06:22:34 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1739770367' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct  8 06:22:34 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.26378 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct  8 06:22:35 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.16488 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct  8 06:22:35 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:22:35 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:22:35 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:22:35.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:22:35 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Oct  8 06:22:35 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/410494144' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct  8 06:22:35 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.26393 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83132416 unmapped: 4907008 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83132416 unmapped: 4907008 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83140608 unmapped: 4898816 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fca5d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 989380 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83140608 unmapped: 4898816 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83140608 unmapped: 4898816 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 ms_handle_reset con 0x559f2c59ac00 session 0x559f2dc09680
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 ms_handle_reset con 0x559f2b37f400 session 0x559f2d5eeb40
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83148800 unmapped: 4890624 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fca5d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83148800 unmapped: 4890624 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83165184 unmapped: 4874240 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 989380 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83173376 unmapped: 4866048 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fca5d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83189760 unmapped: 4849664 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fca5d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 19.693235397s of 19.816581726s, submitted: 2
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83197952 unmapped: 4841472 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83197952 unmapped: 4841472 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83206144 unmapped: 4833280 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 989512 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83206144 unmapped: 4833280 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83206144 unmapped: 4833280 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fca5d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83214336 unmapped: 4825088 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fca5d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fca5d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83214336 unmapped: 4825088 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83222528 unmapped: 4816896 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 989644 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83222528 unmapped: 4816896 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83230720 unmapped: 4808704 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fca5d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83230720 unmapped: 4808704 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.952174187s of 10.981819153s, submitted: 2
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83238912 unmapped: 4800512 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83238912 unmapped: 4800512 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990565 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fca5d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83238912 unmapped: 4800512 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83247104 unmapped: 4792320 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83247104 unmapped: 4792320 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fca5d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83255296 unmapped: 4784128 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83263488 unmapped: 4775936 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990433 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83271680 unmapped: 4767744 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83271680 unmapped: 4767744 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83271680 unmapped: 4767744 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fca5d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83279872 unmapped: 4759552 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 ms_handle_reset con 0x559f2d0e9400 session 0x559f2d5ef0e0
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 ms_handle_reset con 0x559f2b37e800 session 0x559f2dbe85a0
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83279872 unmapped: 4759552 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990301 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83288064 unmapped: 4751360 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fca5d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83296256 unmapped: 4743168 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83304448 unmapped: 4734976 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83304448 unmapped: 4734976 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fca5d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83304448 unmapped: 4734976 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990301 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83312640 unmapped: 4726784 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83312640 unmapped: 4726784 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83320832 unmapped: 4718592 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83320832 unmapped: 4718592 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fca5d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 21.046033859s of 21.209218979s, submitted: 4
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83337216 unmapped: 4702208 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fca5d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [0,0,0,0,0,0,0,1])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990433 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83337216 unmapped: 4702208 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83337216 unmapped: 4702208 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83345408 unmapped: 4694016 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fca5d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83345408 unmapped: 4694016 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fca5d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83345408 unmapped: 4694016 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991945 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83353600 unmapped: 4685824 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83353600 unmapped: 4685824 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 4677632 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 4677632 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83369984 unmapped: 4669440 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fca5d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991354 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83369984 unmapped: 4669440 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 4653056 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 4653056 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fca5d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 4653056 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83394560 unmapped: 4644864 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991354 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83394560 unmapped: 4644864 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fca5d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83402752 unmapped: 4636672 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 17.658838272s of 17.800985336s, submitted: 3
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83410944 unmapped: 4628480 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83410944 unmapped: 4628480 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83427328 unmapped: 4612096 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fca5d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991222 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83427328 unmapped: 4612096 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83427328 unmapped: 4612096 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fca5d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 4603904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fca5d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 4603904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83443712 unmapped: 4595712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991222 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83443712 unmapped: 4595712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83443712 unmapped: 4595712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 4587520 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fca5d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 4587520 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 4579328 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991222 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 4579328 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fca5d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 4571136 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 4571136 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83476480 unmapped: 4562944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83476480 unmapped: 4562944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991222 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83492864 unmapped: 4546560 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83492864 unmapped: 4546560 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fca5d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83501056 unmapped: 4538368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fca5d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83509248 unmapped: 4530176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83509248 unmapped: 4530176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991222 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83517440 unmapped: 4521984 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fca5d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83525632 unmapped: 4513792 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83525632 unmapped: 4513792 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 4505600 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fca5d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 4505600 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991222 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83542016 unmapped: 4497408 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83542016 unmapped: 4497408 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83542016 unmapped: 4497408 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83550208 unmapped: 4489216 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fca5d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83550208 unmapped: 4489216 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991222 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83558400 unmapped: 4481024 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83558400 unmapped: 4481024 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83566592 unmapped: 4472832 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83566592 unmapped: 4472832 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83566592 unmapped: 4472832 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fca5d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991222 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83582976 unmapped: 4456448 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83582976 unmapped: 4456448 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83591168 unmapped: 4448256 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83591168 unmapped: 4448256 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83591168 unmapped: 4448256 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991222 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83599360 unmapped: 4440064 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fca5d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83599360 unmapped: 4440064 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fca5d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83607552 unmapped: 4431872 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83607552 unmapped: 4431872 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83632128 unmapped: 4407296 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fca5d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991222 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83640320 unmapped: 4399104 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fca5d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83648512 unmapped: 4390912 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83648512 unmapped: 4390912 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fca5d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83648512 unmapped: 4390912 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83648512 unmapped: 4390912 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991222 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 4382720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 4382720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fca5d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83664896 unmapped: 4374528 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83664896 unmapped: 4374528 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83673088 unmapped: 4366336 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fca5d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991222 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83673088 unmapped: 4366336 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83673088 unmapped: 4366336 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83681280 unmapped: 4358144 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83681280 unmapped: 4358144 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83689472 unmapped: 4349952 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991222 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83697664 unmapped: 4341760 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fca5d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83705856 unmapped: 4333568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83705856 unmapped: 4333568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fca5d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83705856 unmapped: 4333568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fca5d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83714048 unmapped: 4325376 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991222 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83722240 unmapped: 4317184 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 4308992 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 4308992 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 4308992 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 4300800 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fca5d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fca5d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991222 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83746816 unmapped: 4292608 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83755008 unmapped: 4284416 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83755008 unmapped: 4284416 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83763200 unmapped: 4276224 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fca5d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83763200 unmapped: 4276224 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991222 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83763200 unmapped: 4276224 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83771392 unmapped: 4268032 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83771392 unmapped: 4268032 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fca5d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83779584 unmapped: 4259840 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fca5d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83779584 unmapped: 4259840 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991222 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fca5d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83787776 unmapped: 4251648 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83795968 unmapped: 4243456 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83795968 unmapped: 4243456 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83804160 unmapped: 4235264 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83804160 unmapped: 4235264 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991222 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 4227072 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fca5d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 4227072 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 4227072 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fca5d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83820544 unmapped: 4218880 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83828736 unmapped: 4210688 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991222 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83836928 unmapped: 4202496 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83836928 unmapped: 4202496 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83836928 unmapped: 4202496 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fca5d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83845120 unmapped: 4194304 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83845120 unmapped: 4194304 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fca5d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991222 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83853312 unmapped: 4186112 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83853312 unmapped: 4186112 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83853312 unmapped: 4186112 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83861504 unmapped: 4177920 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83869696 unmapped: 4169728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991222 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83877888 unmapped: 4161536 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fca5d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83877888 unmapped: 4161536 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fca5d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83886080 unmapped: 4153344 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83886080 unmapped: 4153344 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83886080 unmapped: 4153344 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991222 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fca5d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83902464 unmapped: 4136960 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83910656 unmapped: 4128768 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83910656 unmapped: 4128768 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83910656 unmapped: 4128768 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83918848 unmapped: 4120576 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991222 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83927040 unmapped: 4112384 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fca5d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83927040 unmapped: 4112384 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83935232 unmapped: 4104192 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83935232 unmapped: 4104192 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83943424 unmapped: 4096000 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fca5d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991222 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83943424 unmapped: 4096000 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83951616 unmapped: 4087808 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83951616 unmapped: 4087808 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83951616 unmapped: 4087808 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fca5d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83968000 unmapped: 4071424 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991222 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83968000 unmapped: 4071424 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83976192 unmapped: 4063232 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83976192 unmapped: 4063232 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83976192 unmapped: 4063232 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 83984384 unmapped: 4055040 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fca5d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991222 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84000768 unmapped: 4038656 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84008960 unmapped: 4030464 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84008960 unmapped: 4030464 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fca5d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fca5d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84017152 unmapped: 4022272 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84017152 unmapped: 4022272 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991222 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84017152 unmapped: 4022272 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 4014080 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 4014080 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fca5d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84033536 unmapped: 4005888 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 3997696 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991222 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 3997696 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fca5d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84049920 unmapped: 3989504 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fca5d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84049920 unmapped: 3989504 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 3981312 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fca5d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 3981312 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991222 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 3973120 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 3973120 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 3973120 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84074496 unmapped: 3964928 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fca5d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84074496 unmapped: 3964928 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991222 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84082688 unmapped: 3956736 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84082688 unmapped: 3956736 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fca5d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84090880 unmapped: 3948544 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84099072 unmapped: 3940352 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84099072 unmapped: 3940352 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991222 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84107264 unmapped: 3932160 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fca5d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84115456 unmapped: 3923968 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84115456 unmapped: 3923968 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84123648 unmapped: 3915776 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84131840 unmapped: 3907584 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991222 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fca5d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84148224 unmapped: 3891200 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84156416 unmapped: 3883008 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84156416 unmapped: 3883008 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 ms_handle_reset con 0x559f2d03d400 session 0x559f2c8b10e0
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84164608 unmapped: 3874816 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84164608 unmapped: 3874816 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991222 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84164608 unmapped: 3874816 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fca5d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84172800 unmapped: 3866624 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84172800 unmapped: 3866624 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84180992 unmapped: 3858432 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fca5d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84180992 unmapped: 3858432 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fca5d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991222 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84189184 unmapped: 3850240 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84189184 unmapped: 3850240 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84189184 unmapped: 3850240 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84197376 unmapped: 3842048 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 172.042541504s of 172.046844482s, submitted: 1
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84205568 unmapped: 3833856 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991354 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84221952 unmapped: 3817472 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fca5d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84221952 unmapped: 3817472 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84221952 unmapped: 3817472 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84230144 unmapped: 3809280 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fca5d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84230144 unmapped: 3809280 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fca5d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992866 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84238336 unmapped: 3801088 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84238336 unmapped: 3801088 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84246528 unmapped: 3792896 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fca5d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 600.1 total, 600.0 interval
Cumulative writes: 8245 writes, 33K keys, 8245 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.04 MB/s
Cumulative WAL: 8245 writes, 1525 syncs, 5.41 writes per sync, written: 0.02 GB, 0.04 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 8245 writes, 33K keys, 8245 commit groups, 1.0 writes per commit group, ingest: 21.32 MB, 0.04 MB/s
Interval WAL: 8245 writes, 1525 syncs, 5.41 writes per sync, written: 0.02 GB, 0.04 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x559f28fb7350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-0] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x559f28fb7350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84328448 unmapped: 3710976 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84344832 unmapped: 3694592 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fca5d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992275 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84353024 unmapped: 3686400 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84353024 unmapped: 3686400 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84361216 unmapped: 3678208 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fca5d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84361216 unmapped: 3678208 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84369408 unmapped: 3670016 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.031002045s of 16.042385101s, submitted: 3
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992143 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84377600 unmapped: 3661824 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84385792 unmapped: 3653632 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fca5d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84385792 unmapped: 3653632 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84393984 unmapped: 3645440 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fca5d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84393984 unmapped: 3645440 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992143 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84393984 unmapped: 3645440 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 3637248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 3637248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 3637248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fca5d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 3629056 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992143 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84418560 unmapped: 3620864 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84426752 unmapped: 3612672 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84426752 unmapped: 3612672 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fca5d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84426752 unmapped: 3612672 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 3604480 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992143 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 3604480 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fca5d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84451328 unmapped: 3588096 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84459520 unmapped: 3579904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992143 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fca5d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fca5d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 ms_handle_reset con 0x559f2d0e9800 session 0x559f2dbe8d20
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 ms_handle_reset con 0x559f2d0e9000 session 0x559f2c8afa40
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fca5d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 3563520 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fca5d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84484096 unmapped: 3555328 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992143 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 3547136 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 3547136 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 ms_handle_reset con 0x559f2b37f400 session 0x559f2cc78d20
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 ms_handle_reset con 0x559f2b37e800 session 0x559f2d9601e0
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 3547136 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84500480 unmapped: 3538944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fca5d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84500480 unmapped: 3538944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992143 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 3530752 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84516864 unmapped: 3522560 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 32.213760376s of 32.217681885s, submitted: 1
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 3514368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 3506176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fca5d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 3506176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993787 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 3497984 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 3497984 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fca5d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 3497984 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84549632 unmapped: 3489792 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fca5d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84549632 unmapped: 3489792 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995431 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84557824 unmapped: 3481600 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fca5d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84557824 unmapped: 3481600 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84566016 unmapped: 3473408 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fca5d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84566016 unmapped: 3473408 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.187956810s of 12.277172089s, submitted: 4
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84566016 unmapped: 3473408 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996943 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84574208 unmapped: 3465216 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84574208 unmapped: 3465216 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fca5d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84574208 unmapped: 3465216 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84582400 unmapped: 3457024 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84590592 unmapped: 3448832 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fca5d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995629 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 3432448 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 3432448 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3424256 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fca5d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3424256 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3424256 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995497 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3416064 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3416064 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84631552 unmapped: 3407872 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fca5d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84631552 unmapped: 3407872 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84631552 unmapped: 3407872 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995497 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84639744 unmapped: 3399680 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84639744 unmapped: 3399680 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84647936 unmapped: 3391488 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84647936 unmapped: 3391488 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fca5d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84656128 unmapped: 3383296 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fca5d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 ms_handle_reset con 0x559f2d0e9400 session 0x559f2d9534a0
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 ms_handle_reset con 0x559f2d680c00 session 0x559f2a974f00
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995497 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84656128 unmapped: 3383296 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84664320 unmapped: 3375104 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84664320 unmapped: 3375104 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fca5d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84664320 unmapped: 3375104 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84672512 unmapped: 3366912 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995497 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84672512 unmapped: 3366912 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84672512 unmapped: 3366912 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84680704 unmapped: 3358720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fca5d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84680704 unmapped: 3358720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 29.902038574s of 29.934965134s, submitted: 5
Oct  8 06:22:35 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.26164 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84721664 unmapped: 3317760 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995569 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84738048 unmapped: 3301376 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84770816 unmapped: 3268608 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84893696 unmapped: 3145728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [0,0,0,1])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 84934656 unmapped: 3104768 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85000192 unmapped: 3039232 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995497 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85000192 unmapped: 3039232 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85016576 unmapped: 3022848 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85016576 unmapped: 3022848 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85016576 unmapped: 3022848 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85041152 unmapped: 2998272 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997141 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85041152 unmapped: 2998272 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85041152 unmapped: 2998272 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.192111969s of 12.802393913s, submitted: 341
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85041152 unmapped: 2998272 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 2990080 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 2990080 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997930 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85057536 unmapped: 2981888 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85057536 unmapped: 2981888 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85057536 unmapped: 2981888 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85057536 unmapped: 2981888 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85057536 unmapped: 2981888 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997930 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85057536 unmapped: 2981888 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85057536 unmapped: 2981888 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85057536 unmapped: 2981888 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85057536 unmapped: 2981888 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85057536 unmapped: 2981888 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997930 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85057536 unmapped: 2981888 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85057536 unmapped: 2981888 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85057536 unmapped: 2981888 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85057536 unmapped: 2981888 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85065728 unmapped: 2973696 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997930 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85065728 unmapped: 2973696 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85073920 unmapped: 2965504 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85073920 unmapped: 2965504 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85073920 unmapped: 2965504 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85082112 unmapped: 2957312 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997930 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85090304 unmapped: 2949120 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85098496 unmapped: 2940928 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85098496 unmapped: 2940928 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 ms_handle_reset con 0x559f2b37f400 session 0x559f2c648960
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 ms_handle_reset con 0x559f2b37e800 session 0x559f2d9612c0
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85098496 unmapped: 2940928 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85106688 unmapped: 2932736 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997930 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85106688 unmapped: 2932736 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85106688 unmapped: 2932736 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85106688 unmapped: 2932736 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85106688 unmapped: 2932736 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85106688 unmapped: 2932736 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997930 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85106688 unmapped: 2932736 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85106688 unmapped: 2932736 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85106688 unmapped: 2932736 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85106688 unmapped: 2932736 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 37.170238495s of 37.409439087s, submitted: 3
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85106688 unmapped: 2932736 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 998062 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85106688 unmapped: 2932736 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85106688 unmapped: 2932736 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85114880 unmapped: 2924544 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85114880 unmapped: 2924544 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85114880 unmapped: 2924544 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999574 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85114880 unmapped: 2924544 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85114880 unmapped: 2924544 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85114880 unmapped: 2924544 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85114880 unmapped: 2924544 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85131264 unmapped: 2908160 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 998983 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85131264 unmapped: 2908160 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.123427391s of 12.157036781s, submitted: 3
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85131264 unmapped: 2908160 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85131264 unmapped: 2908160 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85131264 unmapped: 2908160 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85131264 unmapped: 2908160 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 998260 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85131264 unmapped: 2908160 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85131264 unmapped: 2908160 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85131264 unmapped: 2908160 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85131264 unmapped: 2908160 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85131264 unmapped: 2908160 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 998260 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85131264 unmapped: 2908160 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85139456 unmapped: 2899968 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85139456 unmapped: 2899968 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85139456 unmapped: 2899968 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85139456 unmapped: 2899968 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 998260 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85139456 unmapped: 2899968 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85139456 unmapped: 2899968 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85139456 unmapped: 2899968 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85139456 unmapped: 2899968 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85147648 unmapped: 2891776 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 998260 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85147648 unmapped: 2891776 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85147648 unmapped: 2891776 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85147648 unmapped: 2891776 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 ms_handle_reset con 0x559f2d0e8c00 session 0x559f2d960d20
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 ms_handle_reset con 0x559f2c59ac00 session 0x559f2da1d2c0
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85147648 unmapped: 2891776 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85147648 unmapped: 2891776 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 998260 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85147648 unmapped: 2891776 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85147648 unmapped: 2891776 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85147648 unmapped: 2891776 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85155840 unmapped: 2883584 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85155840 unmapped: 2883584 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 998260 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85155840 unmapped: 2883584 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85155840 unmapped: 2883584 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85155840 unmapped: 2883584 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85155840 unmapped: 2883584 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 33.183380127s of 33.190643311s, submitted: 2
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85155840 unmapped: 2883584 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 998392 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85155840 unmapped: 2883584 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85155840 unmapped: 2883584 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85155840 unmapped: 2883584 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85155840 unmapped: 2883584 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85164032 unmapped: 2875392 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 998392 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85164032 unmapped: 2875392 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85164032 unmapped: 2875392 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85164032 unmapped: 2875392 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85164032 unmapped: 2875392 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 2859008 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 998392 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 2859008 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.049246788s of 12.226043701s, submitted: 1
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 2850816 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 2850816 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 2850816 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 2850816 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997669 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 2850816 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 2850816 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 2850816 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 2850816 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 2850816 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997669 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 2850816 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 ms_handle_reset con 0x559f2b37f400 session 0x559f2cbf63c0
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 ms_handle_reset con 0x559f2b37e800 session 0x559f2d82eb40
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 2850816 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 2850816 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 2850816 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 2850816 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997669 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 2850816 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 2850816 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 2850816 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 2850816 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 2850816 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997669 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 2850816 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 20.388336182s of 20.395538330s, submitted: 2
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 2850816 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 2842624 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 2842624 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 2842624 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997801 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 2842624 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 2826240 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 2826240 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 2826240 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 2826240 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997801 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 2826240 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 2826240 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 2826240 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 2826240 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.363765717s of 12.369489670s, submitted: 1
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 2826240 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997669 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 2826240 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 2826240 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 2826240 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 2826240 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 2826240 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997669 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 2826240 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 2826240 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 2826240 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 2826240 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 2826240 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997669 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 2826240 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 2826240 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 2826240 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85221376 unmapped: 2818048 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85221376 unmapped: 2818048 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997669 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85221376 unmapped: 2818048 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85229568 unmapped: 2809856 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85229568 unmapped: 2809856 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85237760 unmapped: 2801664 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85237760 unmapped: 2801664 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997669 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85237760 unmapped: 2801664 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85237760 unmapped: 2801664 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85237760 unmapped: 2801664 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 2793472 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 2793472 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997669 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 2793472 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 ms_handle_reset con 0x559f2d0e8400 session 0x559f2d953680
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 ms_handle_reset con 0x559f2d0e8800 session 0x559f2dbe94a0
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 2793472 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 2793472 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 2793472 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 2793472 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997669 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 2793472 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 2793472 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 2793472 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 2793472 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 2793472 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997669 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 2793472 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 2793472 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 38.191692352s of 38.196037292s, submitted: 1
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 2793472 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 2793472 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 2785280 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999313 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 2785280 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 2785280 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 2785280 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 2785280 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 2785280 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1000825 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 2785280 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 2777088 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 2777088 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 2777088 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 2777088 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1000825 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 2777088 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 2777088 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.237012863s of 15.247964859s, submitted: 3
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 2777088 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 2777088 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 2777088 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1000693 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 2777088 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 2777088 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 2777088 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 2777088 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 2777088 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1000693 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 2768896 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 2768896 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 2768896 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 2768896 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 2768896 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1000693 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 2768896 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 2768896 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 2768896 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 2768896 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 2768896 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1000693 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 2768896 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 2768896 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 2768896 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 2760704 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 2760704 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1000693 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 2760704 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 2760704 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 2768896 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 2768896 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 2760704 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 ms_handle_reset con 0x559f2d0e8c00 session 0x559f2d82f2c0
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1000693 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 2760704 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 2760704 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 2760704 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 2752512 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 2752512 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1000693 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 2752512 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 2752512 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 2752512 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 2752512 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 2752512 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1000693 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 2752512 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 39.051769257s of 39.055622101s, submitted: 1
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 2752512 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 2752512 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 ms_handle_reset con 0x559f2d3c4800 session 0x559f2d961680
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 ms_handle_reset con 0x559f2d680c00 session 0x559f2a95b680
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 2752512 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 2752512 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1000825 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 2752512 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 2760704 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 2760704 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 2760704 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 2760704 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1141: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1000825 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 2760704 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 2760704 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 2760704 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 2752512 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.737722397s of 12.740792274s, submitted: 1
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 2752512 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1000957 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 2752512 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86335488 unmapped: 1703936 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85295104 unmapped: 2744320 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85295104 unmapped: 2744320 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85295104 unmapped: 2744320 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1000234 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85295104 unmapped: 2744320 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85295104 unmapped: 2744320 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85295104 unmapped: 2744320 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85295104 unmapped: 2744320 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85295104 unmapped: 2744320 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1000234 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85295104 unmapped: 2744320 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.218849182s of 12.235140800s, submitted: 3
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85295104 unmapped: 2744320 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85295104 unmapped: 2744320 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85295104 unmapped: 2744320 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85295104 unmapped: 2744320 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1000234 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85295104 unmapped: 2744320 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 2736128 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 2736128 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 2736128 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 2736128 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1000102 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 2736128 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 2736128 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 2736128 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 2736128 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 2736128 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1000102 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 2736128 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85319680 unmapped: 2719744 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85319680 unmapped: 2719744 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85319680 unmapped: 2719744 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85319680 unmapped: 2719744 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1000102 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85319680 unmapped: 2719744 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85319680 unmapped: 2719744 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85319680 unmapped: 2719744 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85319680 unmapped: 2719744 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 ms_handle_reset con 0x559f2d0e9800 session 0x559f2c4243c0
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 ms_handle_reset con 0x559f2d0e9000 session 0x559f2d82e3c0
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85319680 unmapped: 2719744 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1000102 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85319680 unmapped: 2719744 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85319680 unmapped: 2719744 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85319680 unmapped: 2719744 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85319680 unmapped: 2719744 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85319680 unmapped: 2719744 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1000102 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85319680 unmapped: 2719744 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread fragmentation_score=0.000031 took=0.000080s
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85327872 unmapped: 2711552 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85327872 unmapped: 2711552 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85327872 unmapped: 2711552 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85327872 unmapped: 2711552 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1000102 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 34.856376648s of 34.864582062s, submitted: 2
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85327872 unmapped: 2711552 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85327872 unmapped: 2711552 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 ms_handle_reset con 0x559f2b37e000 session 0x559f2a9a3a40
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85336064 unmapped: 2703360 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85336064 unmapped: 2703360 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85336064 unmapped: 2703360 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001746 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 2695168 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 2695168 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 2695168 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 2695168 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 2695168 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001746 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 2695168 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 2695168 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 ms_handle_reset con 0x559f2d0e9000 session 0x559f2d9534a0
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 ms_handle_reset con 0x559f2b37e800 session 0x559f2d9612c0
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 2695168 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.321186066s of 12.326921463s, submitted: 2
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 2695168 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 2695168 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001614 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 2695168 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 2695168 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 2695168 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 2695168 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 2695168 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001614 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 2695168 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 2695168 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 2695168 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 2695168 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 2695168 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001746 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85352448 unmapped: 2686976 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85352448 unmapped: 2686976 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85352448 unmapped: 2686976 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.701647758s of 15.710140228s, submitted: 2
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85360640 unmapped: 2678784 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85360640 unmapped: 2678784 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1003258 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85360640 unmapped: 2678784 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85360640 unmapped: 2678784 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85360640 unmapped: 2678784 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85360640 unmapped: 2678784 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85360640 unmapped: 2678784 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002667 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85360640 unmapped: 2678784 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85360640 unmapped: 2678784 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85360640 unmapped: 2678784 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85360640 unmapped: 2678784 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 ms_handle_reset con 0x559f2d0e8800 session 0x559f2a9703c0
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85360640 unmapped: 2678784 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002535 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85360640 unmapped: 2678784 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85360640 unmapped: 2678784 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85360640 unmapped: 2678784 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85360640 unmapped: 2678784 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85360640 unmapped: 2678784 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002535 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85360640 unmapped: 2678784 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85360640 unmapped: 2678784 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85360640 unmapped: 2678784 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85360640 unmapped: 2678784 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85368832 unmapped: 2670592 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 21.336950302s of 21.398941040s, submitted: 3
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002667 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85377024 unmapped: 2662400 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85377024 unmapped: 2662400 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85377024 unmapped: 2662400 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85377024 unmapped: 2662400 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85377024 unmapped: 2662400 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004179 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85377024 unmapped: 2662400 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 2646016 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 2646016 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 2646016 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 2646016 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1005100 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 2646016 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 2646016 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 2646016 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 2646016 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 2646016 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.527006149s of 15.556138039s, submitted: 4
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004968 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 2646016 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 2646016 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 2646016 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85401600 unmapped: 2637824 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85401600 unmapped: 2637824 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004968 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85401600 unmapped: 2637824 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85401600 unmapped: 2637824 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85401600 unmapped: 2637824 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85401600 unmapped: 2637824 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85401600 unmapped: 2637824 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004968 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85401600 unmapped: 2637824 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 ms_handle_reset con 0x559f2d5b2000 session 0x559f2cadd2c0
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 ms_handle_reset con 0x559f2d680c00 session 0x559f2d961c20
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85401600 unmapped: 2637824 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85401600 unmapped: 2637824 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85409792 unmapped: 2629632 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85409792 unmapped: 2629632 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004968 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85409792 unmapped: 2629632 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85409792 unmapped: 2629632 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85409792 unmapped: 2629632 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85409792 unmapped: 2629632 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85409792 unmapped: 2629632 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004968 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85409792 unmapped: 2629632 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85409792 unmapped: 2629632 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 22.097640991s of 22.100765228s, submitted: 1
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85409792 unmapped: 2629632 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85409792 unmapped: 2629632 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85409792 unmapped: 2629632 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1005100 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85417984 unmapped: 2621440 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85417984 unmapped: 2621440 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85417984 unmapped: 2621440 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85417984 unmapped: 2621440 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85417984 unmapped: 2621440 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1005100 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85417984 unmapped: 2621440 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85417984 unmapped: 2621440 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85417984 unmapped: 2621440 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85417984 unmapped: 2621440 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.092028618s of 12.143527031s, submitted: 2
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85417984 unmapped: 2621440 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1003918 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85417984 unmapped: 2621440 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85417984 unmapped: 2621440 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85417984 unmapped: 2621440 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85417984 unmapped: 2621440 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85417984 unmapped: 2621440 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1003786 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85417984 unmapped: 2621440 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85417984 unmapped: 2621440 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85426176 unmapped: 2613248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85426176 unmapped: 2613248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85426176 unmapped: 2613248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1003786 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85426176 unmapped: 2613248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85426176 unmapped: 2613248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85426176 unmapped: 2613248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85426176 unmapped: 2613248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85426176 unmapped: 2613248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1003786 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85426176 unmapped: 2613248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85434368 unmapped: 2605056 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85434368 unmapped: 2605056 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 ms_handle_reset con 0x559f2d0e8800 session 0x559f2a9550e0
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 ms_handle_reset con 0x559f2b37e800 session 0x559f2d82fe00
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85434368 unmapped: 2605056 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85434368 unmapped: 2605056 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1003786 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85434368 unmapped: 2605056 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85434368 unmapped: 2605056 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85434368 unmapped: 2605056 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: mgrc ms_handle_reset ms_handle_reset con 0x559f2abaa000
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/3802415056
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/3802415056,v1:192.168.122.100:6801/3802415056]
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: mgrc handle_mgr_configure stats_period=5
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85442560 unmapped: 2596864 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85442560 unmapped: 2596864 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1003786 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85442560 unmapped: 2596864 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85442560 unmapped: 2596864 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85442560 unmapped: 2596864 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85442560 unmapped: 2596864 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 29.943304062s of 30.003890991s, submitted: 2
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85442560 unmapped: 2596864 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1003918 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85442560 unmapped: 2596864 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85442560 unmapped: 2596864 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85442560 unmapped: 2596864 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85442560 unmapped: 2596864 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85442560 unmapped: 2596864 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1005430 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85442560 unmapped: 2596864 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85442560 unmapped: 2596864 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85442560 unmapped: 2596864 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85442560 unmapped: 2596864 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85442560 unmapped: 2596864 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1005430 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85442560 unmapped: 2596864 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85442560 unmapped: 2596864 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85442560 unmapped: 2596864 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85442560 unmapped: 2596864 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85442560 unmapped: 2596864 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1005430 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85442560 unmapped: 2596864 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85442560 unmapped: 2596864 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 17.837564468s of 17.844263077s, submitted: 2
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 ms_handle_reset con 0x559f2d3c4800 session 0x559f2d82e1e0
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 ms_handle_reset con 0x559f2d0e9800 session 0x559f2d82ef00
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85450752 unmapped: 2588672 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85450752 unmapped: 2588672 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85450752 unmapped: 2588672 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1005298 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85450752 unmapped: 2588672 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85450752 unmapped: 2588672 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85450752 unmapped: 2588672 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85450752 unmapped: 2588672 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85450752 unmapped: 2588672 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1005298 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85450752 unmapped: 2588672 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85450752 unmapped: 2588672 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85450752 unmapped: 2588672 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.888109207s of 10.891509056s, submitted: 1
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85450752 unmapped: 2588672 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85450752 unmapped: 2588672 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1005430 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85450752 unmapped: 2588672 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86499328 unmapped: 1540096 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86507520 unmapped: 1531904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86507520 unmapped: 1531904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86507520 unmapped: 1531904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1006942 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86507520 unmapped: 1531904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86507520 unmapped: 1531904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86507520 unmapped: 1531904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86507520 unmapped: 1531904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86507520 unmapped: 1531904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.085538864s of 12.127921104s, submitted: 2
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1006351 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86507520 unmapped: 1531904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86507520 unmapped: 1531904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86507520 unmapped: 1531904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86507520 unmapped: 1531904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86507520 unmapped: 1531904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1006219 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86507520 unmapped: 1531904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86507520 unmapped: 1531904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86507520 unmapped: 1531904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86507520 unmapped: 1531904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86507520 unmapped: 1531904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 ms_handle_reset con 0x559f2d680c00 session 0x559f2d5ef2c0
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 ms_handle_reset con 0x559f2d0e9000 session 0x559f2c5c8b40
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1006219 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86507520 unmapped: 1531904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86507520 unmapped: 1531904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86507520 unmapped: 1531904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86507520 unmapped: 1531904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86507520 unmapped: 1531904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1006219 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86507520 unmapped: 1531904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86507520 unmapped: 1531904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86507520 unmapped: 1531904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86507520 unmapped: 1531904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86507520 unmapped: 1531904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1006219 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86507520 unmapped: 1531904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 21.218805313s of 21.227340698s, submitted: 2
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86507520 unmapped: 1531904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86507520 unmapped: 1531904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86507520 unmapped: 1531904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86507520 unmapped: 1531904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1006351 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86515712 unmapped: 1523712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86515712 unmapped: 1523712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86515712 unmapped: 1523712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86515712 unmapped: 1523712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86515712 unmapped: 1523712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007863 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86515712 unmapped: 1523712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86515712 unmapped: 1523712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86515712 unmapped: 1523712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.160308838s of 12.177426338s, submitted: 2
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86515712 unmapped: 1523712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86515712 unmapped: 1523712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007272 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86515712 unmapped: 1523712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86515712 unmapped: 1523712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86515712 unmapped: 1523712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86515712 unmapped: 1523712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86515712 unmapped: 1523712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007140 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86515712 unmapped: 1523712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86523904 unmapped: 1515520 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 ms_handle_reset con 0x559f2d5b2400 session 0x559f2da1f0e0
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 ms_handle_reset con 0x559f2d5b2000 session 0x559f2a8670e0
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86523904 unmapped: 1515520 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86523904 unmapped: 1515520 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86523904 unmapped: 1515520 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007140 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86523904 unmapped: 1515520 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86523904 unmapped: 1515520 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86532096 unmapped: 1507328 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86532096 unmapped: 1507328 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86532096 unmapped: 1507328 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007140 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86532096 unmapped: 1507328 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86532096 unmapped: 1507328 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86532096 unmapped: 1507328 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 20.320930481s of 20.461774826s, submitted: 2
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86515712 unmapped: 1523712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86515712 unmapped: 1523712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007272 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86515712 unmapped: 1523712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1200.1 total, 600.0 interval#012Cumulative writes: 9009 writes, 35K keys, 9009 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s#012Cumulative WAL: 9009 writes, 1887 syncs, 4.77 writes per sync, written: 0.02 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 764 writes, 1222 keys, 764 commit groups, 1.0 writes per commit group, ingest: 0.41 MB, 0.00 MB/s#012Interval WAL: 764 writes, 362 syncs, 2.11 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x559f28fb7350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.1e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x559f28fb7350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.1e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86548480 unmapped: 1490944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86548480 unmapped: 1490944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86548480 unmapped: 1490944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86548480 unmapped: 1490944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1008784 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86548480 unmapped: 1490944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86548480 unmapped: 1490944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86548480 unmapped: 1490944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.007425308s of 10.107902527s, submitted: 3
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86548480 unmapped: 1490944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86548480 unmapped: 1490944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1009705 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86548480 unmapped: 1490944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86548480 unmapped: 1490944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86556672 unmapped: 1482752 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86556672 unmapped: 1482752 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86556672 unmapped: 1482752 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1009573 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86556672 unmapped: 1482752 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86556672 unmapped: 1482752 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86556672 unmapped: 1482752 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86556672 unmapped: 1482752 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86556672 unmapped: 1482752 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1009573 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86556672 unmapped: 1482752 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86556672 unmapped: 1482752 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86556672 unmapped: 1482752 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86556672 unmapped: 1482752 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86556672 unmapped: 1482752 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1009573 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86556672 unmapped: 1482752 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86564864 unmapped: 1474560 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86564864 unmapped: 1474560 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86564864 unmapped: 1474560 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86564864 unmapped: 1474560 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1009573 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86564864 unmapped: 1474560 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86564864 unmapped: 1474560 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86573056 unmapped: 1466368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86573056 unmapped: 1466368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86573056 unmapped: 1466368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1009573 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86573056 unmapped: 1466368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86573056 unmapped: 1466368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86573056 unmapped: 1466368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86573056 unmapped: 1466368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86573056 unmapped: 1466368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1009573 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86573056 unmapped: 1466368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86573056 unmapped: 1466368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86573056 unmapped: 1466368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 ms_handle_reset con 0x559f2d0e8400 session 0x559f2d70cb40
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 ms_handle_reset con 0x559f2b37f400 session 0x559f2d5ee1e0
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86573056 unmapped: 1466368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86573056 unmapped: 1466368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1009573 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86573056 unmapped: 1466368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86581248 unmapped: 1458176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86581248 unmapped: 1458176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86581248 unmapped: 1458176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86581248 unmapped: 1458176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1009573 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86581248 unmapped: 1458176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86581248 unmapped: 1458176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86581248 unmapped: 1458176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86581248 unmapped: 1458176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 45.391696930s of 45.435684204s, submitted: 2
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86581248 unmapped: 1458176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1009705 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86581248 unmapped: 1458176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86581248 unmapped: 1458176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86581248 unmapped: 1458176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86581248 unmapped: 1458176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86581248 unmapped: 1458176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1011217 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86589440 unmapped: 1449984 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86597632 unmapped: 1441792 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86597632 unmapped: 1441792 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86597632 unmapped: 1441792 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86597632 unmapped: 1441792 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1010035 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86597632 unmapped: 1441792 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86597632 unmapped: 1441792 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.340482712s of 13.399305344s, submitted: 4
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86597632 unmapped: 1441792 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86597632 unmapped: 1441792 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86597632 unmapped: 1441792 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1009903 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86597632 unmapped: 1441792 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86597632 unmapped: 1441792 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86597632 unmapped: 1441792 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86597632 unmapped: 1441792 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86597632 unmapped: 1441792 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1009903 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86597632 unmapped: 1441792 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86597632 unmapped: 1441792 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86597632 unmapped: 1441792 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86597632 unmapped: 1441792 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86597632 unmapped: 1441792 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1009903 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86597632 unmapped: 1441792 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.399309158s of 14.402190208s, submitted: 1
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86597632 unmapped: 1441792 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86638592 unmapped: 1400832 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [0,0,0,0,0,0,4])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86581248 unmapped: 1458176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [0,0,0,0,0,0,1,2])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86614016 unmapped: 1425408 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1009975 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86794240 unmapped: 2293760 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86827008 unmapped: 2260992 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86827008 unmapped: 2260992 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86827008 unmapped: 2260992 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86827008 unmapped: 2260992 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1009903 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86827008 unmapped: 2260992 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86827008 unmapped: 2260992 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86827008 unmapped: 2260992 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86827008 unmapped: 2260992 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86827008 unmapped: 2260992 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1009903 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86827008 unmapped: 2260992 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86827008 unmapped: 2260992 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86827008 unmapped: 2260992 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86827008 unmapped: 2260992 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86827008 unmapped: 2260992 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1009903 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86827008 unmapped: 2260992 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86827008 unmapped: 2260992 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86827008 unmapped: 2260992 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86835200 unmapped: 2252800 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86835200 unmapped: 2252800 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1009903 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86835200 unmapped: 2252800 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86835200 unmapped: 2252800 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86835200 unmapped: 2252800 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86835200 unmapped: 2252800 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86835200 unmapped: 2252800 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1009903 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86835200 unmapped: 2252800 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86835200 unmapped: 2252800 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86835200 unmapped: 2252800 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86835200 unmapped: 2252800 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86835200 unmapped: 2252800 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1009903 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86835200 unmapped: 2252800 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86835200 unmapped: 2252800 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86835200 unmapped: 2252800 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86835200 unmapped: 2252800 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86835200 unmapped: 2252800 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1009903 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86835200 unmapped: 2252800 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86835200 unmapped: 2252800 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86835200 unmapped: 2252800 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86835200 unmapped: 2252800 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86835200 unmapped: 2252800 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1009903 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86835200 unmapped: 2252800 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86835200 unmapped: 2252800 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86835200 unmapped: 2252800 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 ms_handle_reset con 0x559f2b37e800 session 0x559f2dc09680
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 ms_handle_reset con 0x559f2d680c00 session 0x559f2d9612c0
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86835200 unmapped: 2252800 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86835200 unmapped: 2252800 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1009903 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86835200 unmapped: 2252800 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86835200 unmapped: 2252800 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86835200 unmapped: 2252800 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86835200 unmapped: 2252800 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86843392 unmapped: 2244608 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1009903 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86843392 unmapped: 2244608 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.16503 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 ms_handle_reset con 0x559f2d0e8400 session 0x559f2c8afe00
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 ms_handle_reset con 0x559f2b37f400 session 0x559f2d953a40
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86843392 unmapped: 2244608 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86843392 unmapped: 2244608 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 53.254104614s of 57.032154083s, submitted: 332
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86843392 unmapped: 2244608 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86843392 unmapped: 2244608 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1010035 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86843392 unmapped: 2244608 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86843392 unmapped: 2244608 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86843392 unmapped: 2244608 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86843392 unmapped: 2244608 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86843392 unmapped: 2244608 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1010035 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86843392 unmapped: 2244608 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86843392 unmapped: 2244608 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86843392 unmapped: 2244608 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86843392 unmapped: 2244608 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86843392 unmapped: 2244608 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1010167 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86843392 unmapped: 2244608 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86843392 unmapped: 2244608 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86843392 unmapped: 2244608 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86843392 unmapped: 2244608 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86843392 unmapped: 2244608 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.976808548s of 16.986804962s, submitted: 2
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1010035 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86843392 unmapped: 2244608 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86843392 unmapped: 2244608 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 ms_handle_reset con 0x559f2d0e9800 session 0x559f2d70de00
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 ms_handle_reset con 0x559f2d0e8800 session 0x559f2d554960
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86843392 unmapped: 2244608 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86843392 unmapped: 2244608 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86843392 unmapped: 2244608 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1010035 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86843392 unmapped: 2244608 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86843392 unmapped: 2244608 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86843392 unmapped: 2244608 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86851584 unmapped: 2236416 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86851584 unmapped: 2236416 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1009903 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86851584 unmapped: 2236416 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86851584 unmapped: 2236416 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.073468208s of 12.254982948s, submitted: 3
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86851584 unmapped: 2236416 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86851584 unmapped: 2236416 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86851584 unmapped: 2236416 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1010035 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86851584 unmapped: 2236416 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86851584 unmapped: 2236416 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86851584 unmapped: 2236416 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86851584 unmapped: 2236416 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86851584 unmapped: 2236416 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1011547 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86851584 unmapped: 2236416 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86851584 unmapped: 2236416 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86859776 unmapped: 2228224 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86859776 unmapped: 2228224 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86859776 unmapped: 2228224 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1011547 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86859776 unmapped: 2228224 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86859776 unmapped: 2228224 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86859776 unmapped: 2228224 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.275589943s of 15.385351181s, submitted: 2
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86876160 unmapped: 2211840 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86876160 unmapped: 2211840 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1011415 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86876160 unmapped: 2211840 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86876160 unmapped: 2211840 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86876160 unmapped: 2211840 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86876160 unmapped: 2211840 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86876160 unmapped: 2211840 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1011415 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86876160 unmapped: 2211840 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86876160 unmapped: 2211840 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86876160 unmapped: 2211840 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86876160 unmapped: 2211840 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86876160 unmapped: 2211840 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1011415 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86876160 unmapped: 2211840 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86876160 unmapped: 2211840 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86876160 unmapped: 2211840 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86876160 unmapped: 2211840 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86876160 unmapped: 2211840 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1011415 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86876160 unmapped: 2211840 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86876160 unmapped: 2211840 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86876160 unmapped: 2211840 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86876160 unmapped: 2211840 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86876160 unmapped: 2211840 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1011415 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86876160 unmapped: 2211840 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86876160 unmapped: 2211840 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86876160 unmapped: 2211840 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86876160 unmapped: 2211840 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86876160 unmapped: 2211840 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1011415 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86876160 unmapped: 2211840 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86876160 unmapped: 2211840 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 ms_handle_reset con 0x559f2d5b2400 session 0x559f2d82e960
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 ms_handle_reset con 0x559f2d5b2000 session 0x559f2d82ef00
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86876160 unmapped: 2211840 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86876160 unmapped: 2211840 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86876160 unmapped: 2211840 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1011415 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86876160 unmapped: 2211840 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86876160 unmapped: 2211840 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86876160 unmapped: 2211840 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86876160 unmapped: 2211840 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86876160 unmapped: 2211840 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1011415 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86876160 unmapped: 2211840 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86876160 unmapped: 2211840 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86876160 unmapped: 2211840 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 40.478878021s of 40.551963806s, submitted: 1
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86884352 unmapped: 2203648 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86884352 unmapped: 2203648 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1011547 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86884352 unmapped: 2203648 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86884352 unmapped: 2203648 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86884352 unmapped: 2203648 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86892544 unmapped: 2195456 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86900736 unmapped: 2187264 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013059 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86900736 unmapped: 2187264 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86900736 unmapped: 2187264 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86900736 unmapped: 2187264 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86900736 unmapped: 2187264 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86900736 unmapped: 2187264 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.107625008s of 12.130958557s, submitted: 2
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012468 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86900736 unmapped: 2187264 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86900736 unmapped: 2187264 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86900736 unmapped: 2187264 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86900736 unmapped: 2187264 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86900736 unmapped: 2187264 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012336 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86900736 unmapped: 2187264 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86900736 unmapped: 2187264 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86900736 unmapped: 2187264 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86900736 unmapped: 2187264 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86900736 unmapped: 2187264 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012336 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86900736 unmapped: 2187264 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86900736 unmapped: 2187264 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86900736 unmapped: 2187264 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86900736 unmapped: 2187264 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86900736 unmapped: 2187264 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012336 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86900736 unmapped: 2187264 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86900736 unmapped: 2187264 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86900736 unmapped: 2187264 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86900736 unmapped: 2187264 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86900736 unmapped: 2187264 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012336 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86900736 unmapped: 2187264 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86900736 unmapped: 2187264 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86900736 unmapped: 2187264 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 148 handle_osd_map epochs [149,149], i have 148, src has [1,149]
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 23.331020355s of 23.338811874s, submitted: 2
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86908928 unmapped: 2179072 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 149 handle_osd_map epochs [150,150], i have 149, src has [1,150]
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86933504 unmapped: 2154496 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1021781 data_alloc: 218103808 data_used: 167936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 150 handle_osd_map epochs [150,151], i have 150, src has [1,151]
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 151 handle_osd_map epochs [151,151], i have 151, src has [1,151]
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x107e4e/0x1c6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [0,0,0,0,1])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86941696 unmapped: 2146304 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 151 ms_handle_reset con 0x559f2d0e9800 session 0x559f2d952960
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86941696 unmapped: 2146304 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 151 ms_handle_reset con 0x559f2d0e8400 session 0x559f2d5ee1e0
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 151 ms_handle_reset con 0x559f2b37f400 session 0x559f2d5ef2c0
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86974464 unmapped: 18898944 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 151 handle_osd_map epochs [152,152], i have 151, src has [1,152]
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86974464 unmapped: 18898944 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 152 ms_handle_reset con 0x559f2d680c00 session 0x559f2d555680
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 152 heartbeat osd_stat(store_statfs(0x4fbe3e000/0x0/0x4ffc00000, data 0x90c0a4/0x9ce000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86974464 unmapped: 18898944 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1083662 data_alloc: 218103808 data_used: 176128
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86974464 unmapped: 18898944 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86974464 unmapped: 18898944 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 152 handle_osd_map epochs [152,153], i have 152, src has [1,153]
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86982656 unmapped: 18890752 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86982656 unmapped: 18890752 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 153 heartbeat osd_stat(store_statfs(0x4fbe3a000/0x0/0x4ffc00000, data 0x90e076/0x9d1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86982656 unmapped: 18890752 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1087260 data_alloc: 218103808 data_used: 176128
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86982656 unmapped: 18890752 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86982656 unmapped: 18890752 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86982656 unmapped: 18890752 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 153 heartbeat osd_stat(store_statfs(0x4fbe3a000/0x0/0x4ffc00000, data 0x90e076/0x9d1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.067012787s of 14.482573509s, submitted: 64
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86982656 unmapped: 18890752 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86982656 unmapped: 18890752 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1087392 data_alloc: 218103808 data_used: 176128
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86982656 unmapped: 18890752 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86982656 unmapped: 18890752 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86982656 unmapped: 18890752 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 153 heartbeat osd_stat(store_statfs(0x4fbe3b000/0x0/0x4ffc00000, data 0x90e076/0x9d1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86982656 unmapped: 18890752 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 18882560 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1089576 data_alloc: 218103808 data_used: 176128
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 18882560 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 18882560 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 153 heartbeat osd_stat(store_statfs(0x4fbe3b000/0x0/0x4ffc00000, data 0x90e076/0x9d1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 18882560 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 18882560 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 18882560 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.071710587s of 12.114167213s, submitted: 3
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1088985 data_alloc: 218103808 data_used: 176128
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 88039424 unmapped: 17833984 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 88039424 unmapped: 17833984 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 153 heartbeat osd_stat(store_statfs(0x4fbe3b000/0x0/0x4ffc00000, data 0x90e076/0x9d1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 88039424 unmapped: 17833984 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 87007232 unmapped: 18866176 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 87007232 unmapped: 18866176 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1088853 data_alloc: 218103808 data_used: 176128
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 87007232 unmapped: 18866176 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 87007232 unmapped: 18866176 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 87007232 unmapped: 18866176 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 153 heartbeat osd_stat(store_statfs(0x4fbe3b000/0x0/0x4ffc00000, data 0x90e076/0x9d1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 87007232 unmapped: 18866176 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 87007232 unmapped: 18866176 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1088853 data_alloc: 218103808 data_used: 176128
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 87007232 unmapped: 18866176 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 87007232 unmapped: 18866176 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 87015424 unmapped: 18857984 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 153 heartbeat osd_stat(store_statfs(0x4fbe3b000/0x0/0x4ffc00000, data 0x90e076/0x9d1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 87015424 unmapped: 18857984 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 87015424 unmapped: 18857984 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1088853 data_alloc: 218103808 data_used: 176128
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 87015424 unmapped: 18857984 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 87015424 unmapped: 18857984 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 153 ms_handle_reset con 0x559f2d5b2000 session 0x559f2d6370e0
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 153 ms_handle_reset con 0x559f2d5b2400 session 0x559f2dbe8b40
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 153 ms_handle_reset con 0x559f2d3c4800 session 0x559f2c5dfc20
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 87023616 unmapped: 18849792 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 153 ms_handle_reset con 0x559f2d5b2800 session 0x559f2a866000
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 153 heartbeat osd_stat(store_statfs(0x4fbe3b000/0x0/0x4ffc00000, data 0x90e076/0x9d1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 153 ms_handle_reset con 0x559f2d3c4800 session 0x559f2a975680
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 87023616 unmapped: 18849792 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 153 ms_handle_reset con 0x559f2d5b2000 session 0x559f2a95bc20
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 87023616 unmapped: 18849792 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 153 handle_osd_map epochs [153,154], i have 153, src has [1,154]
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 20.177728653s of 20.183889389s, submitted: 2
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1092771 data_alloc: 218103808 data_used: 180224
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 87023616 unmapped: 18849792 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 154 handle_osd_map epochs [154,155], i have 154, src has [1,155]
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 155 ms_handle_reset con 0x559f2d5b2400 session 0x559f2b6512c0
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 155 ms_handle_reset con 0x559f2d680c00 session 0x559f2a9a3e00
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 155 ms_handle_reset con 0x559f2d5b2c00 session 0x559f2d960780
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 155 ms_handle_reset con 0x559f2d3c4800 session 0x559f2d82fa40
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 155 ms_handle_reset con 0x559f2d5b2000 session 0x559f2d554780
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 87269376 unmapped: 18604032 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 87269376 unmapped: 18604032 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 155 heartbeat osd_stat(store_statfs(0x4fb528000/0x0/0x4ffc00000, data 0x121c314/0x12e3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 87277568 unmapped: 18595840 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 87277568 unmapped: 18595840 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 155 ms_handle_reset con 0x559f2d5b2400 session 0x559f2dbe81e0
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1165866 data_alloc: 218103808 data_used: 180224
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 87302144 unmapped: 18571264 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 87318528 unmapped: 18554880 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 155 handle_osd_map epochs [156,156], i have 155, src has [1,156]
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fb528000/0x0/0x4ffc00000, data 0x121c337/0x12e4000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 95657984 unmapped: 10215424 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 97189888 unmapped: 8683520 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 97189888 unmapped: 8683520 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1233556 data_alloc: 234881024 data_used: 9666560
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 97206272 unmapped: 8667136 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fb524000/0x0/0x4ffc00000, data 0x121e309/0x12e7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 97239040 unmapped: 8634368 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 97239040 unmapped: 8634368 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 97239040 unmapped: 8634368 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 97239040 unmapped: 8634368 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1233556 data_alloc: 234881024 data_used: 9666560
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 97239040 unmapped: 8634368 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fb524000/0x0/0x4ffc00000, data 0x121e309/0x12e7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 97239040 unmapped: 8634368 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 17.378948212s of 17.598480225s, submitted: 58
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 97239040 unmapped: 8634368 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 103514112 unmapped: 8765440 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 102342656 unmapped: 9936896 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1346036 data_alloc: 234881024 data_used: 10461184
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 102342656 unmapped: 9936896 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 102498304 unmapped: 9781248 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f95a7000/0x0/0x4ffc00000, data 0x1ffc309/0x20c5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f95a7000/0x0/0x4ffc00000, data 0x1ffc309/0x20c5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 102506496 unmapped: 9773056 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d0e8400 session 0x559f2da1f0e0
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d0e9000 session 0x559f2d555e00
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 102506496 unmapped: 9773056 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 102506496 unmapped: 9773056 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1346036 data_alloc: 234881024 data_used: 10461184
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 102506496 unmapped: 9773056 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 102506496 unmapped: 9773056 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 102506496 unmapped: 9773056 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f95a7000/0x0/0x4ffc00000, data 0x1ffc309/0x20c5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 102547456 unmapped: 9732096 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 102547456 unmapped: 9732096 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1346948 data_alloc: 234881024 data_used: 10530816
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 102547456 unmapped: 9732096 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f95a7000/0x0/0x4ffc00000, data 0x1ffc309/0x20c5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 102547456 unmapped: 9732096 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 102555648 unmapped: 9723904 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 102555648 unmapped: 9723904 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.637916565s of 16.192432404s, submitted: 74
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 102555648 unmapped: 9723904 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1347080 data_alloc: 234881024 data_used: 10530816
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 102555648 unmapped: 9723904 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d0e9000 session 0x559f2a9543c0
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d3c4800 session 0x559f2d953a40
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d5b2000 session 0x559f2d952960
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d5b2400 session 0x559f2d82e960
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d0e9800 session 0x559f2d554b40
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2b37f400 session 0x559f2a954b40
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 102547456 unmapped: 9732096 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f95a7000/0x0/0x4ffc00000, data 0x1ffc309/0x20c5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d5b3000 session 0x559f2d82fe00
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f95a7000/0x0/0x4ffc00000, data 0x1ffc309/0x20c5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2b37f400 session 0x559f2d82ef00
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 102604800 unmapped: 9674752 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d0e9000 session 0x559f2d554960
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d0e9800 session 0x559f2dbe90e0
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d3c4800 session 0x559f2d6370e0
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d3c4800 session 0x559f2c5fc960
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2b37f400 session 0x559f2a9a3e00
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 103178240 unmapped: 9101312 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9409000/0x0/0x4ffc00000, data 0x2199319/0x2263000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 103178240 unmapped: 9101312 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d0e9800 session 0x559f2d82e000
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1367280 data_alloc: 234881024 data_used: 10534912
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 103178240 unmapped: 9101312 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 103178240 unmapped: 9101312 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d5b3000 session 0x559f2c424000
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 103178240 unmapped: 9101312 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d5b2000 session 0x559f2cbf7680
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2b37f400 session 0x559f2d70d2c0
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 102727680 unmapped: 9551872 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 102727680 unmapped: 9551872 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9408000/0x0/0x4ffc00000, data 0x219933c/0x2264000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1378901 data_alloc: 234881024 data_used: 11943936
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 103833600 unmapped: 8445952 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 103833600 unmapped: 8445952 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.715806007s of 13.765681267s, submitted: 16
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 103833600 unmapped: 8445952 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 103833600 unmapped: 8445952 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 103841792 unmapped: 8437760 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9408000/0x0/0x4ffc00000, data 0x219933c/0x2264000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1378853 data_alloc: 234881024 data_used: 11948032
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 103874560 unmapped: 8404992 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 103874560 unmapped: 8404992 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9408000/0x0/0x4ffc00000, data 0x219933c/0x2264000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 103907328 unmapped: 8372224 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 103907328 unmapped: 8372224 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 103907328 unmapped: 8372224 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1379774 data_alloc: 234881024 data_used: 11948032
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 103907328 unmapped: 8372224 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9408000/0x0/0x4ffc00000, data 0x219933c/0x2264000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 103907328 unmapped: 8372224 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.078499794s of 10.066446304s, submitted: 47
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108421120 unmapped: 3858432 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108511232 unmapped: 3768320 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108044288 unmapped: 4235264 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1423758 data_alloc: 234881024 data_used: 13017088
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108044288 unmapped: 4235264 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108044288 unmapped: 4235264 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f8e52000/0x0/0x4ffc00000, data 0x274f33c/0x281a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f8e52000/0x0/0x4ffc00000, data 0x274f33c/0x281a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108044288 unmapped: 4235264 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108044288 unmapped: 4235264 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108044288 unmapped: 4235264 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1422330 data_alloc: 234881024 data_used: 13017088
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108322816 unmapped: 3956736 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108322816 unmapped: 3956736 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f8e31000/0x0/0x4ffc00000, data 0x277033c/0x283b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d0e9800 session 0x559f2a974000
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.945456505s of 10.029915810s, submitted: 20
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 106733568 unmapped: 5545984 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d5b2400 session 0x559f2d8d0960
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 106741760 unmapped: 5537792 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f95a6000/0x0/0x4ffc00000, data 0x1ffc309/0x20c5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 106741760 unmapped: 5537792 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1352840 data_alloc: 234881024 data_used: 10534912
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 106741760 unmapped: 5537792 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f95a6000/0x0/0x4ffc00000, data 0x1ffc309/0x20c5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 106741760 unmapped: 5537792 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 106741760 unmapped: 5537792 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 106741760 unmapped: 5537792 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 106741760 unmapped: 5537792 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1352840 data_alloc: 234881024 data_used: 10534912
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 106741760 unmapped: 5537792 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 106741760 unmapped: 5537792 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d5b2c00 session 0x559f2d555c20
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d680c00 session 0x559f2c36be00
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d680c00 session 0x559f2d5ef4a0
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f95a7000/0x0/0x4ffc00000, data 0x1ffc309/0x20c5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 100761600 unmapped: 11517952 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4faa3c000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 100147200 unmapped: 12132352 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4faa3c000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4faa3c000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 100147200 unmapped: 12132352 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1118844 data_alloc: 218103808 data_used: 180224
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 100147200 unmapped: 12132352 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 100147200 unmapped: 12132352 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 100147200 unmapped: 12132352 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 100147200 unmapped: 12132352 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 100147200 unmapped: 12132352 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4faa3c000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1118844 data_alloc: 218103808 data_used: 180224
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 100147200 unmapped: 12132352 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 100147200 unmapped: 12132352 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 100147200 unmapped: 12132352 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 100147200 unmapped: 12132352 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4faa3c000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 100147200 unmapped: 12132352 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1118844 data_alloc: 218103808 data_used: 180224
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 100147200 unmapped: 12132352 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 100147200 unmapped: 12132352 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 100147200 unmapped: 12132352 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4faa3c000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 100147200 unmapped: 12132352 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 100147200 unmapped: 12132352 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4faa3c000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1118844 data_alloc: 218103808 data_used: 180224
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 100147200 unmapped: 12132352 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 100147200 unmapped: 12132352 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 100147200 unmapped: 12132352 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 100147200 unmapped: 12132352 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4faa3c000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 100147200 unmapped: 12132352 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1118844 data_alloc: 218103808 data_used: 180224
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4faa3c000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 100147200 unmapped: 12132352 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 100147200 unmapped: 12132352 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 100147200 unmapped: 12132352 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2b37f400 session 0x559f2c8b0b40
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d0e9800 session 0x559f2c8b03c0
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d5b2400 session 0x559f2c8b0d20
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d5b2c00 session 0x559f2b2d8b40
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 100147200 unmapped: 12132352 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2b37f400 session 0x559f2b2d8000
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4faa3c000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d0e9800 session 0x559f2a999e00
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 37.161369324s of 37.341365814s, submitted: 63
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 100147200 unmapped: 12132352 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d5b2400 session 0x559f2a9983c0
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d680c00 session 0x559f2a996b40
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d5b3400 session 0x559f2a9974a0
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d5b3400 session 0x559f2a958960
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2b37f400 session 0x559f2a9583c0
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1198144 data_alloc: 218103808 data_used: 180224
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 100499456 unmapped: 26476544 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 100499456 unmapped: 26476544 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa225000/0x0/0x4ffc00000, data 0x1380284/0x1447000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 100499456 unmapped: 26476544 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d0e9800 session 0x559f2a9703c0
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 100499456 unmapped: 26476544 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 100499456 unmapped: 26476544 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d5b2400 session 0x559f2d5ef4a0
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1198144 data_alloc: 218103808 data_used: 180224
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 100499456 unmapped: 26476544 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d680c00 session 0x559f2d5ee5a0
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d680c00 session 0x559f2d5ee1e0
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 100220928 unmapped: 26755072 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 100220928 unmapped: 26755072 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d5b2000 session 0x559f2b2d92c0
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d3c4800 session 0x559f2d82fa40
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa224000/0x0/0x4ffc00000, data 0x1380294/0x1448000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 100253696 unmapped: 26722304 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa224000/0x0/0x4ffc00000, data 0x1380294/0x1448000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 104652800 unmapped: 22323200 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1272627 data_alloc: 234881024 data_used: 10821632
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 104652800 unmapped: 22323200 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 104652800 unmapped: 22323200 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 104652800 unmapped: 22323200 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa224000/0x0/0x4ffc00000, data 0x1380294/0x1448000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 104652800 unmapped: 22323200 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 104652800 unmapped: 22323200 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1272627 data_alloc: 234881024 data_used: 10821632
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 104652800 unmapped: 22323200 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 104652800 unmapped: 22323200 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 104652800 unmapped: 22323200 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 104652800 unmapped: 22323200 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa224000/0x0/0x4ffc00000, data 0x1380294/0x1448000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 19.155124664s of 19.309776306s, submitted: 20
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107364352 unmapped: 19611648 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1299469 data_alloc: 234881024 data_used: 11239424
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 109273088 unmapped: 17702912 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 109273088 unmapped: 17702912 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9ea1000/0x0/0x4ffc00000, data 0x16ed294/0x17b5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 109305856 unmapped: 17670144 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 109305856 unmapped: 17670144 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 109305856 unmapped: 17670144 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1311047 data_alloc: 234881024 data_used: 11096064
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 109305856 unmapped: 17670144 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9e96000/0x0/0x4ffc00000, data 0x170e294/0x17d6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108470272 unmapped: 18505728 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108470272 unmapped: 18505728 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108478464 unmapped: 18497536 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9e96000/0x0/0x4ffc00000, data 0x170e294/0x17d6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108478464 unmapped: 18497536 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1303199 data_alloc: 234881024 data_used: 11096064
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108478464 unmapped: 18497536 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9e96000/0x0/0x4ffc00000, data 0x170e294/0x17d6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108478464 unmapped: 18497536 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.776124001s of 13.241639137s, submitted: 70
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108478464 unmapped: 18497536 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108527616 unmapped: 18448384 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108527616 unmapped: 18448384 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1303147 data_alloc: 234881024 data_used: 11096064
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9e90000/0x0/0x4ffc00000, data 0x1714294/0x17dc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108535808 unmapped: 18440192 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9e90000/0x0/0x4ffc00000, data 0x1714294/0x17dc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108535808 unmapped: 18440192 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108535808 unmapped: 18440192 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9e90000/0x0/0x4ffc00000, data 0x1714294/0x17dc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108544000 unmapped: 18432000 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108552192 unmapped: 18423808 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1303235 data_alloc: 234881024 data_used: 11096064
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108552192 unmapped: 18423808 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9e8d000/0x0/0x4ffc00000, data 0x1717294/0x17df000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108552192 unmapped: 18423808 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108552192 unmapped: 18423808 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9e8d000/0x0/0x4ffc00000, data 0x1717294/0x17df000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108552192 unmapped: 18423808 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9e8d000/0x0/0x4ffc00000, data 0x1717294/0x17df000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108552192 unmapped: 18423808 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.900504112s of 12.918242455s, submitted: 5
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1304083 data_alloc: 234881024 data_used: 11104256
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108666880 unmapped: 18309120 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108666880 unmapped: 18309120 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9e82000/0x0/0x4ffc00000, data 0x1722294/0x17ea000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d0e9800 session 0x559f2c5c8f00
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d5b2400 session 0x559f2cc785a0
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108666880 unmapped: 18309120 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d0e9800 session 0x559f2a996000
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 101646336 unmapped: 25329664 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 101646336 unmapped: 25329664 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1128836 data_alloc: 218103808 data_used: 180224
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 101646336 unmapped: 25329664 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fac92000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 101646336 unmapped: 25329664 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 101646336 unmapped: 25329664 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 101646336 unmapped: 25329664 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 101646336 unmapped: 25329664 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1128836 data_alloc: 218103808 data_used: 180224
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 101646336 unmapped: 25329664 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fac92000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 101646336 unmapped: 25329664 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 101646336 unmapped: 25329664 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 101646336 unmapped: 25329664 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 101646336 unmapped: 25329664 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1128836 data_alloc: 218103808 data_used: 180224
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 101646336 unmapped: 25329664 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fac92000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 101646336 unmapped: 25329664 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 101646336 unmapped: 25329664 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 101646336 unmapped: 25329664 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 101646336 unmapped: 25329664 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1128836 data_alloc: 218103808 data_used: 180224
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 101646336 unmapped: 25329664 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fac92000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 101646336 unmapped: 25329664 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 101646336 unmapped: 25329664 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 101646336 unmapped: 25329664 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 101646336 unmapped: 25329664 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1128836 data_alloc: 218103808 data_used: 180224
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 101646336 unmapped: 25329664 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 101646336 unmapped: 25329664 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fac92000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fac92000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 101646336 unmapped: 25329664 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 101646336 unmapped: 25329664 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fac92000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 101646336 unmapped: 25329664 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fac92000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1128836 data_alloc: 218103808 data_used: 180224
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 101646336 unmapped: 25329664 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 101646336 unmapped: 25329664 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 101646336 unmapped: 25329664 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d3c4800 session 0x559f2da1f860
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d5b2000 session 0x559f2d636f00
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d5b2400 session 0x559f2d5ee3c0
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d680c00 session 0x559f2cc5ed20
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 101646336 unmapped: 25329664 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 33.925148010s of 34.002922058s, submitted: 29
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d680c00 session 0x559f2a9925a0
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d0e9800 session 0x559f2d82e3c0
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d3c4800 session 0x559f2d82fe00
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2f76c000 session 0x559f2c5c9860
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2f76c400 session 0x559f2b2d8000
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 101851136 unmapped: 25124864 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1193305 data_alloc: 218103808 data_used: 180224
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa497000/0x0/0x4ffc00000, data 0x110d2e6/0x11d5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 101851136 unmapped: 25124864 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 101851136 unmapped: 25124864 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 101851136 unmapped: 25124864 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 101851136 unmapped: 25124864 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 101851136 unmapped: 25124864 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195599 data_alloc: 218103808 data_used: 180224
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2f76c400 session 0x559f2a958960
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 101908480 unmapped: 25067520 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa497000/0x0/0x4ffc00000, data 0x110d2e6/0x11d5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [0,0,0,0,0,0,1])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 101916672 unmapped: 25059328 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 103686144 unmapped: 23289856 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 103882752 unmapped: 23093248 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 103882752 unmapped: 23093248 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1244379 data_alloc: 218103808 data_used: 7331840
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 103882752 unmapped: 23093248 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 103882752 unmapped: 23093248 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa496000/0x0/0x4ffc00000, data 0x110d309/0x11d6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 103882752 unmapped: 23093248 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 103882752 unmapped: 23093248 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa496000/0x0/0x4ffc00000, data 0x110d309/0x11d6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 103882752 unmapped: 23093248 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1244379 data_alloc: 218103808 data_used: 7331840
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 103882752 unmapped: 23093248 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 103882752 unmapped: 23093248 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.476533890s of 18.995376587s, submitted: 43
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 106659840 unmapped: 20316160 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa178000/0x0/0x4ffc00000, data 0x142b309/0x14f4000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108077056 unmapped: 18898944 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa100000/0x0/0x4ffc00000, data 0x14a3309/0x156c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 109371392 unmapped: 17604608 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa100000/0x0/0x4ffc00000, data 0x14a3309/0x156c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1282047 data_alloc: 218103808 data_used: 8519680
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 109445120 unmapped: 17530880 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa0f1000/0x0/0x4ffc00000, data 0x14b1309/0x157a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 109445120 unmapped: 17530880 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa0f1000/0x0/0x4ffc00000, data 0x14b1309/0x157a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 109445120 unmapped: 17530880 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 109453312 unmapped: 17522688 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa0f1000/0x0/0x4ffc00000, data 0x14b1309/0x157a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 109453312 unmapped: 17522688 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1282047 data_alloc: 218103808 data_used: 8519680
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 109453312 unmapped: 17522688 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 109453312 unmapped: 17522688 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 109453312 unmapped: 17522688 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 109453312 unmapped: 17522688 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa0f1000/0x0/0x4ffc00000, data 0x14b1309/0x157a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 109461504 unmapped: 17514496 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1282047 data_alloc: 218103808 data_used: 8519680
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa0f1000/0x0/0x4ffc00000, data 0x14b1309/0x157a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 109461504 unmapped: 17514496 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 109461504 unmapped: 17514496 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 109461504 unmapped: 17514496 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 109461504 unmapped: 17514496 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 109461504 unmapped: 17514496 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1282047 data_alloc: 218103808 data_used: 8519680
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 109461504 unmapped: 17514496 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa0f1000/0x0/0x4ffc00000, data 0x14b1309/0x157a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d680c00 session 0x559f2dbe81e0
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2f76c000 session 0x559f2c424b40
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2f76c800 session 0x559f2c5df860
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2f76cc00 session 0x559f2c8b1e00
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.359991074s of 18.809175491s, submitted: 62
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2f76cc00 session 0x559f2c8b1c20
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d680c00 session 0x559f2a9a2960
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2f76c000 session 0x559f2d5ee000
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2f76c400 session 0x559f2d5ee780
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2f76c800 session 0x559f2d5eeb40
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108208128 unmapped: 22970368 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108208128 unmapped: 22970368 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108216320 unmapped: 22962176 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9893000/0x0/0x4ffc00000, data 0x1d0f319/0x1dd9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108216320 unmapped: 22962176 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1344055 data_alloc: 218103808 data_used: 8523776
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108216320 unmapped: 22962176 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108216320 unmapped: 22962176 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d680c00 session 0x559f2c5da1e0
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108216320 unmapped: 22962176 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108404736 unmapped: 22773760 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9893000/0x0/0x4ffc00000, data 0x1d0f319/0x1dd9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 114229248 unmapped: 16949248 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1403516 data_alloc: 234881024 data_used: 15618048
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 114229248 unmapped: 16949248 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9893000/0x0/0x4ffc00000, data 0x1d0f319/0x1dd9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 114245632 unmapped: 16932864 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9893000/0x0/0x4ffc00000, data 0x1d0f319/0x1dd9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 114245632 unmapped: 16932864 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 114245632 unmapped: 16932864 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 114245632 unmapped: 16932864 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.809376717s of 14.030103683s, submitted: 19
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1403852 data_alloc: 234881024 data_used: 15618048
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 114245632 unmapped: 16932864 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 114253824 unmapped: 16924672 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9893000/0x0/0x4ffc00000, data 0x1d0f319/0x1dd9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 114253824 unmapped: 16924672 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 114253824 unmapped: 16924672 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 115367936 unmapped: 15810560 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1431280 data_alloc: 234881024 data_used: 16175104
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117227520 unmapped: 13950976 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 118071296 unmapped: 13107200 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9519000/0x0/0x4ffc00000, data 0x2081319/0x214b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 118104064 unmapped: 13074432 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 118104064 unmapped: 13074432 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 118104064 unmapped: 13074432 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1439394 data_alloc: 234881024 data_used: 16089088
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 118104064 unmapped: 13074432 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 118104064 unmapped: 13074432 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9519000/0x0/0x4ffc00000, data 0x2081319/0x214b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 118145024 unmapped: 13033472 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 118145024 unmapped: 13033472 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9519000/0x0/0x4ffc00000, data 0x2081319/0x214b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 118145024 unmapped: 13033472 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2f76c000 session 0x559f2a866b40
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.375069618s of 14.576653481s, submitted: 66
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2f76c400 session 0x559f2d8d0000
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1286810 data_alloc: 218103808 data_used: 6938624
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2f76d800 session 0x559f2dbe94a0
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 111828992 unmapped: 19349504 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 111828992 unmapped: 19349504 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d0e8400 session 0x559f2c8ae780
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d0e9000 session 0x559f2a997c20
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 111828992 unmapped: 19349504 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa0f2000/0x0/0x4ffc00000, data 0x14b1309/0x157a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 111828992 unmapped: 19349504 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d0e9800 session 0x559f2d9605a0
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d3c4800 session 0x559f2c8b1a40
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 111828992 unmapped: 19349504 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1150044 data_alloc: 218103808 data_used: 184320
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d680c00 session 0x559f2b6512c0
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa91a000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107184128 unmapped: 23994368 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107184128 unmapped: 23994368 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107184128 unmapped: 23994368 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107184128 unmapped: 23994368 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107184128 unmapped: 23994368 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1148764 data_alloc: 218103808 data_used: 180224
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa91a000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107184128 unmapped: 23994368 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107184128 unmapped: 23994368 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107184128 unmapped: 23994368 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.605167389s of 13.440299034s, submitted: 69
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107184128 unmapped: 23994368 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa91a000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107184128 unmapped: 23994368 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa91a000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1148896 data_alloc: 218103808 data_used: 180224
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107184128 unmapped: 23994368 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107184128 unmapped: 23994368 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107184128 unmapped: 23994368 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107184128 unmapped: 23994368 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107184128 unmapped: 23994368 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fac92000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1151336 data_alloc: 218103808 data_used: 180224
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107184128 unmapped: 23994368 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107184128 unmapped: 23994368 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107184128 unmapped: 23994368 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107184128 unmapped: 23994368 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fac92000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107184128 unmapped: 23994368 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1151336 data_alloc: 218103808 data_used: 180224
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107184128 unmapped: 23994368 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fac92000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107184128 unmapped: 23994368 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fac92000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.465369225s of 14.476176262s, submitted: 3
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107184128 unmapped: 23994368 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107184128 unmapped: 23994368 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107184128 unmapped: 23994368 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1151204 data_alloc: 218103808 data_used: 180224
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107184128 unmapped: 23994368 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107184128 unmapped: 23994368 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fac92000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107184128 unmapped: 23994368 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fac92000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107184128 unmapped: 23994368 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fac92000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107184128 unmapped: 23994368 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1151204 data_alloc: 218103808 data_used: 180224
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107184128 unmapped: 23994368 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2f76d800 session 0x559f2cc5e000
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2f76dc00 session 0x559f2d82e780
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2f76dc00 session 0x559f2d0534a0
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d0e9800 session 0x559f2c8afc20
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d3c4800 session 0x559f2d8d0d20
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d680c00 session 0x559f2cbf7680
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 24354816 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 24354816 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa77b000/0x0/0x4ffc00000, data 0xe2a2d6/0xef1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 24354816 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 24354816 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1190883 data_alloc: 218103808 data_used: 180224
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 24354816 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 24354816 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 24354816 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2f76d400 session 0x559f2d8d05a0
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1800.1 total, 600.0 interval#012Cumulative writes: 10K writes, 40K keys, 10K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.02 MB/s#012Cumulative WAL: 10K writes, 2666 syncs, 4.09 writes per sync, written: 0.03 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1892 writes, 5856 keys, 1892 commit groups, 1.0 writes per commit group, ingest: 6.53 MB, 0.01 MB/s#012Interval WAL: 1892 writes, 779 syncs, 2.43 writes per sync, written: 0.01 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa77b000/0x0/0x4ffc00000, data 0xe2a2d6/0xef1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2f76d400 session 0x559f2cadd2c0
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 24354816 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 24354816 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2c59a000 session 0x559f2cc5fc20
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 17.399578094s of 17.499835968s, submitted: 27
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2c59bc00 session 0x559f2d637680
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1192697 data_alloc: 218103808 data_used: 180224
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 106807296 unmapped: 24371200 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 106807296 unmapped: 24371200 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa77a000/0x0/0x4ffc00000, data 0xe2a2e6/0xef2000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108134400 unmapped: 23044096 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa77a000/0x0/0x4ffc00000, data 0xe2a2e6/0xef2000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108167168 unmapped: 23011328 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108167168 unmapped: 23011328 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1228417 data_alloc: 218103808 data_used: 5488640
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108167168 unmapped: 23011328 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108167168 unmapped: 23011328 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108175360 unmapped: 23003136 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa77a000/0x0/0x4ffc00000, data 0xe2a2e6/0xef2000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108208128 unmapped: 22970368 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108208128 unmapped: 22970368 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1228417 data_alloc: 218103808 data_used: 5488640
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108208128 unmapped: 22970368 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108208128 unmapped: 22970368 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.437581062s of 12.444223404s, submitted: 1
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 109305856 unmapped: 21872640 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 109371392 unmapped: 21807104 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa4e0000/0x0/0x4ffc00000, data 0x10be2e6/0x1186000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 109895680 unmapped: 21282816 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa4ca000/0x0/0x4ffc00000, data 0x10cc2e6/0x1194000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1256751 data_alloc: 218103808 data_used: 5910528
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 109895680 unmapped: 21282816 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 109895680 unmapped: 21282816 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa4ca000/0x0/0x4ffc00000, data 0x10cc2e6/0x1194000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 109895680 unmapped: 21282816 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 109895680 unmapped: 21282816 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 109895680 unmapped: 21282816 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1256751 data_alloc: 218103808 data_used: 5910528
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 109903872 unmapped: 21274624 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 109903872 unmapped: 21274624 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa4ca000/0x0/0x4ffc00000, data 0x10cc2e6/0x1194000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 109903872 unmapped: 21274624 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 109903872 unmapped: 21274624 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 109912064 unmapped: 21266432 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1256751 data_alloc: 218103808 data_used: 5910528
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa4ca000/0x0/0x4ffc00000, data 0x10cc2e6/0x1194000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 109912064 unmapped: 21266432 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 109912064 unmapped: 21266432 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 109912064 unmapped: 21266432 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa4ca000/0x0/0x4ffc00000, data 0x10cc2e6/0x1194000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 109912064 unmapped: 21266432 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 109912064 unmapped: 21266432 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa4ca000/0x0/0x4ffc00000, data 0x10cc2e6/0x1194000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1256751 data_alloc: 218103808 data_used: 5910528
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 109912064 unmapped: 21266432 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 109920256 unmapped: 21258240 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 109920256 unmapped: 21258240 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa4ca000/0x0/0x4ffc00000, data 0x10cc2e6/0x1194000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 109920256 unmapped: 21258240 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 109920256 unmapped: 21258240 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1256751 data_alloc: 218103808 data_used: 5910528
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 109920256 unmapped: 21258240 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 109920256 unmapped: 21258240 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa4ca000/0x0/0x4ffc00000, data 0x10cc2e6/0x1194000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 109920256 unmapped: 21258240 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 109920256 unmapped: 21258240 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d680c00 session 0x559f2a9703c0
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2f76d800 session 0x559f2c36ba40
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2c59a000 session 0x559f2c36a1e0
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2c59bc00 session 0x559f2cc5ed20
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 27.291732788s of 27.440547943s, submitted: 53
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 109658112 unmapped: 21520384 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d680c00 session 0x559f2dbe92c0
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1283893 data_alloc: 218103808 data_used: 5914624
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 110706688 unmapped: 20471808 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa1a1000/0x0/0x4ffc00000, data 0x14032e6/0x14cb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 110706688 unmapped: 20471808 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa1a1000/0x0/0x4ffc00000, data 0x14032e6/0x14cb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 110706688 unmapped: 20471808 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 110706688 unmapped: 20471808 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 110706688 unmapped: 20471808 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa1a1000/0x0/0x4ffc00000, data 0x14032e6/0x14cb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284061 data_alloc: 218103808 data_used: 5914624
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 110706688 unmapped: 20471808 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 110706688 unmapped: 20471808 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 111951872 unmapped: 19226624 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa1a1000/0x0/0x4ffc00000, data 0x14032e6/0x14cb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 111951872 unmapped: 19226624 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 111951872 unmapped: 19226624 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1305797 data_alloc: 218103808 data_used: 9158656
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 111951872 unmapped: 19226624 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 111951872 unmapped: 19226624 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 111951872 unmapped: 19226624 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa1a1000/0x0/0x4ffc00000, data 0x14032e6/0x14cb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 111951872 unmapped: 19226624 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 111951872 unmapped: 19226624 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa1a1000/0x0/0x4ffc00000, data 0x14032e6/0x14cb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1305797 data_alloc: 218103808 data_used: 9158656
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 111951872 unmapped: 19226624 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 111951872 unmapped: 19226624 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 17.994756699s of 18.046251297s, submitted: 9
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 112689152 unmapped: 18489344 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 114974720 unmapped: 16203776 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 114745344 unmapped: 16433152 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f973e000/0x0/0x4ffc00000, data 0x1e502e6/0x1f18000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1392039 data_alloc: 234881024 data_used: 9400320
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 114745344 unmapped: 16433152 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 114745344 unmapped: 16433152 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f973e000/0x0/0x4ffc00000, data 0x1e502e6/0x1f18000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 114753536 unmapped: 16424960 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 114753536 unmapped: 16424960 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 114753536 unmapped: 16424960 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1392055 data_alloc: 234881024 data_used: 9400320
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 114753536 unmapped: 16424960 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 114753536 unmapped: 16424960 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 114753536 unmapped: 16424960 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f973e000/0x0/0x4ffc00000, data 0x1e502e6/0x1f18000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 114778112 unmapped: 16400384 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f973e000/0x0/0x4ffc00000, data 0x1e502e6/0x1f18000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 114778112 unmapped: 16400384 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1392055 data_alloc: 234881024 data_used: 9400320
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.862829208s of 13.072974205s, submitted: 92
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113451008 unmapped: 17727488 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2b646c00 session 0x559f2c5fc5a0
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2b647c00 session 0x559f2b6505a0
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113459200 unmapped: 17719296 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9754000/0x0/0x4ffc00000, data 0x1e502e6/0x1f18000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113459200 unmapped: 17719296 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2b37e000 session 0x559f2d953860
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113459200 unmapped: 17719296 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113459200 unmapped: 17719296 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1382143 data_alloc: 234881024 data_used: 9400320
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113459200 unmapped: 17719296 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9754000/0x0/0x4ffc00000, data 0x1e502e6/0x1f18000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113459200 unmapped: 17719296 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113467392 unmapped: 17711104 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113467392 unmapped: 17711104 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9754000/0x0/0x4ffc00000, data 0x1e502e6/0x1f18000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113483776 unmapped: 17694720 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1381879 data_alloc: 234881024 data_used: 9400320
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.620989799s of 10.001231194s, submitted: 134
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113631232 unmapped: 17547264 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9344000/0x0/0x4ffc00000, data 0x1e502e6/0x1f18000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113803264 unmapped: 17375232 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113803264 unmapped: 17375232 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113803264 unmapped: 17375232 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9344000/0x0/0x4ffc00000, data 0x1e502e6/0x1f18000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113811456 unmapped: 17367040 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1382143 data_alloc: 234881024 data_used: 9400320
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113811456 unmapped: 17367040 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113811456 unmapped: 17367040 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113819648 unmapped: 17358848 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113819648 unmapped: 17358848 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113819648 unmapped: 17358848 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1382143 data_alloc: 234881024 data_used: 9400320
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9344000/0x0/0x4ffc00000, data 0x1e502e6/0x1f18000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113819648 unmapped: 17358848 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.531607628s of 10.991118431s, submitted: 201
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113819648 unmapped: 17358848 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113827840 unmapped: 17350656 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9344000/0x0/0x4ffc00000, data 0x1e502e6/0x1f18000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113827840 unmapped: 17350656 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113827840 unmapped: 17350656 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9344000/0x0/0x4ffc00000, data 0x1e502e6/0x1f18000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1382143 data_alloc: 234881024 data_used: 9400320
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113827840 unmapped: 17350656 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113836032 unmapped: 17342464 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113836032 unmapped: 17342464 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113836032 unmapped: 17342464 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113836032 unmapped: 17342464 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1382143 data_alloc: 234881024 data_used: 9400320
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113836032 unmapped: 17342464 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9344000/0x0/0x4ffc00000, data 0x1e502e6/0x1f18000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113836032 unmapped: 17342464 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9344000/0x0/0x4ffc00000, data 0x1e502e6/0x1f18000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113836032 unmapped: 17342464 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113844224 unmapped: 17334272 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9344000/0x0/0x4ffc00000, data 0x1e502e6/0x1f18000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113844224 unmapped: 17334272 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.583388329s of 13.592965126s, submitted: 2
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1384159 data_alloc: 234881024 data_used: 9388032
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113967104 unmapped: 17211392 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113967104 unmapped: 17211392 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113967104 unmapped: 17211392 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113975296 unmapped: 17203200 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9344000/0x0/0x4ffc00000, data 0x1e502e6/0x1f18000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113983488 unmapped: 17195008 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9344000/0x0/0x4ffc00000, data 0x1e502e6/0x1f18000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1384663 data_alloc: 234881024 data_used: 9388032
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113983488 unmapped: 17195008 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113983488 unmapped: 17195008 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2f76d400 session 0x559f2d637860
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 112812032 unmapped: 18366464 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2b37e000 session 0x559f2b2d8b40
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 112156672 unmapped: 19021824 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa0c8000/0x0/0x4ffc00000, data 0x10cc2e6/0x1194000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa0c8000/0x0/0x4ffc00000, data 0x10cc2e6/0x1194000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 112156672 unmapped: 19021824 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1259535 data_alloc: 218103808 data_used: 5898240
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2b37fc00 session 0x559f2d052b40
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 112156672 unmapped: 19021824 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 112156672 unmapped: 19021824 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 112156672 unmapped: 19021824 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.979496956s of 13.032286644s, submitted: 26
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 112156672 unmapped: 19021824 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 112156672 unmapped: 19021824 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1259703 data_alloc: 218103808 data_used: 5898240
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa0c8000/0x0/0x4ffc00000, data 0x10cc2e6/0x1194000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 112156672 unmapped: 19021824 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa0c8000/0x0/0x4ffc00000, data 0x10cc2e6/0x1194000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 112156672 unmapped: 19021824 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d3c4800 session 0x559f2d5321e0
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d0e9800 session 0x559f2c6481e0
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 112156672 unmapped: 19021824 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2c59bc00 session 0x559f2dbe8960
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108273664 unmapped: 22904832 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa486000/0x0/0x4ffc00000, data 0x914284/0x9db000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108273664 unmapped: 22904832 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1166102 data_alloc: 218103808 data_used: 180224
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108273664 unmapped: 22904832 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108273664 unmapped: 22904832 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa487000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108273664 unmapped: 22904832 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108273664 unmapped: 22904832 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108273664 unmapped: 22904832 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa487000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1166102 data_alloc: 218103808 data_used: 180224
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108273664 unmapped: 22904832 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa487000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108273664 unmapped: 22904832 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa487000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108273664 unmapped: 22904832 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d5b3000 session 0x559f2da1c3c0
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2b37e800 session 0x559f2dbe9680
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108273664 unmapped: 22904832 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108273664 unmapped: 22904832 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1166102 data_alloc: 218103808 data_used: 180224
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108273664 unmapped: 22904832 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108273664 unmapped: 22904832 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa487000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108273664 unmapped: 22904832 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108273664 unmapped: 22904832 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108273664 unmapped: 22904832 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1166102 data_alloc: 218103808 data_used: 180224
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108273664 unmapped: 22904832 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108273664 unmapped: 22904832 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa487000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108273664 unmapped: 22904832 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108273664 unmapped: 22904832 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 26.247751236s of 26.306289673s, submitted: 19
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108273664 unmapped: 22904832 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1166234 data_alloc: 218103808 data_used: 180224
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108273664 unmapped: 22904832 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa487000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108273664 unmapped: 22904832 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108273664 unmapped: 22904832 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108281856 unmapped: 22896640 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa487000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108281856 unmapped: 22896640 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1166234 data_alloc: 218103808 data_used: 180224
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa487000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108281856 unmapped: 22896640 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108281856 unmapped: 22896640 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108281856 unmapped: 22896640 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108281856 unmapped: 22896640 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108281856 unmapped: 22896640 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1165942 data_alloc: 218103808 data_used: 180224
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108281856 unmapped: 22896640 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108281856 unmapped: 22896640 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.318322182s of 13.376296997s, submitted: 2
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 110444544 unmapped: 20733952 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d0e9800 session 0x559f2c5df4a0
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108290048 unmapped: 22888448 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa672000/0x0/0x4ffc00000, data 0xb24274/0xbea000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108290048 unmapped: 22888448 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1180354 data_alloc: 218103808 data_used: 180224
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108290048 unmapped: 22888448 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108290048 unmapped: 22888448 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d3c4800 session 0x559f2a999e00
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108290048 unmapped: 22888448 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d5b3000 session 0x559f2c8b14a0
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108290048 unmapped: 22888448 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d680c00 session 0x559f29d55c20
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa672000/0x0/0x4ffc00000, data 0xb24274/0xbea000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2f76d400 session 0x559f2c5c9860
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108281856 unmapped: 22896640 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1180354 data_alloc: 218103808 data_used: 180224
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108281856 unmapped: 22896640 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108281856 unmapped: 22896640 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108281856 unmapped: 22896640 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108281856 unmapped: 22896640 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa672000/0x0/0x4ffc00000, data 0xb24274/0xbea000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d0e9800 session 0x559f2cadc000
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108281856 unmapped: 22896640 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.305717468s of 12.769754410s, submitted: 2
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1166826 data_alloc: 218103808 data_used: 180224
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107290624 unmapped: 23887872 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d3c4800 session 0x559f2cc5e000
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107290624 unmapped: 23887872 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107290624 unmapped: 23887872 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107290624 unmapped: 23887872 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107290624 unmapped: 23887872 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1166826 data_alloc: 218103808 data_used: 180224
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107290624 unmapped: 23887872 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107290624 unmapped: 23887872 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107290624 unmapped: 23887872 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107290624 unmapped: 23887872 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107290624 unmapped: 23887872 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1166826 data_alloc: 218103808 data_used: 180224
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107298816 unmapped: 23879680 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107298816 unmapped: 23879680 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107298816 unmapped: 23879680 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107298816 unmapped: 23879680 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.356574059s of 13.715682030s, submitted: 3
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d5b3000 session 0x559f2a971a40
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d680c00 session 0x559f2dd0ad20
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107896832 unmapped: 30638080 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1237939 data_alloc: 218103808 data_used: 180224
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107896832 unmapped: 30638080 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9ee3000/0x0/0x4ffc00000, data 0x12b22d6/0x1379000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107896832 unmapped: 30638080 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107896832 unmapped: 30638080 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107896832 unmapped: 30638080 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2f76d400 session 0x559f2cc5eb40
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108199936 unmapped: 30334976 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239823 data_alloc: 218103808 data_used: 184320
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108199936 unmapped: 30334976 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9ebf000/0x0/0x4ffc00000, data 0x12d62d6/0x139d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108363776 unmapped: 30171136 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 111017984 unmapped: 27516928 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 111017984 unmapped: 27516928 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: mgrc ms_handle_reset ms_handle_reset con 0x559f2d0e8c00
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/3802415056
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/3802415056,v1:192.168.122.100:6801/3802415056]
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: mgrc handle_mgr_configure stats_period=5
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 111017984 unmapped: 27516928 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1300015 data_alloc: 218103808 data_used: 9142272
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 111017984 unmapped: 27516928 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9ebf000/0x0/0x4ffc00000, data 0x12d62d6/0x139d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 111017984 unmapped: 27516928 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d0e9800 session 0x559f2c36a3c0
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d3c4800 session 0x559f2d82fa40
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.177964211s of 13.560062408s, submitted: 29
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 110714880 unmapped: 27820032 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107118592 unmapped: 31416320 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d5b3000 session 0x559f2c5fc5a0
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107118592 unmapped: 31416320 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1173331 data_alloc: 218103808 data_used: 180224
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa881000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107118592 unmapped: 31416320 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa881000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107118592 unmapped: 31416320 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107118592 unmapped: 31416320 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107118592 unmapped: 31416320 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107118592 unmapped: 31416320 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1173331 data_alloc: 218103808 data_used: 180224
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa881000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107118592 unmapped: 31416320 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107118592 unmapped: 31416320 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107118592 unmapped: 31416320 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107118592 unmapped: 31416320 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107118592 unmapped: 31416320 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa881000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1173331 data_alloc: 218103808 data_used: 180224
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107118592 unmapped: 31416320 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107118592 unmapped: 31416320 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107118592 unmapped: 31416320 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107118592 unmapped: 31416320 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107118592 unmapped: 31416320 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1173331 data_alloc: 218103808 data_used: 180224
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa881000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107118592 unmapped: 31416320 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107118592 unmapped: 31416320 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107118592 unmapped: 31416320 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107118592 unmapped: 31416320 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107118592 unmapped: 31416320 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1173331 data_alloc: 218103808 data_used: 180224
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107118592 unmapped: 31416320 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa881000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107118592 unmapped: 31416320 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107118592 unmapped: 31416320 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107118592 unmapped: 31416320 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107118592 unmapped: 31416320 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1173331 data_alloc: 218103808 data_used: 180224
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107118592 unmapped: 31416320 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107118592 unmapped: 31416320 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa881000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107118592 unmapped: 31416320 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107118592 unmapped: 31416320 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107118592 unmapped: 31416320 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1173331 data_alloc: 218103808 data_used: 180224
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107126784 unmapped: 31408128 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107126784 unmapped: 31408128 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa881000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107126784 unmapped: 31408128 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107126784 unmapped: 31408128 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107126784 unmapped: 31408128 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1173331 data_alloc: 218103808 data_used: 180224
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 36.629310608s of 38.173881531s, submitted: 16
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d680c00 session 0x559f2c8b03c0
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa881000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107159552 unmapped: 31375360 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107159552 unmapped: 31375360 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa4a7000/0x0/0x4ffc00000, data 0xcef274/0xdb5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107159552 unmapped: 31375360 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107159552 unmapped: 31375360 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107159552 unmapped: 31375360 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1205109 data_alloc: 218103808 data_used: 180224
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107159552 unmapped: 31375360 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa4a7000/0x0/0x4ffc00000, data 0xcef274/0xdb5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2f76dc00 session 0x559f2c8ae780
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107159552 unmapped: 31375360 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d0e9800 session 0x559f2d636960
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107159552 unmapped: 31375360 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d3c4800 session 0x559f2a866000
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d5b3000 session 0x559f2a955680
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107159552 unmapped: 31375360 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107159552 unmapped: 31375360 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1205109 data_alloc: 218103808 data_used: 180224
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107323392 unmapped: 31211520 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa4a7000/0x0/0x4ffc00000, data 0xcef274/0xdb5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108249088 unmapped: 30285824 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108249088 unmapped: 30285824 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108249088 unmapped: 30285824 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa4a7000/0x0/0x4ffc00000, data 0xcef274/0xdb5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108249088 unmapped: 30285824 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1232013 data_alloc: 218103808 data_used: 4112384
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108249088 unmapped: 30285824 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108249088 unmapped: 30285824 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108249088 unmapped: 30285824 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa4a7000/0x0/0x4ffc00000, data 0xcef274/0xdb5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108249088 unmapped: 30285824 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108249088 unmapped: 30285824 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1232013 data_alloc: 218103808 data_used: 4112384
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108249088 unmapped: 30285824 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 20.716075897s of 20.768712997s, submitted: 10
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 116350976 unmapped: 22183936 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa4a7000/0x0/0x4ffc00000, data 0xcef274/0xdb5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 114319360 unmapped: 24215552 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113745920 unmapped: 24788992 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9bff000/0x0/0x4ffc00000, data 0x158f274/0x1655000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113745920 unmapped: 24788992 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9bf5000/0x0/0x4ffc00000, data 0x1599274/0x165f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1304949 data_alloc: 218103808 data_used: 5197824
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113745920 unmapped: 24788992 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113745920 unmapped: 24788992 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113745920 unmapped: 24788992 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113745920 unmapped: 24788992 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9bf5000/0x0/0x4ffc00000, data 0x1599274/0x165f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9bf5000/0x0/0x4ffc00000, data 0x1599274/0x165f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113762304 unmapped: 24772608 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1305101 data_alloc: 218103808 data_used: 5201920
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113762304 unmapped: 24772608 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9bf5000/0x0/0x4ffc00000, data 0x1599274/0x165f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113762304 unmapped: 24772608 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113762304 unmapped: 24772608 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113762304 unmapped: 24772608 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113762304 unmapped: 24772608 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d680c00 session 0x559f2a9774a0
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1305101 data_alloc: 218103808 data_used: 5201920
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.963165283s of 14.391463280s, submitted: 83
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 111132672 unmapped: 27402240 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2c8c1c00 session 0x559f2d8d10e0
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 111132672 unmapped: 27402240 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 111132672 unmapped: 27402240 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 111132672 unmapped: 27402240 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 111132672 unmapped: 27402240 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1178119 data_alloc: 218103808 data_used: 180224
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 111132672 unmapped: 27402240 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 111132672 unmapped: 27402240 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 111132672 unmapped: 27402240 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 111132672 unmapped: 27402240 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 111132672 unmapped: 27402240 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1178119 data_alloc: 218103808 data_used: 180224
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 111132672 unmapped: 27402240 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 111132672 unmapped: 27402240 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 111132672 unmapped: 27402240 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 111132672 unmapped: 27402240 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 111132672 unmapped: 27402240 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1178119 data_alloc: 218103808 data_used: 180224
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 111132672 unmapped: 27402240 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 111132672 unmapped: 27402240 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 111132672 unmapped: 27402240 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 111132672 unmapped: 27402240 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 111132672 unmapped: 27402240 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1178119 data_alloc: 218103808 data_used: 180224
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 111132672 unmapped: 27402240 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 111132672 unmapped: 27402240 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 111132672 unmapped: 27402240 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 22.524868011s of 22.775295258s, submitted: 9
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2c8c1c00 session 0x559f2d053a40
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa46b000/0x0/0x4ffc00000, data 0xd2b274/0xdf1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 111312896 unmapped: 27222016 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa46b000/0x0/0x4ffc00000, data 0xd2b274/0xdf1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 111312896 unmapped: 27222016 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1251947 data_alloc: 218103808 data_used: 180224
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9f17000/0x0/0x4ffc00000, data 0x127f274/0x1345000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 110100480 unmapped: 28434432 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9f17000/0x0/0x4ffc00000, data 0x127f274/0x1345000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9f17000/0x0/0x4ffc00000, data 0x127f274/0x1345000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 110100480 unmapped: 28434432 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 110100480 unmapped: 28434432 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 110100480 unmapped: 28434432 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 110100480 unmapped: 28434432 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1251947 data_alloc: 218103808 data_used: 180224
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 110100480 unmapped: 28434432 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d0e9800 session 0x559f2d82f0e0
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 110247936 unmapped: 28286976 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9f17000/0x0/0x4ffc00000, data 0x127f274/0x1345000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 110247936 unmapped: 28286976 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9ef2000/0x0/0x4ffc00000, data 0x12a3297/0x136a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 110247936 unmapped: 28286976 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 112893952 unmapped: 25640960 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1314620 data_alloc: 218103808 data_used: 8757248
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 112893952 unmapped: 25640960 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 112893952 unmapped: 25640960 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 112893952 unmapped: 25640960 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9ef2000/0x0/0x4ffc00000, data 0x12a3297/0x136a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 112893952 unmapped: 25640960 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 112893952 unmapped: 25640960 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1314620 data_alloc: 218103808 data_used: 8757248
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 112902144 unmapped: 25632768 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 112902144 unmapped: 25632768 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 112902144 unmapped: 25632768 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9ef2000/0x0/0x4ffc00000, data 0x12a3297/0x136a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 112934912 unmapped: 25600000 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 20.691644669s of 20.797815323s, submitted: 21
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9866000/0x0/0x4ffc00000, data 0x1927297/0x19ee000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d680c00 session 0x559f2caddc20
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 120578048 unmapped: 17956864 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1409342 data_alloc: 234881024 data_used: 10747904
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 118767616 unmapped: 19767296 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 118767616 unmapped: 19767296 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 118767616 unmapped: 19767296 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d5b0c00 session 0x559f2a954b40
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 118784000 unmapped: 19750912 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d6ca800 session 0x559f2da1f860
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f95d6000/0x0/0x4ffc00000, data 0x1bbe297/0x1c85000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2c8c1c00 session 0x559f2c8b10e0
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 118800384 unmapped: 19734528 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d0e9800 session 0x559f2d5efe00
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1410709 data_alloc: 234881024 data_used: 10760192
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 118808576 unmapped: 19726336 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 118996992 unmapped: 19537920 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 120479744 unmapped: 18055168 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 120479744 unmapped: 18055168 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f95d6000/0x0/0x4ffc00000, data 0x1bbe2a7/0x1c86000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 120479744 unmapped: 18055168 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1426209 data_alloc: 234881024 data_used: 12935168
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 120479744 unmapped: 18055168 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f95d6000/0x0/0x4ffc00000, data 0x1bbe2a7/0x1c86000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 120479744 unmapped: 18055168 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f95d6000/0x0/0x4ffc00000, data 0x1bbe2a7/0x1c86000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 120512512 unmapped: 18022400 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 120512512 unmapped: 18022400 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 120512512 unmapped: 18022400 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1426209 data_alloc: 234881024 data_used: 12935168
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 120512512 unmapped: 18022400 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 120545280 unmapped: 17989632 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.309732437s of 18.578636169s, submitted: 92
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 124076032 unmapped: 14458880 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f8e3f000/0x0/0x4ffc00000, data 0x234f2a7/0x2417000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 124592128 unmapped: 13942784 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 124731392 unmapped: 13803520 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1498987 data_alloc: 234881024 data_used: 13889536
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 124731392 unmapped: 13803520 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 124731392 unmapped: 13803520 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 124731392 unmapped: 13803520 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 124731392 unmapped: 13803520 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f8e23000/0x0/0x4ffc00000, data 0x23682a7/0x2430000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f8e23000/0x0/0x4ffc00000, data 0x23682a7/0x2430000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 124731392 unmapped: 13803520 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1498987 data_alloc: 234881024 data_used: 13889536
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 124731392 unmapped: 13803520 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 124731392 unmapped: 13803520 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 124747776 unmapped: 13787136 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 124747776 unmapped: 13787136 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f8e23000/0x0/0x4ffc00000, data 0x23682a7/0x2430000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 124747776 unmapped: 13787136 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1500203 data_alloc: 234881024 data_used: 13967360
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.244200706s of 13.420284271s, submitted: 77
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 124747776 unmapped: 13787136 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 124747776 unmapped: 13787136 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d5b0c00 session 0x559f2a976b40
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d680c00 session 0x559f2d5321e0
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 124084224 unmapped: 14450688 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f8e2c000/0x0/0x4ffc00000, data 0x23682a7/0x2430000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [0,1])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d608400 session 0x559f2d8d0d20
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 124084224 unmapped: 14450688 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 124084224 unmapped: 14450688 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1387580 data_alloc: 234881024 data_used: 10768384
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 124084224 unmapped: 14450688 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 124084224 unmapped: 14450688 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d3c4800 session 0x559f2b6505a0
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d5b3000 session 0x559f2d8d0f00
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d608400 session 0x559f2c6481e0
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f98fd000/0x0/0x4ffc00000, data 0x1898297/0x195f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117194752 unmapped: 21340160 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117194752 unmapped: 21340160 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117194752 unmapped: 21340160 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1197050 data_alloc: 218103808 data_used: 180224
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117194752 unmapped: 21340160 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117194752 unmapped: 21340160 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117194752 unmapped: 21340160 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa85d000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117194752 unmapped: 21340160 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117194752 unmapped: 21340160 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1197050 data_alloc: 218103808 data_used: 180224
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117194752 unmapped: 21340160 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117194752 unmapped: 21340160 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117194752 unmapped: 21340160 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa85d000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117194752 unmapped: 21340160 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117194752 unmapped: 21340160 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1197050 data_alloc: 218103808 data_used: 180224
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117194752 unmapped: 21340160 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa85d000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117194752 unmapped: 21340160 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117194752 unmapped: 21340160 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117194752 unmapped: 21340160 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117194752 unmapped: 21340160 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1197050 data_alloc: 218103808 data_used: 180224
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117194752 unmapped: 21340160 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa85d000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117194752 unmapped: 21340160 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117194752 unmapped: 21340160 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa85d000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117194752 unmapped: 21340160 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2c8c1c00 session 0x559f2dab5680
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d0e9800 session 0x559f2b2d90e0
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2c8c1c00 session 0x559f2d636780
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d3c4800 session 0x559f2d5eeb40
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 28.304420471s of 28.535713196s, submitted: 47
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d5b3000 session 0x559f2a958f00
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d608400 session 0x559f2a9990e0
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d5b0c00 session 0x559f2cbf61e0
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2c8c1c00 session 0x559f2d8d1c20
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d3c4800 session 0x559f2a9961e0
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117112832 unmapped: 21422080 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa6ed000/0x0/0x4ffc00000, data 0xaa8284/0xb6f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1215353 data_alloc: 218103808 data_used: 180224
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa6ed000/0x0/0x4ffc00000, data 0xaa8284/0xb6f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117112832 unmapped: 21422080 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117112832 unmapped: 21422080 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa6ed000/0x0/0x4ffc00000, data 0xaa8284/0xb6f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117112832 unmapped: 21422080 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d5b3000 session 0x559f2d9530e0
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117129216 unmapped: 21405696 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa6ed000/0x0/0x4ffc00000, data 0xaa8284/0xb6f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d608400 session 0x559f2d952960
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117129216 unmapped: 21405696 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa6ed000/0x0/0x4ffc00000, data 0xaa8284/0xb6f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1215353 data_alloc: 218103808 data_used: 180224
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d680c00 session 0x559f2d953680
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2c8c1c00 session 0x559f2d952d20
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa6ed000/0x0/0x4ffc00000, data 0xaa8284/0xb6f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117129216 unmapped: 21405696 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117129216 unmapped: 21405696 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117129216 unmapped: 21405696 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117129216 unmapped: 21405696 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa6ed000/0x0/0x4ffc00000, data 0xaa8284/0xb6f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117129216 unmapped: 21405696 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1219133 data_alloc: 218103808 data_used: 704512
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117129216 unmapped: 21405696 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa6ed000/0x0/0x4ffc00000, data 0xaa8284/0xb6f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117129216 unmapped: 21405696 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa6ed000/0x0/0x4ffc00000, data 0xaa8284/0xb6f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117129216 unmapped: 21405696 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117129216 unmapped: 21405696 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117129216 unmapped: 21405696 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1219133 data_alloc: 218103808 data_used: 704512
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117137408 unmapped: 21397504 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa6ed000/0x0/0x4ffc00000, data 0xaa8284/0xb6f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117137408 unmapped: 21397504 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.240032196s of 18.298688889s, submitted: 18
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119087104 unmapped: 19447808 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa113000/0x0/0x4ffc00000, data 0x1082284/0x1149000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 118333440 unmapped: 20201472 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 118333440 unmapped: 20201472 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1266339 data_alloc: 218103808 data_used: 815104
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa10a000/0x0/0x4ffc00000, data 0x108a284/0x1151000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 118333440 unmapped: 20201472 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 118333440 unmapped: 20201472 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 118333440 unmapped: 20201472 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 118333440 unmapped: 20201472 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 118333440 unmapped: 20201472 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1266339 data_alloc: 218103808 data_used: 815104
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 118333440 unmapped: 20201472 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa10a000/0x0/0x4ffc00000, data 0x108a284/0x1151000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 118333440 unmapped: 20201472 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa10a000/0x0/0x4ffc00000, data 0x108a284/0x1151000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 118333440 unmapped: 20201472 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 118333440 unmapped: 20201472 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa10a000/0x0/0x4ffc00000, data 0x108a284/0x1151000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 118333440 unmapped: 20201472 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.515064240s of 12.640249252s, submitted: 32
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d3c4800 session 0x559f2d5ee960
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265651 data_alloc: 218103808 data_used: 815104
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d5b3000 session 0x559f2d6372c0
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117653504 unmapped: 20881408 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117653504 unmapped: 20881408 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117653504 unmapped: 20881408 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117653504 unmapped: 20881408 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117653504 unmapped: 20881408 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117653504 unmapped: 20881408 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117653504 unmapped: 20881408 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117653504 unmapped: 20881408 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117653504 unmapped: 20881408 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117653504 unmapped: 20881408 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117653504 unmapped: 20881408 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117653504 unmapped: 20881408 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117653504 unmapped: 20881408 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117653504 unmapped: 20881408 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117653504 unmapped: 20881408 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117653504 unmapped: 20881408 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117653504 unmapped: 20881408 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117653504 unmapped: 20881408 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117653504 unmapped: 20881408 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117653504 unmapped: 20881408 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117653504 unmapped: 20881408 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117653504 unmapped: 20881408 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117653504 unmapped: 20881408 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117653504 unmapped: 20881408 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117653504 unmapped: 20881408 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117653504 unmapped: 20881408 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117653504 unmapped: 20881408 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117653504 unmapped: 20881408 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117653504 unmapped: 20881408 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117661696 unmapped: 20873216 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117661696 unmapped: 20873216 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117661696 unmapped: 20873216 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117661696 unmapped: 20873216 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117661696 unmapped: 20873216 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117661696 unmapped: 20873216 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117661696 unmapped: 20873216 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117661696 unmapped: 20873216 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117661696 unmapped: 20873216 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117661696 unmapped: 20873216 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117661696 unmapped: 20873216 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117669888 unmapped: 20865024 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117669888 unmapped: 20865024 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117669888 unmapped: 20865024 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117669888 unmapped: 20865024 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117669888 unmapped: 20865024 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117669888 unmapped: 20865024 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117669888 unmapped: 20865024 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117669888 unmapped: 20865024 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117669888 unmapped: 20865024 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117669888 unmapped: 20865024 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117669888 unmapped: 20865024 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117669888 unmapped: 20865024 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117669888 unmapped: 20865024 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117669888 unmapped: 20865024 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117669888 unmapped: 20865024 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117669888 unmapped: 20865024 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117678080 unmapped: 20856832 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117678080 unmapped: 20856832 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117678080 unmapped: 20856832 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117678080 unmapped: 20856832 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117678080 unmapped: 20856832 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117678080 unmapped: 20856832 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117678080 unmapped: 20856832 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117678080 unmapped: 20856832 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117678080 unmapped: 20856832 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117678080 unmapped: 20856832 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117678080 unmapped: 20856832 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117678080 unmapped: 20856832 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117678080 unmapped: 20856832 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117678080 unmapped: 20856832 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117678080 unmapped: 20856832 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117833728 unmapped: 20701184 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: do_command 'config diff' '{prefix=config diff}'
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: do_command 'config show' '{prefix=config show}'
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: do_command 'counter dump' '{prefix=counter dump}'
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: do_command 'counter schema' '{prefix=counter schema}'
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117456896 unmapped: 21078016 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117882880 unmapped: 20652032 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:22:35 np0005475493 ceph-osd[81751]: do_command 'log dump' '{prefix=log dump}'
Oct  8 06:22:35 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:22:35] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Oct  8 06:22:35 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:22:35] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Oct  8 06:22:35 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.26411 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Oct  8 06:22:35 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Oct  8 06:22:35 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1471155663' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct  8 06:22:35 np0005475493 nova_compute[262220]: 2025-10-08 10:22:35.868 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:22:35 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.26176 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct  8 06:22:35 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.26423 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Oct  8 06:22:36 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.26432 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Oct  8 06:22:36 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0)
Oct  8 06:22:36 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1274900231' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct  8 06:22:36 np0005475493 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  8 06:22:36 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.26197 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Oct  8 06:22:36 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:22:36 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:22:36 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:22:36.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:22:36 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.16527 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Oct  8 06:22:36 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.26447 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Oct  8 06:22:36 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr versions"} v 0)
Oct  8 06:22:36 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2389033116' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct  8 06:22:36 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.26218 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Oct  8 06:22:36 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.16551 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Oct  8 06:22:37 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.26468 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Oct  8 06:22:37 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon stat"} v 0)
Oct  8 06:22:37 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3592158987' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Oct  8 06:22:37 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:22:37.208Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct  8 06:22:37 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:22:37.209Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:22:37 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:22:37 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:22:37 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:22:37.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:22:37 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.16566 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Oct  8 06:22:37 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.26242 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Oct  8 06:22:37 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.26483 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct  8 06:22:37 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1142: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  8 06:22:37 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.16581 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct  8 06:22:37 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.26263 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Oct  8 06:22:37 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.26266 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct  8 06:22:38 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.16602 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct  8 06:22:38 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "node ls"} v 0)
Oct  8 06:22:38 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1579170724' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Oct  8 06:22:38 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "node ls"} v 0)
Oct  8 06:22:38 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3820414779' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Oct  8 06:22:38 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.26287 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Oct  8 06:22:38 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.26528 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct  8 06:22:38 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:22:38 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:22:38 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:22:38.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:22:38 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.16614 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct  8 06:22:38 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush class ls"} v 0)
Oct  8 06:22:38 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2159410719' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Oct  8 06:22:38 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.26293 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct  8 06:22:38 np0005475493 nova_compute[262220]: 2025-10-08 10:22:38.604 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:22:38 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.26543 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct  8 06:22:38 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.16626 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct  8 06:22:38 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:22:38.860Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:22:38 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.26314 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct  8 06:22:39 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:22:38 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  8 06:22:39 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:22:38 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  8 06:22:39 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:22:38 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 06:22:39 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:22:39 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  8 06:22:39 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush rule ls"} v 0)
Oct  8 06:22:39 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/93720885' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Oct  8 06:22:39 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:22:39 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:22:39 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:22:39.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:22:39 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0)
Oct  8 06:22:39 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4096862975' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Oct  8 06:22:39 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:22:39 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.26332 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct  8 06:22:39 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1143: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  8 06:22:39 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0)
Oct  8 06:22:39 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/323351107' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Oct  8 06:22:39 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0)
Oct  8 06:22:39 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3237813266' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Oct  8 06:22:39 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.26344 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct  8 06:22:40 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0)
Oct  8 06:22:40 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1678632811' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Oct  8 06:22:40 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0)
Oct  8 06:22:40 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2836056212' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Oct  8 06:22:40 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:22:40 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:22:40 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:22:40.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:22:40 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd erasure-code-profile ls"} v 0)
Oct  8 06:22:40 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3893590644' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Oct  8 06:22:40 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0)
Oct  8 06:22:40 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/375156075' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Oct  8 06:22:40 np0005475493 nova_compute[262220]: 2025-10-08 10:22:40.870 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:22:40 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata"} v 0)
Oct  8 06:22:40 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/769709891' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Oct  8 06:22:41 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services", "format": "json-pretty"} v 0)
Oct  8 06:22:41 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4012908329' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Oct  8 06:22:41 np0005475493 systemd[1]: Starting Hostname Service...
Oct  8 06:22:41 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:22:41 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:22:41 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:22:41.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:22:41 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd utilization"} v 0)
Oct  8 06:22:41 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/14823960' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Oct  8 06:22:41 np0005475493 systemd[1]: Started Hostname Service.
Oct  8 06:22:41 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1144: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  8 06:22:41 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr stat", "format": "json-pretty"} v 0)
Oct  8 06:22:41 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/722090368' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Oct  8 06:22:41 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.16734 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Oct  8 06:22:41 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.26428 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Oct  8 06:22:42 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr versions", "format": "json-pretty"} v 0)
Oct  8 06:22:42 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/227637344' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Oct  8 06:22:42 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.26711 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct  8 06:22:42 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.16758 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Oct  8 06:22:42 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.26717 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Oct  8 06:22:42 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:22:42 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:22:42 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:22:42.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:22:42 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.16764 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct  8 06:22:42 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.26726 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct  8 06:22:42 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.16782 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct  8 06:22:42 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.26744 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct  8 06:22:42 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.26476 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Oct  8 06:22:43 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "quorum_status"} v 0)
Oct  8 06:22:43 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3205486850' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Oct  8 06:22:43 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.16800 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct  8 06:22:43 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:22:43 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:22:43 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:22:43.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:22:43 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.26762 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct  8 06:22:43 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.26497 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Oct  8 06:22:43 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1145: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  8 06:22:43 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.26506 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct  8 06:22:43 np0005475493 nova_compute[262220]: 2025-10-08 10:22:43.606 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:22:43 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "versions"} v 0)
Oct  8 06:22:43 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3543490241' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Oct  8 06:22:43 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.16812 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct  8 06:22:43 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.26783 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct  8 06:22:43 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.26524 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct  8 06:22:44 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:22:43 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  8 06:22:44 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:22:43 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  8 06:22:44 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:22:43 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 06:22:44 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:22:44 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  8 06:22:44 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.16827 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct  8 06:22:44 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "health", "detail": "detail", "format": "json-pretty"} v 0)
Oct  8 06:22:44 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3027080278' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct  8 06:22:44 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.26801 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct  8 06:22:44 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:22:44 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:22:44 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:22:44.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:22:44 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.26542 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct  8 06:22:44 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:22:44 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "format": "json-pretty"} v 0)
Oct  8 06:22:44 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3274130406' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Oct  8 06:22:44 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.16839 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct  8 06:22:44 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct  8 06:22:44 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct  8 06:22:44 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.26819 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct  8 06:22:44 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct  8 06:22:44 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct  8 06:22:44 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.26566 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct  8 06:22:44 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.16875 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct  8 06:22:44 np0005475493 podman[289083]: 2025-10-08 10:22:44.91170518 +0000 UTC m=+0.062690103 container health_status 2ac58bf9d2c7421f24478941b0f23af12cc30298ac84467db16bf52ecb1157c3 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=iscsid, tcib_managed=true, container_name=iscsid)
Oct  8 06:22:45 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.26602 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct  8 06:22:45 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:22:45 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:22:45 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:22:45.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:22:45 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config dump"} v 0)
Oct  8 06:22:45 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2585883534' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Oct  8 06:22:45 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1146: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  8 06:22:45 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.26620 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct  8 06:22:45 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.16908 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct  8 06:22:45 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:22:45] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Oct  8 06:22:45 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:22:45] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Oct  8 06:22:45 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct  8 06:22:45 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct  8 06:22:45 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.26882 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct  8 06:22:45 np0005475493 nova_compute[262220]: 2025-10-08 10:22:45.871 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:22:46 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.26635 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct  8 06:22:46 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "detail": "detail"} v 0)
Oct  8 06:22:46 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1797094569' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Oct  8 06:22:46 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:22:46 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:22:46 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:22:46.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:22:46 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df"} v 0)
Oct  8 06:22:46 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/573220540' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Oct  8 06:22:46 np0005475493 nova_compute[262220]: 2025-10-08 10:22:46.882 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:22:46 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "fs dump"} v 0)
Oct  8 06:22:46 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1231813826' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Oct  8 06:22:46 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.26674 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct  8 06:22:47 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:22:47.210Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:22:47 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:22:47 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:22:47 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:22:47.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:22:47 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1147: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  8 06:22:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] Optimize plan auto_2025-10-08_10:22:47
Oct  8 06:22:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  8 06:22:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] do_upmap
Oct  8 06:22:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] pools ['.mgr', 'cephfs.cephfs.data', '.nfs', '.rgw.root', 'backups', 'volumes', 'vms', 'cephfs.cephfs.meta', 'images', 'default.rgw.log', 'default.rgw.control', 'default.rgw.meta']
Oct  8 06:22:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] prepared 0/10 upmap changes
Oct  8 06:22:47 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.16956 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Oct  8 06:22:47 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  8 06:22:47 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  8 06:22:47 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 06:22:47 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 06:22:48 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 06:22:48 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 06:22:48 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 06:22:48 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 06:22:48 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.26951 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Oct  8 06:22:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] _maybe_adjust
Oct  8 06:22:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:22:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  8 06:22:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:22:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  8 06:22:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:22:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 06:22:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:22:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 06:22:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:22:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Oct  8 06:22:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:22:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  8 06:22:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:22:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 06:22:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:22:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct  8 06:22:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:22:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  8 06:22:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:22:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 06:22:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:22:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  8 06:22:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:22:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  8 06:22:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  8 06:22:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  8 06:22:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  8 06:22:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  8 06:22:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  8 06:22:48 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds stat"} v 0)
Oct  8 06:22:48 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2053191720' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Oct  8 06:22:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  8 06:22:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  8 06:22:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  8 06:22:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  8 06:22:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  8 06:22:48 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:22:48 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:22:48 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:22:48.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:22:48 np0005475493 nova_compute[262220]: 2025-10-08 10:22:48.608 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:22:48 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump"} v 0)
Oct  8 06:22:48 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2804644638' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Oct  8 06:22:48 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:22:48.860Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct  8 06:22:48 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:22:48.861Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:22:49 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:22:48 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  8 06:22:49 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:22:48 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  8 06:22:49 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:22:48 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 06:22:49 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:22:49 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  8 06:22:49 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.16977 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Oct  8 06:22:49 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:22:49 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:22:49 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:22:49.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:22:49 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.26719 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Oct  8 06:22:49 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:22:49 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1148: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  8 06:22:49 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls"} v 0)
Oct  8 06:22:49 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3418491503' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Oct  8 06:22:49 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.26984 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Oct  8 06:22:49 np0005475493 nova_compute[262220]: 2025-10-08 10:22:49.886 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:22:49 np0005475493 nova_compute[262220]: 2025-10-08 10:22:49.886 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:22:49 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.16998 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Oct  8 06:22:50 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.17004 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Oct  8 06:22:50 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:22:50 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 06:22:50 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:22:50.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 06:22:50 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.27005 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Oct  8 06:22:50 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd dump"} v 0)
Oct  8 06:22:50 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3873220482' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Oct  8 06:22:50 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.26752 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Oct  8 06:22:50 np0005475493 nova_compute[262220]: 2025-10-08 10:22:50.871 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:22:50 np0005475493 nova_compute[262220]: 2025-10-08 10:22:50.885 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:22:50 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.27014 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Oct  8 06:22:51 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd numa-status"} v 0)
Oct  8 06:22:51 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1876032476' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Oct  8 06:22:51 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:22:51 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:22:51 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:22:51.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:22:51 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.17028 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Oct  8 06:22:51 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1149: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  8 06:22:51 np0005475493 ovs-appctl[290574]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Oct  8 06:22:51 np0005475493 ovs-appctl[290581]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Oct  8 06:22:51 np0005475493 ovs-appctl[290588]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Oct  8 06:22:51 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.26773 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Oct  8 06:22:51 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.17037 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Oct  8 06:22:51 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:22:51 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  8 06:22:51 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:22:51 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  8 06:22:51 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:22:51 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 06:22:51 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:22:51 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 06:22:51 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:22:51 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Oct  8 06:22:51 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:22:51 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  8 06:22:51 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:22:51 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 06:22:51 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:22:51 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct  8 06:22:51 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:22:51 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  8 06:22:51 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:22:51 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 06:22:51 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:22:51 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  8 06:22:51 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:22:51 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  8 06:22:51 np0005475493 nova_compute[262220]: 2025-10-08 10:22:51.887 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:22:51 np0005475493 nova_compute[262220]: 2025-10-08 10:22:51.887 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  8 06:22:51 np0005475493 nova_compute[262220]: 2025-10-08 10:22:51.888 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  8 06:22:51 np0005475493 nova_compute[262220]: 2025-10-08 10:22:51.909 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  8 06:22:52 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.26779 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Oct  8 06:22:52 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.27038 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Oct  8 06:22:52 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:22:52 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:22:52 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:22:52.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:22:52 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool ls", "detail": "detail"} v 0)
Oct  8 06:22:52 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4239875668' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Oct  8 06:22:52 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.27047 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Oct  8 06:22:52 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:22:52 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  8 06:22:52 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:22:52 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  8 06:22:52 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:22:52 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 06:22:52 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:22:52 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 06:22:52 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:22:52 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Oct  8 06:22:52 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:22:52 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  8 06:22:52 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:22:52 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 06:22:52 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:22:52 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct  8 06:22:52 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:22:52 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  8 06:22:52 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:22:52 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 06:22:52 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:22:52 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  8 06:22:52 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:22:52 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  8 06:22:52 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd stat"} v 0)
Oct  8 06:22:52 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1916664575' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
Oct  8 06:22:52 np0005475493 nova_compute[262220]: 2025-10-08 10:22:52.886 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:22:52 np0005475493 nova_compute[262220]: 2025-10-08 10:22:52.937 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  8 06:22:52 np0005475493 nova_compute[262220]: 2025-10-08 10:22:52.938 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  8 06:22:52 np0005475493 nova_compute[262220]: 2025-10-08 10:22:52.938 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  8 06:22:52 np0005475493 nova_compute[262220]: 2025-10-08 10:22:52.938 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  8 06:22:52 np0005475493 nova_compute[262220]: 2025-10-08 10:22:52.938 2 DEBUG oslo_concurrency.processutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  8 06:22:53 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.17061 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Oct  8 06:22:53 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:22:53 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:22:53 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:22:53.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:22:53 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.26812 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Oct  8 06:22:53 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct  8 06:22:53 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/362724180' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  8 06:22:53 np0005475493 nova_compute[262220]: 2025-10-08 10:22:53.422 2 DEBUG oslo_concurrency.processutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  8 06:22:53 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1150: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  8 06:22:53 np0005475493 nova_compute[262220]: 2025-10-08 10:22:53.573 2 WARNING nova.virt.libvirt.driver [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  8 06:22:53 np0005475493 nova_compute[262220]: 2025-10-08 10:22:53.575 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4338MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  8 06:22:53 np0005475493 nova_compute[262220]: 2025-10-08 10:22:53.575 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  8 06:22:53 np0005475493 nova_compute[262220]: 2025-10-08 10:22:53.576 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  8 06:22:53 np0005475493 nova_compute[262220]: 2025-10-08 10:22:53.609 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:22:53 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.17079 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Oct  8 06:22:53 np0005475493 nova_compute[262220]: 2025-10-08 10:22:53.647 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  8 06:22:53 np0005475493 nova_compute[262220]: 2025-10-08 10:22:53.648 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  8 06:22:53 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.27074 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Oct  8 06:22:53 np0005475493 nova_compute[262220]: 2025-10-08 10:22:53.672 2 DEBUG oslo_concurrency.processutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  8 06:22:53 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.26824 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Oct  8 06:22:53 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:22:53 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  8 06:22:53 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:22:53 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  8 06:22:53 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:22:53 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 06:22:53 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:22:53 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 06:22:53 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:22:53 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Oct  8 06:22:53 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:22:53 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  8 06:22:53 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:22:53 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 06:22:53 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:22:53 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct  8 06:22:53 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:22:53 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  8 06:22:53 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:22:53 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 06:22:53 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:22:53 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  8 06:22:53 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:22:53 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  8 06:22:54 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:22:53 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  8 06:22:54 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:22:54 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  8 06:22:54 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:22:54 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 06:22:54 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:22:54 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  8 06:22:54 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.27089 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Oct  8 06:22:54 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct  8 06:22:54 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/86879457' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  8 06:22:54 np0005475493 nova_compute[262220]: 2025-10-08 10:22:54.146 2 DEBUG oslo_concurrency.processutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  8 06:22:54 np0005475493 nova_compute[262220]: 2025-10-08 10:22:54.151 2 DEBUG nova.compute.provider_tree [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Inventory has not changed in ProviderTree for provider: 62e4b021-d3ae-43f9-883d-805e2c7d21a2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  8 06:22:54 np0005475493 nova_compute[262220]: 2025-10-08 10:22:54.166 2 DEBUG nova.scheduler.client.report [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Inventory has not changed for provider 62e4b021-d3ae-43f9-883d-805e2c7d21a2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  8 06:22:54 np0005475493 nova_compute[262220]: 2025-10-08 10:22:54.167 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  8 06:22:54 np0005475493 nova_compute[262220]: 2025-10-08 10:22:54.167 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.591s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  8 06:22:54 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:22:54 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:22:54 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:22:54.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:22:54 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:22:54 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "time-sync-status"} v 0)
Oct  8 06:22:54 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3052212597' entity='client.admin' cmd=[{"prefix": "time-sync-status"}]: dispatch
Oct  8 06:22:54 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config dump", "format": "json-pretty"} v 0)
Oct  8 06:22:54 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2993112683' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json-pretty"}]: dispatch
Oct  8 06:22:55 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.26857 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Oct  8 06:22:55 np0005475493 nova_compute[262220]: 2025-10-08 10:22:55.163 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:22:55 np0005475493 nova_compute[262220]: 2025-10-08 10:22:55.164 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:22:55 np0005475493 nova_compute[262220]: 2025-10-08 10:22:55.164 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:22:55 np0005475493 nova_compute[262220]: 2025-10-08 10:22:55.164 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  8 06:22:55 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:22:55 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:22:55 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:22:55.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:22:55 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.17124 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct  8 06:22:55 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.26863 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Oct  8 06:22:55 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1151: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  8 06:22:55 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:22:55] "GET /metrics HTTP/1.1" 200 48453 "" "Prometheus/2.51.0"
Oct  8 06:22:55 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:22:55] "GET /metrics HTTP/1.1" 200 48453 "" "Prometheus/2.51.0"
Oct  8 06:22:55 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "detail": "detail", "format": "json-pretty"} v 0)
Oct  8 06:22:55 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3757659921' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct  8 06:22:55 np0005475493 nova_compute[262220]: 2025-10-08 10:22:55.873 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:22:55 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.27131 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct  8 06:22:56 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json-pretty"} v 0)
Oct  8 06:22:56 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1108141833' entity='client.admin' cmd=[{"prefix": "df", "format": "json-pretty"}]: dispatch
Oct  8 06:22:56 np0005475493 systemd[1]: virtsecretd.service: Deactivated successfully.
Oct  8 06:22:56 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:22:56 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:22:56 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:22:56.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:22:56 np0005475493 podman[292132]: 2025-10-08 10:22:56.439229307 +0000 UTC m=+0.133758554 container health_status 750e81e9592502f2fa35d047c8ae9bd75eb38ed415148db558454f5281397e7f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_controller)
Oct  8 06:22:56 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "fs dump", "format": "json-pretty"} v 0)
Oct  8 06:22:56 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4000945604' entity='client.admin' cmd=[{"prefix": "fs dump", "format": "json-pretty"}]: dispatch
Oct  8 06:22:57 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "fs ls", "format": "json-pretty"} v 0)
Oct  8 06:22:57 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1452857958' entity='client.admin' cmd=[{"prefix": "fs ls", "format": "json-pretty"}]: dispatch
Oct  8 06:22:57 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:22:57.211Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:22:57 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:22:57 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 06:22:57 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:22:57.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 06:22:57 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.26911 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct  8 06:22:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:22:57.422 163175 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  8 06:22:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:22:57.422 163175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  8 06:22:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:22:57.422 163175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  8 06:22:57 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1152: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  8 06:22:57 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.17163 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct  8 06:22:58 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds stat", "format": "json-pretty"} v 0)
Oct  8 06:22:58 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2637272810' entity='client.admin' cmd=[{"prefix": "mds stat", "format": "json-pretty"}]: dispatch
Oct  8 06:22:58 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.27185 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct  8 06:22:58 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:22:58 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:22:58 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:22:58.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:22:58 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json-pretty"} v 0)
Oct  8 06:22:58 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2644108507' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json-pretty"}]: dispatch
Oct  8 06:22:58 np0005475493 nova_compute[262220]: 2025-10-08 10:22:58.612 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:22:58 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:22:58.861Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct  8 06:22:58 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:22:58.862Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:22:58 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.17184 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct  8 06:22:59 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:22:58 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  8 06:22:59 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:22:58 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  8 06:22:59 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:22:58 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 06:22:59 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:22:59 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  8 06:22:59 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:22:59 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:22:59 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:22:59.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:22:59 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.27212 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct  8 06:22:59 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json-pretty"} v 0)
Oct  8 06:22:59 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2979068801' entity='client.admin' cmd=[{"prefix": "osd blocklist ls", "format": "json-pretty"}]: dispatch
Oct  8 06:22:59 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:22:59 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1153: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  8 06:22:59 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.26944 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct  8 06:22:59 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.17199 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct  8 06:22:59 np0005475493 nova_compute[262220]: 2025-10-08 10:22:59.887 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:23:00 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.27236 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct  8 06:23:00 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.17205 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct  8 06:23:00 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:23:00 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:23:00 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:23:00.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:23:00 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.27245 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct  8 06:23:00 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd dump", "format": "json-pretty"} v 0)
Oct  8 06:23:00 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/874241760' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json-pretty"}]: dispatch
Oct  8 06:23:00 np0005475493 podman[292442]: 2025-10-08 10:23:00.55336167 +0000 UTC m=+0.072995886 container health_status 1ece418ccdf19a8019e8be8e665c938dc3c333817a211973536e2416f095c311 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=multipathd)
Oct  8 06:23:00 np0005475493 podman[292444]: 2025-10-08 10:23:00.580100315 +0000 UTC m=+0.099857615 container health_status 96c17e36c3e57b043a764fc4d64e20610b8683e4881cc1857643eca7aeb37784 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001)
Oct  8 06:23:00 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.26974 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct  8 06:23:00 np0005475493 nova_compute[262220]: 2025-10-08 10:23:00.874 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:23:00 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd numa-status", "format": "json-pretty"} v 0)
Oct  8 06:23:00 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3967471108' entity='client.admin' cmd=[{"prefix": "osd numa-status", "format": "json-pretty"}]: dispatch
Oct  8 06:23:01 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:23:01 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:23:01 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:23:01.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:23:01 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.27266 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct  8 06:23:01 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1154: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  8 06:23:01 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.26995 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct  8 06:23:01 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.27278 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct  8 06:23:01 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.17229 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct  8 06:23:01 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:23:01 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  8 06:23:01 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:23:01 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  8 06:23:01 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:23:01 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 06:23:01 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:23:01 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 06:23:01 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:23:01 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Oct  8 06:23:01 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:23:01 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  8 06:23:01 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:23:01 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 06:23:01 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:23:01 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct  8 06:23:01 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:23:01 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  8 06:23:01 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:23:01 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 06:23:01 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:23:01 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  8 06:23:01 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:23:01 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  8 06:23:01 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.27284 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct  8 06:23:01 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:23:01 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  8 06:23:01 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:23:01 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  8 06:23:01 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:23:01 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 06:23:01 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:23:01 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 06:23:01 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:23:01 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Oct  8 06:23:01 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:23:01 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  8 06:23:01 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:23:01 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 06:23:01 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:23:01 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct  8 06:23:01 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:23:01 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  8 06:23:01 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:23:01 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 06:23:01 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:23:01 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  8 06:23:01 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:23:01 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  8 06:23:01 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.27004 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct  8 06:23:02 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"} v 0)
Oct  8 06:23:02 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2341215762' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct  8 06:23:02 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:23:02 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:23:02 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:23:02.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:23:02 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd stat", "format": "json-pretty"} v 0)
Oct  8 06:23:02 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3460873467' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json-pretty"}]: dispatch
Oct  8 06:23:02 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.17253 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct  8 06:23:02 np0005475493 virtqemud[261885]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Oct  8 06:23:02 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  8 06:23:02 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  8 06:23:03 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.27022 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct  8 06:23:03 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.27311 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct  8 06:23:03 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.17262 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct  8 06:23:03 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:23:03 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:23:03 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:23:03.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:23:03 np0005475493 ceph-mon[73572]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  8 06:23:03 np0005475493 ceph-mon[73572]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 2400.0 total, 600.0 interval#012Cumulative writes: 7382 writes, 32K keys, 7382 commit groups, 1.0 writes per commit group, ingest: 0.06 GB, 0.02 MB/s#012Cumulative WAL: 7382 writes, 7382 syncs, 1.00 writes per sync, written: 0.06 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1600 writes, 7120 keys, 1600 commit groups, 1.0 writes per commit group, ingest: 11.92 MB, 0.02 MB/s#012Interval WAL: 1600 writes, 1600 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     91.5      0.56              0.13        18    0.031       0      0       0.0       0.0#012  L6      1/0   13.34 MB   0.0      0.3     0.0      0.2       0.2      0.0       0.0   4.3    135.2    115.4      1.90              0.51        17    0.112     93K   9474       0.0       0.0#012 Sum      1/0   13.34 MB   0.0      0.3     0.0      0.2       0.3      0.1       0.0   5.3    104.5    110.0      2.46              0.65        35    0.070     93K   9474       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   5.9    103.0    105.5      0.62              0.19         8    0.078     26K   2564       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.3     0.0      0.2       0.2      0.0       0.0   0.0    135.2    115.4      1.90              0.51        17    0.112     93K   9474       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     92.0      0.55              0.13        17    0.033       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     16.5      0.00              0.00         1    0.003       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 2400.0 total, 600.0 interval#012Flush(GB): cumulative 0.050, interval 0.011#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.26 GB write, 0.11 MB/s write, 0.25 GB read, 0.11 MB/s read, 2.5 seconds#012Interval compaction: 0.06 GB write, 0.11 MB/s write, 0.06 GB read, 0.11 MB/s read, 0.6 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55f7a1ce3350#2 capacity: 304.00 MB usage: 24.40 MB table_size: 0 occupancy: 18446744073709551615 collections: 5 last_copies: 0 last_secs: 0.000192 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(1488,23.66 MB,7.78308%) FilterBlock(36,274.42 KB,0.0881546%) IndexBlock(36,482.33 KB,0.154942%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Oct  8 06:23:03 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1155: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  8 06:23:03 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.27028 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct  8 06:23:03 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:23:03 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  8 06:23:03 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:23:03 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  8 06:23:03 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:23:03 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 06:23:03 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:23:03 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 06:23:03 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:23:03 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Oct  8 06:23:03 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:23:03 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  8 06:23:03 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:23:03 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 06:23:03 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:23:03 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct  8 06:23:03 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:23:03 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  8 06:23:03 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:23:03 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 06:23:03 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:23:03 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  8 06:23:03 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:23:03 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  8 06:23:03 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.27323 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct  8 06:23:03 np0005475493 nova_compute[262220]: 2025-10-08 10:23:03.655 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:23:03 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Oct  8 06:23:03 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1907194708' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct  8 06:23:03 np0005475493 systemd[1]: Starting Time & Date Service...
Oct  8 06:23:03 np0005475493 systemd[1]: Started Time & Date Service.
Oct  8 06:23:04 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:23:03 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  8 06:23:04 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:23:04 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  8 06:23:04 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:23:04 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 06:23:04 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:23:04 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  8 06:23:04 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "time-sync-status", "format": "json-pretty"} v 0)
Oct  8 06:23:04 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3508599741' entity='client.admin' cmd=[{"prefix": "time-sync-status", "format": "json-pretty"}]: dispatch
Oct  8 06:23:04 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:23:04 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:23:04 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:23:04.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:23:04 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:23:04 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.27055 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct  8 06:23:05 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.27061 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct  8 06:23:05 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:23:05 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:23:05 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:23:05.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:23:05 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1156: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  8 06:23:05 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:23:05] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
Oct  8 06:23:05 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:23:05] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
Oct  8 06:23:05 np0005475493 nova_compute[262220]: 2025-10-08 10:23:05.924 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:23:06 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:23:06 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:23:06 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:23:06.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:23:07 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:23:07.212Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct  8 06:23:07 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:23:07.212Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct  8 06:23:07 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:23:07.212Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct  8 06:23:07 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:23:07 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 06:23:07 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:23:07.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 06:23:07 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1157: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  8 06:23:08 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:23:08 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:23:08 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:23:08.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:23:08 np0005475493 nova_compute[262220]: 2025-10-08 10:23:08.657 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:23:08 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:23:08.864Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct  8 06:23:08 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:23:08.864Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:23:09 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:23:08 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  8 06:23:09 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:23:08 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  8 06:23:09 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:23:08 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 06:23:09 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:23:09 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  8 06:23:09 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:23:09 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.002000065s ======
Oct  8 06:23:09 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:23:09.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000065s
Oct  8 06:23:09 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:23:09 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1158: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  8 06:23:10 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:23:10 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:23:10 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:23:10.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:23:10 np0005475493 nova_compute[262220]: 2025-10-08 10:23:10.925 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:23:11 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:23:11 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:23:11 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:23:11.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:23:11 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1159: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  8 06:23:12 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:23:12 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:23:12 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:23:12.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:23:13 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:23:13 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:23:13 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:23:13.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:23:13 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1160: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  8 06:23:13 np0005475493 nova_compute[262220]: 2025-10-08 10:23:13.660 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:23:14 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:23:13 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  8 06:23:14 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:23:13 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  8 06:23:14 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:23:13 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 06:23:14 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:23:14 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  8 06:23:14 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:23:14 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:23:14 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:23:14.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:23:14 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:23:15 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:23:15 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:23:15 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:23:15.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:23:15 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1161: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  8 06:23:15 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:23:15] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
Oct  8 06:23:15 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:23:15] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
Oct  8 06:23:15 np0005475493 podman[293175]: 2025-10-08 10:23:15.912816196 +0000 UTC m=+0.068464948 container health_status 2ac58bf9d2c7421f24478941b0f23af12cc30298ac84467db16bf52ecb1157c3 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, config_id=iscsid)
Oct  8 06:23:15 np0005475493 nova_compute[262220]: 2025-10-08 10:23:15.927 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:23:16 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:23:16 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:23:16 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:23:16.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:23:17 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:23:17.213Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct  8 06:23:17 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:23:17.217Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:23:17 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:23:17 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:23:17 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:23:17.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:23:17 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1162: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  8 06:23:17 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  8 06:23:17 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  8 06:23:17 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 06:23:17 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 06:23:18 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 06:23:18 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 06:23:18 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 06:23:18 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 06:23:18 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:23:18 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:23:18 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:23:18.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:23:18 np0005475493 nova_compute[262220]: 2025-10-08 10:23:18.673 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:23:18 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:23:18.866Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:23:19 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:23:18 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  8 06:23:19 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:23:19 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  8 06:23:19 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:23:19 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 06:23:19 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:23:19 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  8 06:23:19 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:23:19 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:23:19 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:23:19.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:23:19 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:23:19 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1163: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  8 06:23:19 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  8 06:23:19 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  8 06:23:19 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct  8 06:23:19 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  8 06:23:19 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1164: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct  8 06:23:19 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct  8 06:23:19 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:23:19 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct  8 06:23:19 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:23:19 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct  8 06:23:19 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  8 06:23:19 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct  8 06:23:19 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  8 06:23:19 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  8 06:23:19 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  8 06:23:20 np0005475493 podman[293370]: 2025-10-08 10:23:20.29058959 +0000 UTC m=+0.042038433 container create d0ce206ff219867aca0c4cd09590e50be930b97446845818cbbb746dbc9f048c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_villani, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  8 06:23:20 np0005475493 systemd[1]: Started libpod-conmon-d0ce206ff219867aca0c4cd09590e50be930b97446845818cbbb746dbc9f048c.scope.
Oct  8 06:23:20 np0005475493 podman[293370]: 2025-10-08 10:23:20.269847337 +0000 UTC m=+0.021296200 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 06:23:20 np0005475493 systemd[1]: Started libcrun container.
Oct  8 06:23:20 np0005475493 podman[293370]: 2025-10-08 10:23:20.388905595 +0000 UTC m=+0.140354438 container init d0ce206ff219867aca0c4cd09590e50be930b97446845818cbbb746dbc9f048c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_villani, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Oct  8 06:23:20 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:23:20 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:23:20 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:23:20.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:23:20 np0005475493 podman[293370]: 2025-10-08 10:23:20.397605746 +0000 UTC m=+0.149054579 container start d0ce206ff219867aca0c4cd09590e50be930b97446845818cbbb746dbc9f048c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_villani, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  8 06:23:20 np0005475493 happy_villani[293386]: 167 167
Oct  8 06:23:20 np0005475493 systemd[1]: libpod-d0ce206ff219867aca0c4cd09590e50be930b97446845818cbbb746dbc9f048c.scope: Deactivated successfully.
Oct  8 06:23:20 np0005475493 conmon[293386]: conmon d0ce206ff219867aca0c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d0ce206ff219867aca0c4cd09590e50be930b97446845818cbbb746dbc9f048c.scope/container/memory.events
Oct  8 06:23:20 np0005475493 podman[293370]: 2025-10-08 10:23:20.404049895 +0000 UTC m=+0.155498758 container attach d0ce206ff219867aca0c4cd09590e50be930b97446845818cbbb746dbc9f048c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_villani, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct  8 06:23:20 np0005475493 podman[293370]: 2025-10-08 10:23:20.405671758 +0000 UTC m=+0.157120601 container died d0ce206ff219867aca0c4cd09590e50be930b97446845818cbbb746dbc9f048c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_villani, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid)
Oct  8 06:23:20 np0005475493 systemd[1]: var-lib-containers-storage-overlay-17eda232fc737d9e66aa5234a02c2217a8defa46d7ddaa4f7c12a2addf028196-merged.mount: Deactivated successfully.
Oct  8 06:23:20 np0005475493 podman[293370]: 2025-10-08 10:23:20.451370788 +0000 UTC m=+0.202819631 container remove d0ce206ff219867aca0c4cd09590e50be930b97446845818cbbb746dbc9f048c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_villani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Oct  8 06:23:20 np0005475493 systemd[1]: libpod-conmon-d0ce206ff219867aca0c4cd09590e50be930b97446845818cbbb746dbc9f048c.scope: Deactivated successfully.
Oct  8 06:23:20 np0005475493 podman[293411]: 2025-10-08 10:23:20.59405349 +0000 UTC m=+0.023647896 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 06:23:20 np0005475493 podman[293411]: 2025-10-08 10:23:20.693685649 +0000 UTC m=+0.123280055 container create 7226b249e868a4e16d455df2ac73aca60eb06b2d71536d864d59914d35aaf62d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_rhodes, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Oct  8 06:23:20 np0005475493 systemd[1]: Started libpod-conmon-7226b249e868a4e16d455df2ac73aca60eb06b2d71536d864d59914d35aaf62d.scope.
Oct  8 06:23:20 np0005475493 systemd[1]: Started libcrun container.
Oct  8 06:23:20 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/256fe7cf2933ac5872b7aadf7d5ed338625ea264337ad5439b90fcb303938334/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  8 06:23:20 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/256fe7cf2933ac5872b7aadf7d5ed338625ea264337ad5439b90fcb303938334/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 06:23:20 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/256fe7cf2933ac5872b7aadf7d5ed338625ea264337ad5439b90fcb303938334/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 06:23:20 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/256fe7cf2933ac5872b7aadf7d5ed338625ea264337ad5439b90fcb303938334/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  8 06:23:20 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/256fe7cf2933ac5872b7aadf7d5ed338625ea264337ad5439b90fcb303938334/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  8 06:23:20 np0005475493 podman[293411]: 2025-10-08 10:23:20.82552844 +0000 UTC m=+0.255122856 container init 7226b249e868a4e16d455df2ac73aca60eb06b2d71536d864d59914d35aaf62d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_rhodes, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  8 06:23:20 np0005475493 podman[293411]: 2025-10-08 10:23:20.837715154 +0000 UTC m=+0.267309570 container start 7226b249e868a4e16d455df2ac73aca60eb06b2d71536d864d59914d35aaf62d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_rhodes, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct  8 06:23:20 np0005475493 podman[293411]: 2025-10-08 10:23:20.842271102 +0000 UTC m=+0.271865508 container attach 7226b249e868a4e16d455df2ac73aca60eb06b2d71536d864d59914d35aaf62d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_rhodes, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct  8 06:23:20 np0005475493 nova_compute[262220]: 2025-10-08 10:23:20.928 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:23:21 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  8 06:23:21 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:23:21 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:23:21 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  8 06:23:21 np0005475493 amazing_rhodes[293428]: --> passed data devices: 0 physical, 1 LVM
Oct  8 06:23:21 np0005475493 amazing_rhodes[293428]: --> All data devices are unavailable
Oct  8 06:23:21 np0005475493 systemd[1]: libpod-7226b249e868a4e16d455df2ac73aca60eb06b2d71536d864d59914d35aaf62d.scope: Deactivated successfully.
Oct  8 06:23:21 np0005475493 podman[293411]: 2025-10-08 10:23:21.210311455 +0000 UTC m=+0.639905931 container died 7226b249e868a4e16d455df2ac73aca60eb06b2d71536d864d59914d35aaf62d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_rhodes, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325)
Oct  8 06:23:21 np0005475493 systemd[1]: var-lib-containers-storage-overlay-256fe7cf2933ac5872b7aadf7d5ed338625ea264337ad5439b90fcb303938334-merged.mount: Deactivated successfully.
Oct  8 06:23:21 np0005475493 podman[293411]: 2025-10-08 10:23:21.262790435 +0000 UTC m=+0.692384841 container remove 7226b249e868a4e16d455df2ac73aca60eb06b2d71536d864d59914d35aaf62d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_rhodes, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  8 06:23:21 np0005475493 systemd[1]: libpod-conmon-7226b249e868a4e16d455df2ac73aca60eb06b2d71536d864d59914d35aaf62d.scope: Deactivated successfully.
Oct  8 06:23:21 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:23:21 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:23:21 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:23:21.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:23:21 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1165: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct  8 06:23:21 np0005475493 podman[293550]: 2025-10-08 10:23:21.808586006 +0000 UTC m=+0.040665038 container create f2feb507a74c2c2d14764dbc12ed956b7927e28774b13e8c78fbe7ae83065ab5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_nash, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  8 06:23:21 np0005475493 systemd[1]: Started libpod-conmon-f2feb507a74c2c2d14764dbc12ed956b7927e28774b13e8c78fbe7ae83065ab5.scope.
Oct  8 06:23:21 np0005475493 systemd[1]: Started libcrun container.
Oct  8 06:23:21 np0005475493 podman[293550]: 2025-10-08 10:23:21.793233079 +0000 UTC m=+0.025312121 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 06:23:21 np0005475493 podman[293550]: 2025-10-08 10:23:21.889997324 +0000 UTC m=+0.122076436 container init f2feb507a74c2c2d14764dbc12ed956b7927e28774b13e8c78fbe7ae83065ab5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_nash, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325)
Oct  8 06:23:21 np0005475493 podman[293550]: 2025-10-08 10:23:21.897162007 +0000 UTC m=+0.129241029 container start f2feb507a74c2c2d14764dbc12ed956b7927e28774b13e8c78fbe7ae83065ab5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_nash, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct  8 06:23:21 np0005475493 pedantic_nash[293567]: 167 167
Oct  8 06:23:21 np0005475493 podman[293550]: 2025-10-08 10:23:21.900398762 +0000 UTC m=+0.132477824 container attach f2feb507a74c2c2d14764dbc12ed956b7927e28774b13e8c78fbe7ae83065ab5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_nash, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct  8 06:23:21 np0005475493 systemd[1]: libpod-f2feb507a74c2c2d14764dbc12ed956b7927e28774b13e8c78fbe7ae83065ab5.scope: Deactivated successfully.
Oct  8 06:23:21 np0005475493 podman[293550]: 2025-10-08 10:23:21.900975119 +0000 UTC m=+0.133054161 container died f2feb507a74c2c2d14764dbc12ed956b7927e28774b13e8c78fbe7ae83065ab5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_nash, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  8 06:23:21 np0005475493 systemd[1]: var-lib-containers-storage-overlay-d0809961297a027ca39bcbd93f50b5d27685c3f2922143f28763a4560b46ed2e-merged.mount: Deactivated successfully.
Oct  8 06:23:21 np0005475493 podman[293550]: 2025-10-08 10:23:21.940174349 +0000 UTC m=+0.172253381 container remove f2feb507a74c2c2d14764dbc12ed956b7927e28774b13e8c78fbe7ae83065ab5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_nash, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Oct  8 06:23:21 np0005475493 systemd[1]: libpod-conmon-f2feb507a74c2c2d14764dbc12ed956b7927e28774b13e8c78fbe7ae83065ab5.scope: Deactivated successfully.
Oct  8 06:23:22 np0005475493 podman[293593]: 2025-10-08 10:23:22.112977698 +0000 UTC m=+0.045437653 container create 2a7f8be23e6289f253408b45814f2802a7bd51ce2df7b7e32cf2b6cbef235f06 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_snyder, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  8 06:23:22 np0005475493 systemd[1]: Started libpod-conmon-2a7f8be23e6289f253408b45814f2802a7bd51ce2df7b7e32cf2b6cbef235f06.scope.
Oct  8 06:23:22 np0005475493 systemd[1]: Started libcrun container.
Oct  8 06:23:22 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84fae1783fc5ade0e792a82d05e16ffb5f9a2dec5e547529a97870454494780e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  8 06:23:22 np0005475493 podman[293593]: 2025-10-08 10:23:22.090890813 +0000 UTC m=+0.023350788 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 06:23:22 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84fae1783fc5ade0e792a82d05e16ffb5f9a2dec5e547529a97870454494780e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 06:23:22 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84fae1783fc5ade0e792a82d05e16ffb5f9a2dec5e547529a97870454494780e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 06:23:22 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84fae1783fc5ade0e792a82d05e16ffb5f9a2dec5e547529a97870454494780e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  8 06:23:22 np0005475493 podman[293593]: 2025-10-08 10:23:22.202660434 +0000 UTC m=+0.135120399 container init 2a7f8be23e6289f253408b45814f2802a7bd51ce2df7b7e32cf2b6cbef235f06 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_snyder, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  8 06:23:22 np0005475493 podman[293593]: 2025-10-08 10:23:22.210059663 +0000 UTC m=+0.142519608 container start 2a7f8be23e6289f253408b45814f2802a7bd51ce2df7b7e32cf2b6cbef235f06 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_snyder, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Oct  8 06:23:22 np0005475493 podman[293593]: 2025-10-08 10:23:22.214164396 +0000 UTC m=+0.146624341 container attach 2a7f8be23e6289f253408b45814f2802a7bd51ce2df7b7e32cf2b6cbef235f06 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_snyder, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Oct  8 06:23:22 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:23:22 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:23:22 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:23:22.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:23:22 np0005475493 mystifying_snyder[293610]: {
Oct  8 06:23:22 np0005475493 mystifying_snyder[293610]:    "1": [
Oct  8 06:23:22 np0005475493 mystifying_snyder[293610]:        {
Oct  8 06:23:22 np0005475493 mystifying_snyder[293610]:            "devices": [
Oct  8 06:23:22 np0005475493 mystifying_snyder[293610]:                "/dev/loop3"
Oct  8 06:23:22 np0005475493 mystifying_snyder[293610]:            ],
Oct  8 06:23:22 np0005475493 mystifying_snyder[293610]:            "lv_name": "ceph_lv0",
Oct  8 06:23:22 np0005475493 mystifying_snyder[293610]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  8 06:23:22 np0005475493 mystifying_snyder[293610]:            "lv_size": "21470642176",
Oct  8 06:23:22 np0005475493 mystifying_snyder[293610]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=787292cc-8154-50c4-9e00-e9be3e817149,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=85fe3e7b-5e0f-4a19-934c-310215b2e933,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct  8 06:23:22 np0005475493 mystifying_snyder[293610]:            "lv_uuid": "znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj",
Oct  8 06:23:22 np0005475493 mystifying_snyder[293610]:            "name": "ceph_lv0",
Oct  8 06:23:22 np0005475493 mystifying_snyder[293610]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  8 06:23:22 np0005475493 mystifying_snyder[293610]:            "tags": {
Oct  8 06:23:22 np0005475493 mystifying_snyder[293610]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  8 06:23:22 np0005475493 mystifying_snyder[293610]:                "ceph.block_uuid": "znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj",
Oct  8 06:23:22 np0005475493 mystifying_snyder[293610]:                "ceph.cephx_lockbox_secret": "",
Oct  8 06:23:22 np0005475493 mystifying_snyder[293610]:                "ceph.cluster_fsid": "787292cc-8154-50c4-9e00-e9be3e817149",
Oct  8 06:23:22 np0005475493 mystifying_snyder[293610]:                "ceph.cluster_name": "ceph",
Oct  8 06:23:22 np0005475493 mystifying_snyder[293610]:                "ceph.crush_device_class": "",
Oct  8 06:23:22 np0005475493 mystifying_snyder[293610]:                "ceph.encrypted": "0",
Oct  8 06:23:22 np0005475493 mystifying_snyder[293610]:                "ceph.osd_fsid": "85fe3e7b-5e0f-4a19-934c-310215b2e933",
Oct  8 06:23:22 np0005475493 mystifying_snyder[293610]:                "ceph.osd_id": "1",
Oct  8 06:23:22 np0005475493 mystifying_snyder[293610]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  8 06:23:22 np0005475493 mystifying_snyder[293610]:                "ceph.type": "block",
Oct  8 06:23:22 np0005475493 mystifying_snyder[293610]:                "ceph.vdo": "0",
Oct  8 06:23:22 np0005475493 mystifying_snyder[293610]:                "ceph.with_tpm": "0"
Oct  8 06:23:22 np0005475493 mystifying_snyder[293610]:            },
Oct  8 06:23:22 np0005475493 mystifying_snyder[293610]:            "type": "block",
Oct  8 06:23:22 np0005475493 mystifying_snyder[293610]:            "vg_name": "ceph_vg0"
Oct  8 06:23:22 np0005475493 mystifying_snyder[293610]:        }
Oct  8 06:23:22 np0005475493 mystifying_snyder[293610]:    ]
Oct  8 06:23:22 np0005475493 mystifying_snyder[293610]: }
Oct  8 06:23:22 np0005475493 systemd[1]: libpod-2a7f8be23e6289f253408b45814f2802a7bd51ce2df7b7e32cf2b6cbef235f06.scope: Deactivated successfully.
Oct  8 06:23:22 np0005475493 podman[293593]: 2025-10-08 10:23:22.489054851 +0000 UTC m=+0.421514816 container died 2a7f8be23e6289f253408b45814f2802a7bd51ce2df7b7e32cf2b6cbef235f06 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_snyder, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Oct  8 06:23:22 np0005475493 systemd[1]: var-lib-containers-storage-overlay-84fae1783fc5ade0e792a82d05e16ffb5f9a2dec5e547529a97870454494780e-merged.mount: Deactivated successfully.
Oct  8 06:23:22 np0005475493 podman[293593]: 2025-10-08 10:23:22.53066992 +0000 UTC m=+0.463129865 container remove 2a7f8be23e6289f253408b45814f2802a7bd51ce2df7b7e32cf2b6cbef235f06 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_snyder, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Oct  8 06:23:22 np0005475493 systemd[1]: libpod-conmon-2a7f8be23e6289f253408b45814f2802a7bd51ce2df7b7e32cf2b6cbef235f06.scope: Deactivated successfully.
Oct  8 06:23:23 np0005475493 podman[293721]: 2025-10-08 10:23:23.104594893 +0000 UTC m=+0.039589044 container create ef05bdb101f5fc90a958d2fcf24dfbed515afab88c9f72236f5db06afde7af36 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_curran, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct  8 06:23:23 np0005475493 systemd[1]: Started libpod-conmon-ef05bdb101f5fc90a958d2fcf24dfbed515afab88c9f72236f5db06afde7af36.scope.
Oct  8 06:23:23 np0005475493 podman[293721]: 2025-10-08 10:23:23.087256131 +0000 UTC m=+0.022250202 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 06:23:23 np0005475493 systemd[1]: Started libcrun container.
Oct  8 06:23:23 np0005475493 podman[293721]: 2025-10-08 10:23:23.226660127 +0000 UTC m=+0.161654238 container init ef05bdb101f5fc90a958d2fcf24dfbed515afab88c9f72236f5db06afde7af36 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_curran, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  8 06:23:23 np0005475493 podman[293721]: 2025-10-08 10:23:23.233980744 +0000 UTC m=+0.168974775 container start ef05bdb101f5fc90a958d2fcf24dfbed515afab88c9f72236f5db06afde7af36 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_curran, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True)
Oct  8 06:23:23 np0005475493 podman[293721]: 2025-10-08 10:23:23.238740909 +0000 UTC m=+0.173735030 container attach ef05bdb101f5fc90a958d2fcf24dfbed515afab88c9f72236f5db06afde7af36 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_curran, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True)
Oct  8 06:23:23 np0005475493 adoring_curran[293737]: 167 167
Oct  8 06:23:23 np0005475493 systemd[1]: libpod-ef05bdb101f5fc90a958d2fcf24dfbed515afab88c9f72236f5db06afde7af36.scope: Deactivated successfully.
Oct  8 06:23:23 np0005475493 podman[293721]: 2025-10-08 10:23:23.243341797 +0000 UTC m=+0.178335868 container died ef05bdb101f5fc90a958d2fcf24dfbed515afab88c9f72236f5db06afde7af36 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_curran, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct  8 06:23:23 np0005475493 systemd[1]: var-lib-containers-storage-overlay-493c6bc88c9e786d2d14eda2c830b9bd5258e42342aad5a45834caed28fb85f7-merged.mount: Deactivated successfully.
Oct  8 06:23:23 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:23:23 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:23:23 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:23:23.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:23:23 np0005475493 podman[293721]: 2025-10-08 10:23:23.301420189 +0000 UTC m=+0.236414260 container remove ef05bdb101f5fc90a958d2fcf24dfbed515afab88c9f72236f5db06afde7af36 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_curran, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid)
Oct  8 06:23:23 np0005475493 systemd[1]: libpod-conmon-ef05bdb101f5fc90a958d2fcf24dfbed515afab88c9f72236f5db06afde7af36.scope: Deactivated successfully.
Oct  8 06:23:23 np0005475493 podman[293764]: 2025-10-08 10:23:23.506744921 +0000 UTC m=+0.061664429 container create 894877bebfae276a4daebfbeeefadad43e8b80962bccd95d6b62f5ff29b3af08 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_hugle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Oct  8 06:23:23 np0005475493 systemd[1]: Started libpod-conmon-894877bebfae276a4daebfbeeefadad43e8b80962bccd95d6b62f5ff29b3af08.scope.
Oct  8 06:23:23 np0005475493 systemd[1]: Started libcrun container.
Oct  8 06:23:23 np0005475493 podman[293764]: 2025-10-08 10:23:23.485977608 +0000 UTC m=+0.040897166 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 06:23:23 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/351de4da0cb190bf50f8ac8c60a325efcc4167989c189fe6dd386af2a463bc3f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  8 06:23:23 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/351de4da0cb190bf50f8ac8c60a325efcc4167989c189fe6dd386af2a463bc3f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 06:23:23 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/351de4da0cb190bf50f8ac8c60a325efcc4167989c189fe6dd386af2a463bc3f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 06:23:23 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/351de4da0cb190bf50f8ac8c60a325efcc4167989c189fe6dd386af2a463bc3f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  8 06:23:23 np0005475493 podman[293764]: 2025-10-08 10:23:23.596725746 +0000 UTC m=+0.151645254 container init 894877bebfae276a4daebfbeeefadad43e8b80962bccd95d6b62f5ff29b3af08 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_hugle, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  8 06:23:23 np0005475493 podman[293764]: 2025-10-08 10:23:23.605681876 +0000 UTC m=+0.160601394 container start 894877bebfae276a4daebfbeeefadad43e8b80962bccd95d6b62f5ff29b3af08 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_hugle, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True)
Oct  8 06:23:23 np0005475493 podman[293764]: 2025-10-08 10:23:23.61135262 +0000 UTC m=+0.166272118 container attach 894877bebfae276a4daebfbeeefadad43e8b80962bccd95d6b62f5ff29b3af08 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_hugle, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  8 06:23:23 np0005475493 nova_compute[262220]: 2025-10-08 10:23:23.673 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:23:23 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1166: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 1 op/s
Oct  8 06:23:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:23:23 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  8 06:23:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:23:24 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  8 06:23:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:23:24 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 06:23:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:23:24 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  8 06:23:24 np0005475493 lvm[293855]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct  8 06:23:24 np0005475493 lvm[293855]: VG ceph_vg0 finished
Oct  8 06:23:24 np0005475493 ecstatic_hugle[293780]: {}
Oct  8 06:23:24 np0005475493 systemd[1]: libpod-894877bebfae276a4daebfbeeefadad43e8b80962bccd95d6b62f5ff29b3af08.scope: Deactivated successfully.
Oct  8 06:23:24 np0005475493 podman[293764]: 2025-10-08 10:23:24.265193592 +0000 UTC m=+0.820113080 container died 894877bebfae276a4daebfbeeefadad43e8b80962bccd95d6b62f5ff29b3af08 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_hugle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  8 06:23:24 np0005475493 systemd[1]: libpod-894877bebfae276a4daebfbeeefadad43e8b80962bccd95d6b62f5ff29b3af08.scope: Consumed 1.071s CPU time.
Oct  8 06:23:24 np0005475493 systemd[1]: var-lib-containers-storage-overlay-351de4da0cb190bf50f8ac8c60a325efcc4167989c189fe6dd386af2a463bc3f-merged.mount: Deactivated successfully.
Oct  8 06:23:24 np0005475493 podman[293764]: 2025-10-08 10:23:24.315093408 +0000 UTC m=+0.870012886 container remove 894877bebfae276a4daebfbeeefadad43e8b80962bccd95d6b62f5ff29b3af08 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_hugle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct  8 06:23:24 np0005475493 systemd[1]: libpod-conmon-894877bebfae276a4daebfbeeefadad43e8b80962bccd95d6b62f5ff29b3af08.scope: Deactivated successfully.
Oct  8 06:23:24 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  8 06:23:24 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:23:24 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:23:24 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  8 06:23:24 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:23:24 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:23:24 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:23:24.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:23:24 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:23:25 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:23:25 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:23:25 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:23:25.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:23:25 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:23:25 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:23:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:23:25] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Oct  8 06:23:25 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:23:25] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Oct  8 06:23:25 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1167: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct  8 06:23:25 np0005475493 nova_compute[262220]: 2025-10-08 10:23:25.931 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:23:26 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:23:26 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 06:23:26 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:23:26.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 06:23:26 np0005475493 podman[293898]: 2025-10-08 10:23:26.928066639 +0000 UTC m=+0.076531561 container health_status 750e81e9592502f2fa35d047c8ae9bd75eb38ed415148db558454f5281397e7f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001)
Oct  8 06:23:27 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:23:27.218Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct  8 06:23:27 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:23:27.219Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:23:27 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:23:27 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:23:27 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:23:27.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:23:27 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1168: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct  8 06:23:28 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:23:28 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:23:28 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:23:28.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:23:28 np0005475493 nova_compute[262220]: 2025-10-08 10:23:28.676 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:23:28 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:23:28.867Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:23:29 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:23:28 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  8 06:23:29 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:23:28 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  8 06:23:29 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:23:28 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 06:23:29 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:23:29 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  8 06:23:29 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:23:29 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:23:29 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:23:29.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:23:29 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:23:29 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1169: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct  8 06:23:30 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:23:30 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:23:30 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:23:30.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:23:30 np0005475493 podman[293955]: 2025-10-08 10:23:30.891691674 +0000 UTC m=+0.050479477 container health_status 96c17e36c3e57b043a764fc4d64e20610b8683e4881cc1857643eca7aeb37784 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  8 06:23:30 np0005475493 podman[293954]: 2025-10-08 10:23:30.895118395 +0000 UTC m=+0.055322373 container health_status 1ece418ccdf19a8019e8be8e665c938dc3c333817a211973536e2416f095c311 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001, io.buildah.version=1.41.3)
Oct  8 06:23:30 np0005475493 nova_compute[262220]: 2025-10-08 10:23:30.933 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:23:31 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:23:31 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:23:31 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:23:31.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:23:31 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1170: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  8 06:23:32 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:23:32 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:23:32 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:23:32.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:23:32 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  8 06:23:32 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  8 06:23:33 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:23:33 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:23:33 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:23:33.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:23:33 np0005475493 nova_compute[262220]: 2025-10-08 10:23:33.679 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:23:33 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1171: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  8 06:23:33 np0005475493 systemd[1]: systemd-timedated.service: Deactivated successfully.
Oct  8 06:23:33 np0005475493 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Oct  8 06:23:34 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:23:33 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  8 06:23:34 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:23:33 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  8 06:23:34 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:23:33 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 06:23:34 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:23:34 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  8 06:23:34 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:23:34 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:23:34 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:23:34 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:23:34.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:23:35 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:23:35 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:23:35 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:23:35.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:23:35 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:23:35] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Oct  8 06:23:35 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:23:35] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Oct  8 06:23:35 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1172: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  8 06:23:35 np0005475493 nova_compute[262220]: 2025-10-08 10:23:35.934 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:23:36 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:23:36 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:23:36 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:23:36.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:23:37 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:23:37.221Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:23:37 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:23:37 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:23:37 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:23:37.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:23:37 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1173: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  8 06:23:38 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:23:38 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:23:38 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:23:38.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:23:38 np0005475493 nova_compute[262220]: 2025-10-08 10:23:38.679 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:23:38 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:23:38.868Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:23:39 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:23:38 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  8 06:23:39 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:23:39 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  8 06:23:39 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:23:39 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 06:23:39 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:23:39 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  8 06:23:39 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:23:39 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:23:39 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:23:39.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:23:39 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:23:39 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1174: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  8 06:23:40 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:23:40 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:23:40 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:23:40.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:23:40 np0005475493 nova_compute[262220]: 2025-10-08 10:23:40.936 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:23:41 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:23:41 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:23:41 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:23:41.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:23:41 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1175: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  8 06:23:42 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:23:42 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:23:42 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:23:42.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:23:42 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-crash-compute-0[78863]: ERROR:ceph-crash:Error scraping /var/lib/ceph/crash: [Errno 13] Permission denied: '/var/lib/ceph/crash'
Oct  8 06:23:43 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:23:43 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:23:43 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:23:43.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:23:43 np0005475493 nova_compute[262220]: 2025-10-08 10:23:43.681 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:23:43 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1176: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  8 06:23:44 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:23:43 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  8 06:23:44 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:23:43 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  8 06:23:44 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:23:43 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 06:23:44 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:23:44 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  8 06:23:44 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:23:44 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:23:44 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:23:44 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:23:44.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:23:45 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:23:45 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:23:45 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:23:45.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:23:45 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:23:45] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Oct  8 06:23:45 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:23:45] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Oct  8 06:23:45 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1177: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  8 06:23:45 np0005475493 nova_compute[262220]: 2025-10-08 10:23:45.939 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:23:46 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:23:46 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:23:46 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:23:46.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:23:46 np0005475493 podman[294011]: 2025-10-08 10:23:46.917114886 +0000 UTC m=+0.070652860 container health_status 2ac58bf9d2c7421f24478941b0f23af12cc30298ac84467db16bf52ecb1157c3 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  8 06:23:47 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:23:47.222Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:23:47 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:23:47 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:23:47 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:23:47.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:23:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] Optimize plan auto_2025-10-08_10:23:47
Oct  8 06:23:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  8 06:23:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] do_upmap
Oct  8 06:23:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] pools ['volumes', 'cephfs.cephfs.meta', '.nfs', '.rgw.root', 'images', 'cephfs.cephfs.data', '.mgr', 'backups', 'default.rgw.log', 'default.rgw.control', 'default.rgw.meta', 'vms']
Oct  8 06:23:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] prepared 0/10 upmap changes
Oct  8 06:23:47 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1178: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  8 06:23:47 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  8 06:23:47 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  8 06:23:47 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 06:23:47 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 06:23:48 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 06:23:48 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 06:23:48 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 06:23:48 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 06:23:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] _maybe_adjust
Oct  8 06:23:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:23:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  8 06:23:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:23:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  8 06:23:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:23:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 06:23:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:23:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 06:23:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:23:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Oct  8 06:23:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:23:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  8 06:23:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:23:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 06:23:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:23:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct  8 06:23:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:23:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  8 06:23:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:23:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 06:23:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:23:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  8 06:23:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:23:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  8 06:23:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  8 06:23:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  8 06:23:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  8 06:23:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  8 06:23:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  8 06:23:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  8 06:23:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  8 06:23:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  8 06:23:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  8 06:23:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  8 06:23:48 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:23:48 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:23:48 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:23:48.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:23:48 np0005475493 nova_compute[262220]: 2025-10-08 10:23:48.709 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:23:48 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:23:48.868Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:23:49 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:23:48 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  8 06:23:49 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:23:48 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  8 06:23:49 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:23:48 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 06:23:49 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:23:49 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  8 06:23:49 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:23:49 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:23:49 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:23:49.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:23:49 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:23:49 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1179: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  8 06:23:50 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:23:50 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:23:50 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:23:50.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:23:50 np0005475493 nova_compute[262220]: 2025-10-08 10:23:50.887 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:23:50 np0005475493 nova_compute[262220]: 2025-10-08 10:23:50.887 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:23:50 np0005475493 nova_compute[262220]: 2025-10-08 10:23:50.981 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:23:51 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:23:51 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:23:51 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:23:51.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:23:51 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1180: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  8 06:23:51 np0005475493 nova_compute[262220]: 2025-10-08 10:23:51.887 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:23:51 np0005475493 nova_compute[262220]: 2025-10-08 10:23:51.888 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  8 06:23:51 np0005475493 nova_compute[262220]: 2025-10-08 10:23:51.888 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  8 06:23:52 np0005475493 nova_compute[262220]: 2025-10-08 10:23:52.017 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  8 06:23:52 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:23:52 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:23:52 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:23:52.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:23:52 np0005475493 nova_compute[262220]: 2025-10-08 10:23:52.886 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:23:53 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:23:53 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:23:53 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:23:53.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:23:53 np0005475493 nova_compute[262220]: 2025-10-08 10:23:53.709 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:23:53 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1181: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  8 06:23:53 np0005475493 nova_compute[262220]: 2025-10-08 10:23:53.886 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:23:53 np0005475493 nova_compute[262220]: 2025-10-08 10:23:53.907 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  8 06:23:53 np0005475493 nova_compute[262220]: 2025-10-08 10:23:53.907 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  8 06:23:53 np0005475493 nova_compute[262220]: 2025-10-08 10:23:53.907 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  8 06:23:53 np0005475493 nova_compute[262220]: 2025-10-08 10:23:53.907 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  8 06:23:53 np0005475493 nova_compute[262220]: 2025-10-08 10:23:53.907 2 DEBUG oslo_concurrency.processutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  8 06:23:54 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:23:53 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  8 06:23:54 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:23:53 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  8 06:23:54 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:23:53 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 06:23:54 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:23:54 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  8 06:23:54 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:23:54 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:23:54 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:23:54 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:23:54.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:23:54 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct  8 06:23:54 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1942011370' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  8 06:23:54 np0005475493 nova_compute[262220]: 2025-10-08 10:23:54.488 2 DEBUG oslo_concurrency.processutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.580s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  8 06:23:54 np0005475493 nova_compute[262220]: 2025-10-08 10:23:54.659 2 WARNING nova.virt.libvirt.driver [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  8 06:23:54 np0005475493 nova_compute[262220]: 2025-10-08 10:23:54.660 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4373MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  8 06:23:54 np0005475493 nova_compute[262220]: 2025-10-08 10:23:54.661 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  8 06:23:54 np0005475493 nova_compute[262220]: 2025-10-08 10:23:54.661 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  8 06:23:54 np0005475493 nova_compute[262220]: 2025-10-08 10:23:54.728 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  8 06:23:54 np0005475493 nova_compute[262220]: 2025-10-08 10:23:54.729 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  8 06:23:54 np0005475493 nova_compute[262220]: 2025-10-08 10:23:54.795 2 DEBUG oslo_concurrency.processutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  8 06:23:55 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct  8 06:23:55 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3956368669' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  8 06:23:55 np0005475493 nova_compute[262220]: 2025-10-08 10:23:55.228 2 DEBUG oslo_concurrency.processutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  8 06:23:55 np0005475493 nova_compute[262220]: 2025-10-08 10:23:55.234 2 DEBUG nova.compute.provider_tree [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Inventory has not changed in ProviderTree for provider: 62e4b021-d3ae-43f9-883d-805e2c7d21a2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  8 06:23:55 np0005475493 nova_compute[262220]: 2025-10-08 10:23:55.252 2 DEBUG nova.scheduler.client.report [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Inventory has not changed for provider 62e4b021-d3ae-43f9-883d-805e2c7d21a2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  8 06:23:55 np0005475493 nova_compute[262220]: 2025-10-08 10:23:55.254 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  8 06:23:55 np0005475493 nova_compute[262220]: 2025-10-08 10:23:55.254 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.593s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  8 06:23:55 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:23:55 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:23:55 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:23:55.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:23:55 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:23:55] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Oct  8 06:23:55 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:23:55] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Oct  8 06:23:55 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1182: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  8 06:23:55 np0005475493 nova_compute[262220]: 2025-10-08 10:23:55.982 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:23:56 np0005475493 nova_compute[262220]: 2025-10-08 10:23:56.249 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:23:56 np0005475493 nova_compute[262220]: 2025-10-08 10:23:56.250 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:23:56 np0005475493 nova_compute[262220]: 2025-10-08 10:23:56.250 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:23:56 np0005475493 nova_compute[262220]: 2025-10-08 10:23:56.250 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  8 06:23:56 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:23:56 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:23:56 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:23:56.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:23:57 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:23:57.223Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct  8 06:23:57 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:23:57.224Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:23:57 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:23:57 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:23:57 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:23:57.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:23:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:23:57.422 163175 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  8 06:23:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:23:57.422 163175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  8 06:23:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:23:57.423 163175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  8 06:23:57 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1183: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  8 06:23:57 np0005475493 podman[294112]: 2025-10-08 10:23:57.977009963 +0000 UTC m=+0.141653959 container health_status 750e81e9592502f2fa35d047c8ae9bd75eb38ed415148db558454f5281397e7f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3)
Oct  8 06:23:58 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:23:58 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:23:58 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:23:58.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:23:58 np0005475493 nova_compute[262220]: 2025-10-08 10:23:58.713 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:23:58 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:23:58.870Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:23:59 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:23:58 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  8 06:23:59 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:23:58 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  8 06:23:59 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:23:58 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 06:23:59 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:23:59 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  8 06:23:59 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:23:59 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:23:59 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:23:59.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:23:59 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:23:59 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1184: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  8 06:24:00 np0005475493 systemd[1]: session-58.scope: Deactivated successfully.
Oct  8 06:24:00 np0005475493 systemd[1]: session-58.scope: Consumed 2min 54.816s CPU time, 750.2M memory peak, read 228.7M from disk, written 101.3M to disk.
Oct  8 06:24:00 np0005475493 systemd-logind[798]: Session 58 logged out. Waiting for processes to exit.
Oct  8 06:24:00 np0005475493 systemd-logind[798]: Removed session 58.
Oct  8 06:24:00 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:24:00 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:24:00 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:24:00.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:24:00 np0005475493 systemd-logind[798]: New session 59 of user zuul.
Oct  8 06:24:00 np0005475493 systemd[1]: Started Session 59 of User zuul.
Oct  8 06:24:00 np0005475493 systemd[1]: session-59.scope: Deactivated successfully.
Oct  8 06:24:00 np0005475493 systemd-logind[798]: Session 59 logged out. Waiting for processes to exit.
Oct  8 06:24:00 np0005475493 systemd-logind[798]: Removed session 59.
Oct  8 06:24:00 np0005475493 systemd-logind[798]: New session 60 of user zuul.
Oct  8 06:24:00 np0005475493 systemd[1]: Started Session 60 of User zuul.
Oct  8 06:24:00 np0005475493 nova_compute[262220]: 2025-10-08 10:24:00.984 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:24:00 np0005475493 podman[294175]: 2025-10-08 10:24:00.994094225 +0000 UTC m=+0.059801889 container health_status 96c17e36c3e57b043a764fc4d64e20610b8683e4881cc1857643eca7aeb37784 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Oct  8 06:24:01 np0005475493 podman[294174]: 2025-10-08 10:24:01.000821172 +0000 UTC m=+0.069825423 container health_status 1ece418ccdf19a8019e8be8e665c938dc3c333817a211973536e2416f095c311 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=multipathd, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct  8 06:24:01 np0005475493 systemd[1]: session-60.scope: Deactivated successfully.
Oct  8 06:24:01 np0005475493 systemd-logind[798]: Session 60 logged out. Waiting for processes to exit.
Oct  8 06:24:01 np0005475493 systemd-logind[798]: Removed session 60.
Oct  8 06:24:01 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:24:01 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:24:01 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:24:01.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:24:01 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1185: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  8 06:24:01 np0005475493 nova_compute[262220]: 2025-10-08 10:24:01.887 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:24:02 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:24:02 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:24:02 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:24:02.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:24:02 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  8 06:24:02 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  8 06:24:03 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:24:03 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 06:24:03 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:24:03.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 06:24:03 np0005475493 nova_compute[262220]: 2025-10-08 10:24:03.716 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:24:03 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1186: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  8 06:24:04 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:24:03 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  8 06:24:04 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:24:03 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  8 06:24:04 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:24:03 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 06:24:04 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:24:04 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  8 06:24:04 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:24:04 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:24:04 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:24:04 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:24:04.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:24:05 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:24:05 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:24:05 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:24:05.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:24:05 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:24:05] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Oct  8 06:24:05 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:24:05] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Oct  8 06:24:05 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1187: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  8 06:24:05 np0005475493 nova_compute[262220]: 2025-10-08 10:24:05.986 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:24:06 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:24:06 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:24:06 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:24:06.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:24:07 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:24:07.224Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct  8 06:24:07 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:24:07.224Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct  8 06:24:07 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:24:07.225Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:24:07 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:24:07 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:24:07 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:24:07.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:24:07 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1188: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  8 06:24:08 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:24:08 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:24:08 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:24:08.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:24:08 np0005475493 nova_compute[262220]: 2025-10-08 10:24:08.721 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:24:08 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:24:08.870Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:24:09 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:24:08 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  8 06:24:09 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:24:08 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  8 06:24:09 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:24:08 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 06:24:09 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:24:09 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  8 06:24:09 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:24:09 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:24:09 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:24:09.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:24:09 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:24:09 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1189: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  8 06:24:10 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:24:10 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:24:10 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:24:10.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:24:10 np0005475493 nova_compute[262220]: 2025-10-08 10:24:10.987 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:24:11 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:24:11 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:24:11 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:24:11.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:24:11 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1190: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  8 06:24:12 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:24:12 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:24:12 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:24:12.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:24:13 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:24:13 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:24:13 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:24:13.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:24:13 np0005475493 nova_compute[262220]: 2025-10-08 10:24:13.756 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:24:13 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1191: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  8 06:24:14 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:24:13 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  8 06:24:14 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:24:13 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  8 06:24:14 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:24:13 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 06:24:14 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:24:14 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  8 06:24:14 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:24:14 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:24:14 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:24:14 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:24:14.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:24:15 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:24:15 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:24:15 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:24:15.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:24:15 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:24:15] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Oct  8 06:24:15 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:24:15] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Oct  8 06:24:15 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1192: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  8 06:24:15 np0005475493 nova_compute[262220]: 2025-10-08 10:24:15.989 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:24:16 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:24:16 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:24:16 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:24:16.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:24:17 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:24:17.226Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:24:17 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:24:17 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:24:17 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:24:17.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:24:17 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1193: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  8 06:24:17 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  8 06:24:17 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  8 06:24:17 np0005475493 podman[294279]: 2025-10-08 10:24:17.889946334 +0000 UTC m=+0.054491305 container health_status 2ac58bf9d2c7421f24478941b0f23af12cc30298ac84467db16bf52ecb1157c3 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=iscsid, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  8 06:24:17 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 06:24:17 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 06:24:18 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 06:24:18 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 06:24:18 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 06:24:18 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 06:24:18 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:24:18 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:24:18 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:24:18.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:24:18 np0005475493 nova_compute[262220]: 2025-10-08 10:24:18.757 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:24:18 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:24:18.872Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:24:19 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:24:18 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  8 06:24:19 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:24:18 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  8 06:24:19 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:24:18 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 06:24:19 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:24:19 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  8 06:24:19 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:24:19 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:24:19 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:24:19.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:24:19 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:24:19 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1194: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  8 06:24:20 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:24:20 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 06:24:20 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:24:20.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 06:24:20 np0005475493 nova_compute[262220]: 2025-10-08 10:24:20.990 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:24:21 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:24:21 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:24:21 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:24:21.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:24:21 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1195: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  8 06:24:22 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:24:22 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:24:22 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:24:22.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:24:23 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:24:23 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:24:23 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:24:23.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:24:23 np0005475493 nova_compute[262220]: 2025-10-08 10:24:23.758 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:24:23 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1196: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  8 06:24:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:24:23 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  8 06:24:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:24:23 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  8 06:24:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:24:23 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 06:24:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:24:24 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  8 06:24:24 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:24:24 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:24:24 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:24:24 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:24:24.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:24:25 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct  8 06:24:25 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:24:25 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct  8 06:24:25 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:24:25 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  8 06:24:25 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:24:25 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:24:25 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:24:25.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:24:25 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:24:25 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  8 06:24:25 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:24:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:24:25] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Oct  8 06:24:25 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:24:25] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Oct  8 06:24:25 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1197: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  8 06:24:25 np0005475493 nova_compute[262220]: 2025-10-08 10:24:25.991 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:24:26 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  8 06:24:26 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  8 06:24:26 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct  8 06:24:26 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  8 06:24:26 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1198: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct  8 06:24:26 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct  8 06:24:26 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:24:26 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct  8 06:24:26 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:24:26 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct  8 06:24:26 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  8 06:24:26 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct  8 06:24:26 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  8 06:24:26 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  8 06:24:26 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  8 06:24:26 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:24:26 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:24:26 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:24:26 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:24:26 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  8 06:24:26 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:24:26 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:24:26 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  8 06:24:26 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:24:26 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:24:26 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:24:26.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:24:26 np0005475493 podman[294555]: 2025-10-08 10:24:26.6963711 +0000 UTC m=+0.036789933 container create 650d8624a47bdff3e74105202e5af978437e690297f74e523d864a47500b8d07 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_gould, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Oct  8 06:24:26 np0005475493 systemd[1]: Started libpod-conmon-650d8624a47bdff3e74105202e5af978437e690297f74e523d864a47500b8d07.scope.
Oct  8 06:24:26 np0005475493 systemd[1]: Started libcrun container.
Oct  8 06:24:26 np0005475493 podman[294555]: 2025-10-08 10:24:26.679583686 +0000 UTC m=+0.020002539 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 06:24:26 np0005475493 podman[294555]: 2025-10-08 10:24:26.779944657 +0000 UTC m=+0.120363510 container init 650d8624a47bdff3e74105202e5af978437e690297f74e523d864a47500b8d07 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_gould, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  8 06:24:26 np0005475493 podman[294555]: 2025-10-08 10:24:26.787508122 +0000 UTC m=+0.127926955 container start 650d8624a47bdff3e74105202e5af978437e690297f74e523d864a47500b8d07 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_gould, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct  8 06:24:26 np0005475493 podman[294555]: 2025-10-08 10:24:26.791201022 +0000 UTC m=+0.131619875 container attach 650d8624a47bdff3e74105202e5af978437e690297f74e523d864a47500b8d07 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_gould, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  8 06:24:26 np0005475493 eloquent_gould[294572]: 167 167
Oct  8 06:24:26 np0005475493 systemd[1]: libpod-650d8624a47bdff3e74105202e5af978437e690297f74e523d864a47500b8d07.scope: Deactivated successfully.
Oct  8 06:24:26 np0005475493 podman[294555]: 2025-10-08 10:24:26.79300567 +0000 UTC m=+0.133424503 container died 650d8624a47bdff3e74105202e5af978437e690297f74e523d864a47500b8d07 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_gould, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  8 06:24:26 np0005475493 systemd[1]: var-lib-containers-storage-overlay-f4ef0645135794cbe5475b7d69c8023008419860790f87a6d4ffae4d2051ea2b-merged.mount: Deactivated successfully.
Oct  8 06:24:26 np0005475493 podman[294555]: 2025-10-08 10:24:26.831229239 +0000 UTC m=+0.171648072 container remove 650d8624a47bdff3e74105202e5af978437e690297f74e523d864a47500b8d07 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_gould, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  8 06:24:26 np0005475493 systemd[1]: libpod-conmon-650d8624a47bdff3e74105202e5af978437e690297f74e523d864a47500b8d07.scope: Deactivated successfully.
Oct  8 06:24:26 np0005475493 podman[294596]: 2025-10-08 10:24:26.985695792 +0000 UTC m=+0.035433368 container create 95db51a5615d99f0cbbcf83d4537901e3b2063934834c9f145f12a189d5349e6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_bhabha, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  8 06:24:27 np0005475493 systemd[1]: Started libpod-conmon-95db51a5615d99f0cbbcf83d4537901e3b2063934834c9f145f12a189d5349e6.scope.
Oct  8 06:24:27 np0005475493 systemd[1]: Started libcrun container.
Oct  8 06:24:27 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28a07cfa0ba0d8f66d6b87d0775e31d5f339e859ba057e142032162dbafb5809/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  8 06:24:27 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28a07cfa0ba0d8f66d6b87d0775e31d5f339e859ba057e142032162dbafb5809/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 06:24:27 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28a07cfa0ba0d8f66d6b87d0775e31d5f339e859ba057e142032162dbafb5809/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 06:24:27 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28a07cfa0ba0d8f66d6b87d0775e31d5f339e859ba057e142032162dbafb5809/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  8 06:24:27 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28a07cfa0ba0d8f66d6b87d0775e31d5f339e859ba057e142032162dbafb5809/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  8 06:24:27 np0005475493 podman[294596]: 2025-10-08 10:24:27.062867293 +0000 UTC m=+0.112604879 container init 95db51a5615d99f0cbbcf83d4537901e3b2063934834c9f145f12a189d5349e6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_bhabha, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  8 06:24:27 np0005475493 podman[294596]: 2025-10-08 10:24:26.970696097 +0000 UTC m=+0.020433693 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 06:24:27 np0005475493 podman[294596]: 2025-10-08 10:24:27.06923954 +0000 UTC m=+0.118977116 container start 95db51a5615d99f0cbbcf83d4537901e3b2063934834c9f145f12a189d5349e6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_bhabha, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  8 06:24:27 np0005475493 podman[294596]: 2025-10-08 10:24:27.073125165 +0000 UTC m=+0.122862761 container attach 95db51a5615d99f0cbbcf83d4537901e3b2063934834c9f145f12a189d5349e6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_bhabha, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  8 06:24:27 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:24:27.227Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:24:27 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:24:27 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:24:27 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:24:27.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:24:27 np0005475493 strange_bhabha[294613]: --> passed data devices: 0 physical, 1 LVM
Oct  8 06:24:27 np0005475493 strange_bhabha[294613]: --> All data devices are unavailable
Oct  8 06:24:27 np0005475493 systemd[1]: libpod-95db51a5615d99f0cbbcf83d4537901e3b2063934834c9f145f12a189d5349e6.scope: Deactivated successfully.
Oct  8 06:24:27 np0005475493 podman[294596]: 2025-10-08 10:24:27.432596751 +0000 UTC m=+0.482334347 container died 95db51a5615d99f0cbbcf83d4537901e3b2063934834c9f145f12a189d5349e6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_bhabha, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  8 06:24:27 np0005475493 systemd[1]: var-lib-containers-storage-overlay-28a07cfa0ba0d8f66d6b87d0775e31d5f339e859ba057e142032162dbafb5809-merged.mount: Deactivated successfully.
Oct  8 06:24:27 np0005475493 podman[294596]: 2025-10-08 10:24:27.494963111 +0000 UTC m=+0.544700717 container remove 95db51a5615d99f0cbbcf83d4537901e3b2063934834c9f145f12a189d5349e6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_bhabha, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Oct  8 06:24:27 np0005475493 systemd[1]: libpod-conmon-95db51a5615d99f0cbbcf83d4537901e3b2063934834c9f145f12a189d5349e6.scope: Deactivated successfully.
Oct  8 06:24:28 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1199: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct  8 06:24:28 np0005475493 podman[294736]: 2025-10-08 10:24:28.139249533 +0000 UTC m=+0.040246274 container create 4372fa772eab3456a76ef1cda1ceed78fe82d75f397496cb7d1b08512b30d75a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_yalow, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  8 06:24:28 np0005475493 systemd[1]: Started libpod-conmon-4372fa772eab3456a76ef1cda1ceed78fe82d75f397496cb7d1b08512b30d75a.scope.
Oct  8 06:24:28 np0005475493 systemd[1]: Started libcrun container.
Oct  8 06:24:28 np0005475493 podman[294736]: 2025-10-08 10:24:28.207741392 +0000 UTC m=+0.108738133 container init 4372fa772eab3456a76ef1cda1ceed78fe82d75f397496cb7d1b08512b30d75a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_yalow, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct  8 06:24:28 np0005475493 podman[294736]: 2025-10-08 10:24:28.124581089 +0000 UTC m=+0.025577860 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 06:24:28 np0005475493 podman[294736]: 2025-10-08 10:24:28.219760722 +0000 UTC m=+0.120757463 container start 4372fa772eab3456a76ef1cda1ceed78fe82d75f397496cb7d1b08512b30d75a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_yalow, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct  8 06:24:28 np0005475493 podman[294736]: 2025-10-08 10:24:28.223181452 +0000 UTC m=+0.124178223 container attach 4372fa772eab3456a76ef1cda1ceed78fe82d75f397496cb7d1b08512b30d75a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_yalow, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  8 06:24:28 np0005475493 condescending_yalow[294754]: 167 167
Oct  8 06:24:28 np0005475493 systemd[1]: libpod-4372fa772eab3456a76ef1cda1ceed78fe82d75f397496cb7d1b08512b30d75a.scope: Deactivated successfully.
Oct  8 06:24:28 np0005475493 conmon[294754]: conmon 4372fa772eab3456a76e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4372fa772eab3456a76ef1cda1ceed78fe82d75f397496cb7d1b08512b30d75a.scope/container/memory.events
Oct  8 06:24:28 np0005475493 podman[294736]: 2025-10-08 10:24:28.226969705 +0000 UTC m=+0.127966446 container died 4372fa772eab3456a76ef1cda1ceed78fe82d75f397496cb7d1b08512b30d75a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_yalow, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct  8 06:24:28 np0005475493 podman[294751]: 2025-10-08 10:24:28.257184254 +0000 UTC m=+0.087214137 container health_status 750e81e9592502f2fa35d047c8ae9bd75eb38ed415148db558454f5281397e7f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, org.label-schema.license=GPLv2)
Oct  8 06:24:28 np0005475493 systemd[1]: var-lib-containers-storage-overlay-c2923532f89b5309ca7222b7e5615e36ffdcc370172c21361928cd42226676bd-merged.mount: Deactivated successfully.
Oct  8 06:24:28 np0005475493 podman[294736]: 2025-10-08 10:24:28.270265378 +0000 UTC m=+0.171262119 container remove 4372fa772eab3456a76ef1cda1ceed78fe82d75f397496cb7d1b08512b30d75a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_yalow, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct  8 06:24:28 np0005475493 systemd[1]: libpod-conmon-4372fa772eab3456a76ef1cda1ceed78fe82d75f397496cb7d1b08512b30d75a.scope: Deactivated successfully.
Oct  8 06:24:28 np0005475493 podman[294799]: 2025-10-08 10:24:28.440451211 +0000 UTC m=+0.045063310 container create 65be9d359a348a3a77d8d8cd348c40b8ac2d3ce7ba866668d963aa2792880d51 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_borg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2)
Oct  8 06:24:28 np0005475493 systemd[1]: Started libpod-conmon-65be9d359a348a3a77d8d8cd348c40b8ac2d3ce7ba866668d963aa2792880d51.scope.
Oct  8 06:24:28 np0005475493 systemd[1]: Started libcrun container.
Oct  8 06:24:28 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7cc2a2a5e6d18a2fd8f250d86bd8a1ec6f92d8f750f4353925f4d31e38b6f488/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  8 06:24:28 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:24:28 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:24:28 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:24:28.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:24:28 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7cc2a2a5e6d18a2fd8f250d86bd8a1ec6f92d8f750f4353925f4d31e38b6f488/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 06:24:28 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7cc2a2a5e6d18a2fd8f250d86bd8a1ec6f92d8f750f4353925f4d31e38b6f488/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 06:24:28 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7cc2a2a5e6d18a2fd8f250d86bd8a1ec6f92d8f750f4353925f4d31e38b6f488/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  8 06:24:28 np0005475493 podman[294799]: 2025-10-08 10:24:28.515455671 +0000 UTC m=+0.120067790 container init 65be9d359a348a3a77d8d8cd348c40b8ac2d3ce7ba866668d963aa2792880d51 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_borg, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Oct  8 06:24:28 np0005475493 podman[294799]: 2025-10-08 10:24:28.424703691 +0000 UTC m=+0.029315820 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 06:24:28 np0005475493 podman[294799]: 2025-10-08 10:24:28.523248383 +0000 UTC m=+0.127860482 container start 65be9d359a348a3a77d8d8cd348c40b8ac2d3ce7ba866668d963aa2792880d51 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_borg, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Oct  8 06:24:28 np0005475493 podman[294799]: 2025-10-08 10:24:28.527382348 +0000 UTC m=+0.131994457 container attach 65be9d359a348a3a77d8d8cd348c40b8ac2d3ce7ba866668d963aa2792880d51 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_borg, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Oct  8 06:24:28 np0005475493 laughing_borg[294815]: {
Oct  8 06:24:28 np0005475493 laughing_borg[294815]:    "1": [
Oct  8 06:24:28 np0005475493 laughing_borg[294815]:        {
Oct  8 06:24:28 np0005475493 laughing_borg[294815]:            "devices": [
Oct  8 06:24:28 np0005475493 laughing_borg[294815]:                "/dev/loop3"
Oct  8 06:24:28 np0005475493 laughing_borg[294815]:            ],
Oct  8 06:24:28 np0005475493 laughing_borg[294815]:            "lv_name": "ceph_lv0",
Oct  8 06:24:28 np0005475493 laughing_borg[294815]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  8 06:24:28 np0005475493 laughing_borg[294815]:            "lv_size": "21470642176",
Oct  8 06:24:28 np0005475493 laughing_borg[294815]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=787292cc-8154-50c4-9e00-e9be3e817149,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=85fe3e7b-5e0f-4a19-934c-310215b2e933,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct  8 06:24:28 np0005475493 laughing_borg[294815]:            "lv_uuid": "znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj",
Oct  8 06:24:28 np0005475493 laughing_borg[294815]:            "name": "ceph_lv0",
Oct  8 06:24:28 np0005475493 laughing_borg[294815]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  8 06:24:28 np0005475493 laughing_borg[294815]:            "tags": {
Oct  8 06:24:28 np0005475493 laughing_borg[294815]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  8 06:24:28 np0005475493 laughing_borg[294815]:                "ceph.block_uuid": "znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj",
Oct  8 06:24:28 np0005475493 laughing_borg[294815]:                "ceph.cephx_lockbox_secret": "",
Oct  8 06:24:28 np0005475493 laughing_borg[294815]:                "ceph.cluster_fsid": "787292cc-8154-50c4-9e00-e9be3e817149",
Oct  8 06:24:28 np0005475493 laughing_borg[294815]:                "ceph.cluster_name": "ceph",
Oct  8 06:24:28 np0005475493 laughing_borg[294815]:                "ceph.crush_device_class": "",
Oct  8 06:24:28 np0005475493 laughing_borg[294815]:                "ceph.encrypted": "0",
Oct  8 06:24:28 np0005475493 laughing_borg[294815]:                "ceph.osd_fsid": "85fe3e7b-5e0f-4a19-934c-310215b2e933",
Oct  8 06:24:28 np0005475493 laughing_borg[294815]:                "ceph.osd_id": "1",
Oct  8 06:24:28 np0005475493 laughing_borg[294815]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  8 06:24:28 np0005475493 laughing_borg[294815]:                "ceph.type": "block",
Oct  8 06:24:28 np0005475493 laughing_borg[294815]:                "ceph.vdo": "0",
Oct  8 06:24:28 np0005475493 laughing_borg[294815]:                "ceph.with_tpm": "0"
Oct  8 06:24:28 np0005475493 laughing_borg[294815]:            },
Oct  8 06:24:28 np0005475493 laughing_borg[294815]:            "type": "block",
Oct  8 06:24:28 np0005475493 laughing_borg[294815]:            "vg_name": "ceph_vg0"
Oct  8 06:24:28 np0005475493 laughing_borg[294815]:        }
Oct  8 06:24:28 np0005475493 laughing_borg[294815]:    ]
Oct  8 06:24:28 np0005475493 laughing_borg[294815]: }
Oct  8 06:24:28 np0005475493 nova_compute[262220]: 2025-10-08 10:24:28.761 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:24:28 np0005475493 systemd[1]: libpod-65be9d359a348a3a77d8d8cd348c40b8ac2d3ce7ba866668d963aa2792880d51.scope: Deactivated successfully.
Oct  8 06:24:28 np0005475493 podman[294799]: 2025-10-08 10:24:28.784151826 +0000 UTC m=+0.388763935 container died 65be9d359a348a3a77d8d8cd348c40b8ac2d3ce7ba866668d963aa2792880d51 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_borg, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True)
Oct  8 06:24:28 np0005475493 systemd[1]: var-lib-containers-storage-overlay-7cc2a2a5e6d18a2fd8f250d86bd8a1ec6f92d8f750f4353925f4d31e38b6f488-merged.mount: Deactivated successfully.
Oct  8 06:24:28 np0005475493 podman[294799]: 2025-10-08 10:24:28.824215544 +0000 UTC m=+0.428827643 container remove 65be9d359a348a3a77d8d8cd348c40b8ac2d3ce7ba866668d963aa2792880d51 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_borg, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct  8 06:24:28 np0005475493 systemd[1]: libpod-conmon-65be9d359a348a3a77d8d8cd348c40b8ac2d3ce7ba866668d963aa2792880d51.scope: Deactivated successfully.
Oct  8 06:24:28 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:24:28.872Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct  8 06:24:28 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:24:28.873Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct  8 06:24:28 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:24:28.873Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct  8 06:24:29 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:24:29 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  8 06:24:29 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:24:29 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  8 06:24:29 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:24:29 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 06:24:29 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:24:29 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  8 06:24:29 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:24:29 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:24:29 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:24:29.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:24:29 np0005475493 podman[294929]: 2025-10-08 10:24:29.393005211 +0000 UTC m=+0.045102512 container create a557e2c0283c053a536e0b7386e1edbc4e783d45683c9806a4988e8a1d6ec6ca (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_villani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct  8 06:24:29 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:24:29 np0005475493 systemd[1]: Started libpod-conmon-a557e2c0283c053a536e0b7386e1edbc4e783d45683c9806a4988e8a1d6ec6ca.scope.
Oct  8 06:24:29 np0005475493 systemd[1]: Started libcrun container.
Oct  8 06:24:29 np0005475493 podman[294929]: 2025-10-08 10:24:29.373825079 +0000 UTC m=+0.025922420 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 06:24:29 np0005475493 podman[294929]: 2025-10-08 10:24:29.469659544 +0000 UTC m=+0.121756885 container init a557e2c0283c053a536e0b7386e1edbc4e783d45683c9806a4988e8a1d6ec6ca (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_villani, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct  8 06:24:29 np0005475493 podman[294929]: 2025-10-08 10:24:29.477858489 +0000 UTC m=+0.129955780 container start a557e2c0283c053a536e0b7386e1edbc4e783d45683c9806a4988e8a1d6ec6ca (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_villani, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  8 06:24:29 np0005475493 podman[294929]: 2025-10-08 10:24:29.481667913 +0000 UTC m=+0.133765234 container attach a557e2c0283c053a536e0b7386e1edbc4e783d45683c9806a4988e8a1d6ec6ca (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_villani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Oct  8 06:24:29 np0005475493 friendly_villani[294946]: 167 167
Oct  8 06:24:29 np0005475493 systemd[1]: libpod-a557e2c0283c053a536e0b7386e1edbc4e783d45683c9806a4988e8a1d6ec6ca.scope: Deactivated successfully.
Oct  8 06:24:29 np0005475493 podman[294929]: 2025-10-08 10:24:29.484892597 +0000 UTC m=+0.136989908 container died a557e2c0283c053a536e0b7386e1edbc4e783d45683c9806a4988e8a1d6ec6ca (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_villani, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Oct  8 06:24:29 np0005475493 systemd[1]: var-lib-containers-storage-overlay-b32bf2d97f46bfbb46900474593573c9bfcdda953a2f063bc78a5efc6f6fe4f8-merged.mount: Deactivated successfully.
Oct  8 06:24:29 np0005475493 podman[294929]: 2025-10-08 10:24:29.522228256 +0000 UTC m=+0.174325547 container remove a557e2c0283c053a536e0b7386e1edbc4e783d45683c9806a4988e8a1d6ec6ca (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_villani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Oct  8 06:24:29 np0005475493 systemd[1]: libpod-conmon-a557e2c0283c053a536e0b7386e1edbc4e783d45683c9806a4988e8a1d6ec6ca.scope: Deactivated successfully.
Oct  8 06:24:29 np0005475493 podman[294970]: 2025-10-08 10:24:29.707202879 +0000 UTC m=+0.041244357 container create 2ab87d596348511e71d89191a9ff8e90f10e32562b01a4dffb0999cec380d50b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_jackson, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct  8 06:24:29 np0005475493 systemd[1]: Started libpod-conmon-2ab87d596348511e71d89191a9ff8e90f10e32562b01a4dffb0999cec380d50b.scope.
Oct  8 06:24:29 np0005475493 systemd[1]: Started libcrun container.
Oct  8 06:24:29 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6f79ff520b54d250b3ec93f6e39f7b5ee4998dc4172fe526d19959dccfdec95/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  8 06:24:29 np0005475493 podman[294970]: 2025-10-08 10:24:29.688879896 +0000 UTC m=+0.022921384 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 06:24:29 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6f79ff520b54d250b3ec93f6e39f7b5ee4998dc4172fe526d19959dccfdec95/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 06:24:29 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6f79ff520b54d250b3ec93f6e39f7b5ee4998dc4172fe526d19959dccfdec95/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 06:24:29 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6f79ff520b54d250b3ec93f6e39f7b5ee4998dc4172fe526d19959dccfdec95/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  8 06:24:29 np0005475493 podman[294970]: 2025-10-08 10:24:29.798463686 +0000 UTC m=+0.132505144 container init 2ab87d596348511e71d89191a9ff8e90f10e32562b01a4dffb0999cec380d50b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_jackson, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Oct  8 06:24:29 np0005475493 podman[294970]: 2025-10-08 10:24:29.804178681 +0000 UTC m=+0.138220139 container start 2ab87d596348511e71d89191a9ff8e90f10e32562b01a4dffb0999cec380d50b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_jackson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct  8 06:24:29 np0005475493 podman[294970]: 2025-10-08 10:24:29.807121076 +0000 UTC m=+0.141162554 container attach 2ab87d596348511e71d89191a9ff8e90f10e32562b01a4dffb0999cec380d50b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_jackson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Oct  8 06:24:30 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1200: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct  8 06:24:30 np0005475493 lvm[295061]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct  8 06:24:30 np0005475493 lvm[295061]: VG ceph_vg0 finished
Oct  8 06:24:30 np0005475493 admiring_jackson[294986]: {}
Oct  8 06:24:30 np0005475493 systemd[1]: libpod-2ab87d596348511e71d89191a9ff8e90f10e32562b01a4dffb0999cec380d50b.scope: Deactivated successfully.
Oct  8 06:24:30 np0005475493 systemd[1]: libpod-2ab87d596348511e71d89191a9ff8e90f10e32562b01a4dffb0999cec380d50b.scope: Consumed 1.066s CPU time.
Oct  8 06:24:30 np0005475493 podman[294970]: 2025-10-08 10:24:30.485495881 +0000 UTC m=+0.819537339 container died 2ab87d596348511e71d89191a9ff8e90f10e32562b01a4dffb0999cec380d50b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_jackson, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  8 06:24:30 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:24:30 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:24:30 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:24:30.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:24:30 np0005475493 systemd[1]: var-lib-containers-storage-overlay-a6f79ff520b54d250b3ec93f6e39f7b5ee4998dc4172fe526d19959dccfdec95-merged.mount: Deactivated successfully.
Oct  8 06:24:30 np0005475493 podman[294970]: 2025-10-08 10:24:30.535554042 +0000 UTC m=+0.869595500 container remove 2ab87d596348511e71d89191a9ff8e90f10e32562b01a4dffb0999cec380d50b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_jackson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  8 06:24:30 np0005475493 systemd[1]: libpod-conmon-2ab87d596348511e71d89191a9ff8e90f10e32562b01a4dffb0999cec380d50b.scope: Deactivated successfully.
Oct  8 06:24:30 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  8 06:24:30 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:24:30 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  8 06:24:30 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:24:30 np0005475493 nova_compute[262220]: 2025-10-08 10:24:30.993 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:24:31 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:24:31 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:24:31 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:24:31.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:24:31 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:24:31 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:24:31 np0005475493 ceph-osd[81751]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  8 06:24:31 np0005475493 ceph-osd[81751]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 2400.1 total, 600.0 interval#012Cumulative writes: 12K writes, 46K keys, 12K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s#012Cumulative WAL: 12K writes, 3376 syncs, 3.74 writes per sync, written: 0.03 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1713 writes, 5631 keys, 1713 commit groups, 1.0 writes per commit group, ingest: 6.95 MB, 0.01 MB/s#012Interval WAL: 1713 writes, 710 syncs, 2.41 writes per sync, written: 0.01 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct  8 06:24:31 np0005475493 podman[295131]: 2025-10-08 10:24:31.930902267 +0000 UTC m=+0.076739167 container health_status 96c17e36c3e57b043a764fc4d64e20610b8683e4881cc1857643eca7aeb37784 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  8 06:24:31 np0005475493 podman[295130]: 2025-10-08 10:24:31.943916058 +0000 UTC m=+0.089830461 container health_status 1ece418ccdf19a8019e8be8e665c938dc3c333817a211973536e2416f095c311 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  8 06:24:32 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1201: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct  8 06:24:32 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:24:32 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:24:32 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:24:32.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:24:32 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  8 06:24:32 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  8 06:24:33 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:24:33 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:24:33 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:24:33.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:24:33 np0005475493 nova_compute[262220]: 2025-10-08 10:24:33.762 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:24:34 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:24:33 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  8 06:24:34 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:24:33 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  8 06:24:34 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:24:33 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 06:24:34 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:24:34 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  8 06:24:34 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1202: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct  8 06:24:34 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:24:34 np0005475493 ceph-mon[73572]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #69. Immutable memtables: 0.
Oct  8 06:24:34 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:24:34.419331) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  8 06:24:34 np0005475493 ceph-mon[73572]: rocksdb: [db/flush_job.cc:856] [default] [JOB 37] Flushing memtable with next log file: 69
Oct  8 06:24:34 np0005475493 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759919074419376, "job": 37, "event": "flush_started", "num_memtables": 1, "num_entries": 2632, "num_deletes": 505, "total_data_size": 4220989, "memory_usage": 4324656, "flush_reason": "Manual Compaction"}
Oct  8 06:24:34 np0005475493 ceph-mon[73572]: rocksdb: [db/flush_job.cc:885] [default] [JOB 37] Level-0 flush table #70: started
Oct  8 06:24:34 np0005475493 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759919074440673, "cf_name": "default", "job": 37, "event": "table_file_creation", "file_number": 70, "file_size": 4086803, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 31664, "largest_seqno": 34295, "table_properties": {"data_size": 4074665, "index_size": 7160, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3909, "raw_key_size": 31602, "raw_average_key_size": 20, "raw_value_size": 4046967, "raw_average_value_size": 2627, "num_data_blocks": 306, "num_entries": 1540, "num_filter_entries": 1540, "num_deletions": 505, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759918885, "oldest_key_time": 1759918885, "file_creation_time": 1759919074, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fe81d9b-468a-4413-adf1-4e4bd83383d4", "db_session_id": "KN4HYS7VUCE6V85JIQOU", "orig_file_number": 70, "seqno_to_time_mapping": "N/A"}}
Oct  8 06:24:34 np0005475493 ceph-mon[73572]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 37] Flush lasted 21379 microseconds, and 7085 cpu microseconds.
Oct  8 06:24:34 np0005475493 ceph-mon[73572]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  8 06:24:34 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:24:34.440710) [db/flush_job.cc:967] [default] [JOB 37] Level-0 flush table #70: 4086803 bytes OK
Oct  8 06:24:34 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:24:34.440727) [db/memtable_list.cc:519] [default] Level-0 commit table #70 started
Oct  8 06:24:34 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:24:34.442249) [db/memtable_list.cc:722] [default] Level-0 commit table #70: memtable #1 done
Oct  8 06:24:34 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:24:34.442260) EVENT_LOG_v1 {"time_micros": 1759919074442257, "job": 37, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  8 06:24:34 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:24:34.442281) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  8 06:24:34 np0005475493 ceph-mon[73572]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 37] Try to delete WAL files size 4208252, prev total WAL file size 4208252, number of live WAL files 2.
Oct  8 06:24:34 np0005475493 ceph-mon[73572]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000066.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  8 06:24:34 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:24:34.443169) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B7600323630' seq:72057594037927935, type:22 .. '6B7600353131' seq:0, type:0; will stop at (end)
Oct  8 06:24:34 np0005475493 ceph-mon[73572]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 38] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  8 06:24:34 np0005475493 ceph-mon[73572]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 37 Base level 0, inputs: [70(3991KB)], [68(13MB)]
Oct  8 06:24:34 np0005475493 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759919074443207, "job": 38, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [70], "files_L6": [68], "score": -1, "input_data_size": 18078954, "oldest_snapshot_seqno": -1}
Oct  8 06:24:34 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:24:34 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:24:34 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:24:34.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:24:34 np0005475493 ceph-mon[73572]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 38] Generated table #71: 6722 keys, 16562887 bytes, temperature: kUnknown
Oct  8 06:24:34 np0005475493 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759919074529496, "cf_name": "default", "job": 38, "event": "table_file_creation", "file_number": 71, "file_size": 16562887, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 16515952, "index_size": 29031, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16837, "raw_key_size": 174881, "raw_average_key_size": 26, "raw_value_size": 16393117, "raw_average_value_size": 2438, "num_data_blocks": 1157, "num_entries": 6722, "num_filter_entries": 6722, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759916581, "oldest_key_time": 0, "file_creation_time": 1759919074, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fe81d9b-468a-4413-adf1-4e4bd83383d4", "db_session_id": "KN4HYS7VUCE6V85JIQOU", "orig_file_number": 71, "seqno_to_time_mapping": "N/A"}}
Oct  8 06:24:34 np0005475493 ceph-mon[73572]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  8 06:24:34 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:24:34.529699) [db/compaction/compaction_job.cc:1663] [default] [JOB 38] Compacted 1@0 + 1@6 files to L6 => 16562887 bytes
Oct  8 06:24:34 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:24:34.530703) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 209.4 rd, 191.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.9, 13.3 +0.0 blob) out(15.8 +0.0 blob), read-write-amplify(8.5) write-amplify(4.1) OK, records in: 7749, records dropped: 1027 output_compression: NoCompression
Oct  8 06:24:34 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:24:34.530719) EVENT_LOG_v1 {"time_micros": 1759919074530712, "job": 38, "event": "compaction_finished", "compaction_time_micros": 86346, "compaction_time_cpu_micros": 43046, "output_level": 6, "num_output_files": 1, "total_output_size": 16562887, "num_input_records": 7749, "num_output_records": 6722, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  8 06:24:34 np0005475493 ceph-mon[73572]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000070.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  8 06:24:34 np0005475493 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759919074531426, "job": 38, "event": "table_file_deletion", "file_number": 70}
Oct  8 06:24:34 np0005475493 ceph-mon[73572]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000068.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  8 06:24:34 np0005475493 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759919074533791, "job": 38, "event": "table_file_deletion", "file_number": 68}
Oct  8 06:24:34 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:24:34.443098) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  8 06:24:34 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:24:34.533865) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  8 06:24:34 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:24:34.533870) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  8 06:24:34 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:24:34.533872) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  8 06:24:34 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:24:34.533873) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  8 06:24:34 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:24:34.533875) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  8 06:24:35 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:24:35 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:24:35 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:24:35.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:24:35 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:24:35] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Oct  8 06:24:35 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:24:35] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Oct  8 06:24:35 np0005475493 nova_compute[262220]: 2025-10-08 10:24:35.995 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:24:36 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1203: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct  8 06:24:36 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:24:36 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:24:36 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:24:36.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:24:37 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:24:37.228Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:24:37 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:24:37 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:24:37 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:24:37.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:24:38 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1204: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  8 06:24:38 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:24:38 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:24:38 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:24:38.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:24:38 np0005475493 nova_compute[262220]: 2025-10-08 10:24:38.763 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:24:38 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:24:38.874Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct  8 06:24:38 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:24:38.874Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:24:39 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:24:38 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  8 06:24:39 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:24:38 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  8 06:24:39 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:24:38 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 06:24:39 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:24:39 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  8 06:24:39 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:24:39 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:24:39 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:24:39.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:24:39 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:24:40 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1205: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  8 06:24:40 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:24:40 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:24:40 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:24:40.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:24:40 np0005475493 nova_compute[262220]: 2025-10-08 10:24:40.996 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:24:41 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:24:41 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:24:41 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:24:41.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:24:42 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1206: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  8 06:24:42 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:24:42 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:24:42 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:24:42.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:24:43 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:24:43 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:24:43 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:24:43.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:24:43 np0005475493 nova_compute[262220]: 2025-10-08 10:24:43.766 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:24:44 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:24:43 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  8 06:24:44 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:24:43 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  8 06:24:44 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:24:43 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 06:24:44 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:24:44 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  8 06:24:44 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1207: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  8 06:24:44 np0005475493 ceph-mon[73572]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #72. Immutable memtables: 0.
Oct  8 06:24:44 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:24:44.256326) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  8 06:24:44 np0005475493 ceph-mon[73572]: rocksdb: [db/flush_job.cc:856] [default] [JOB 39] Flushing memtable with next log file: 72
Oct  8 06:24:44 np0005475493 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759919084256369, "job": 39, "event": "flush_started", "num_memtables": 1, "num_entries": 335, "num_deletes": 251, "total_data_size": 218124, "memory_usage": 225560, "flush_reason": "Manual Compaction"}
Oct  8 06:24:44 np0005475493 ceph-mon[73572]: rocksdb: [db/flush_job.cc:885] [default] [JOB 39] Level-0 flush table #73: started
Oct  8 06:24:44 np0005475493 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759919084259455, "cf_name": "default", "job": 39, "event": "table_file_creation", "file_number": 73, "file_size": 215844, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 34296, "largest_seqno": 34630, "table_properties": {"data_size": 213690, "index_size": 318, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 773, "raw_key_size": 5302, "raw_average_key_size": 18, "raw_value_size": 209540, "raw_average_value_size": 730, "num_data_blocks": 14, "num_entries": 287, "num_filter_entries": 287, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759919074, "oldest_key_time": 1759919074, "file_creation_time": 1759919084, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fe81d9b-468a-4413-adf1-4e4bd83383d4", "db_session_id": "KN4HYS7VUCE6V85JIQOU", "orig_file_number": 73, "seqno_to_time_mapping": "N/A"}}
Oct  8 06:24:44 np0005475493 ceph-mon[73572]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 39] Flush lasted 3162 microseconds, and 1031 cpu microseconds.
Oct  8 06:24:44 np0005475493 ceph-mon[73572]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  8 06:24:44 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:24:44.259493) [db/flush_job.cc:967] [default] [JOB 39] Level-0 flush table #73: 215844 bytes OK
Oct  8 06:24:44 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:24:44.259510) [db/memtable_list.cc:519] [default] Level-0 commit table #73 started
Oct  8 06:24:44 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:24:44.261541) [db/memtable_list.cc:722] [default] Level-0 commit table #73: memtable #1 done
Oct  8 06:24:44 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:24:44.261557) EVENT_LOG_v1 {"time_micros": 1759919084261552, "job": 39, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  8 06:24:44 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:24:44.261574) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  8 06:24:44 np0005475493 ceph-mon[73572]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 39] Try to delete WAL files size 215840, prev total WAL file size 215840, number of live WAL files 2.
Oct  8 06:24:44 np0005475493 ceph-mon[73572]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000069.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  8 06:24:44 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:24:44.261925) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032373631' seq:72057594037927935, type:22 .. '7061786F730033303133' seq:0, type:0; will stop at (end)
Oct  8 06:24:44 np0005475493 ceph-mon[73572]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 40] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  8 06:24:44 np0005475493 ceph-mon[73572]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 39 Base level 0, inputs: [73(210KB)], [71(15MB)]
Oct  8 06:24:44 np0005475493 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759919084261956, "job": 40, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [73], "files_L6": [71], "score": -1, "input_data_size": 16778731, "oldest_snapshot_seqno": -1}
Oct  8 06:24:44 np0005475493 ceph-mon[73572]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 40] Generated table #74: 6499 keys, 14677691 bytes, temperature: kUnknown
Oct  8 06:24:44 np0005475493 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759919084330581, "cf_name": "default", "job": 40, "event": "table_file_creation", "file_number": 74, "file_size": 14677691, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14633718, "index_size": 26647, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16261, "raw_key_size": 170929, "raw_average_key_size": 26, "raw_value_size": 14516052, "raw_average_value_size": 2233, "num_data_blocks": 1051, "num_entries": 6499, "num_filter_entries": 6499, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759916581, "oldest_key_time": 0, "file_creation_time": 1759919084, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fe81d9b-468a-4413-adf1-4e4bd83383d4", "db_session_id": "KN4HYS7VUCE6V85JIQOU", "orig_file_number": 74, "seqno_to_time_mapping": "N/A"}}
Oct  8 06:24:44 np0005475493 ceph-mon[73572]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  8 06:24:44 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:24:44.330818) [db/compaction/compaction_job.cc:1663] [default] [JOB 40] Compacted 1@0 + 1@6 files to L6 => 14677691 bytes
Oct  8 06:24:44 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:24:44.331872) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 244.2 rd, 213.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.2, 15.8 +0.0 blob) out(14.0 +0.0 blob), read-write-amplify(145.7) write-amplify(68.0) OK, records in: 7009, records dropped: 510 output_compression: NoCompression
Oct  8 06:24:44 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:24:44.331887) EVENT_LOG_v1 {"time_micros": 1759919084331880, "job": 40, "event": "compaction_finished", "compaction_time_micros": 68697, "compaction_time_cpu_micros": 29894, "output_level": 6, "num_output_files": 1, "total_output_size": 14677691, "num_input_records": 7009, "num_output_records": 6499, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  8 06:24:44 np0005475493 ceph-mon[73572]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000073.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  8 06:24:44 np0005475493 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759919084332017, "job": 40, "event": "table_file_deletion", "file_number": 73}
Oct  8 06:24:44 np0005475493 ceph-mon[73572]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000071.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  8 06:24:44 np0005475493 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759919084334965, "job": 40, "event": "table_file_deletion", "file_number": 71}
Oct  8 06:24:44 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:24:44.261858) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  8 06:24:44 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:24:44.335073) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  8 06:24:44 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:24:44.335078) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  8 06:24:44 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:24:44.335079) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  8 06:24:44 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:24:44.335082) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  8 06:24:44 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:24:44.335083) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  8 06:24:44 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:24:44 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:24:44 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:24:44 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:24:44.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:24:45 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:24:45 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:24:45 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:24:45.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:24:45 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:24:45] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Oct  8 06:24:45 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:24:45] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Oct  8 06:24:45 np0005475493 nova_compute[262220]: 2025-10-08 10:24:45.998 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:24:46 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1208: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  8 06:24:46 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:24:46 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:24:46 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:24:46.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:24:47 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:24:47.229Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:24:47 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:24:47 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:24:47 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:24:47.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:24:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] Optimize plan auto_2025-10-08_10:24:47
Oct  8 06:24:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  8 06:24:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] do_upmap
Oct  8 06:24:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] pools ['.nfs', 'default.rgw.meta', '.rgw.root', 'volumes', 'cephfs.cephfs.meta', 'backups', 'images', 'default.rgw.control', '.mgr', 'vms', 'default.rgw.log', 'cephfs.cephfs.data']
Oct  8 06:24:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] prepared 0/10 upmap changes
Oct  8 06:24:47 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  8 06:24:47 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  8 06:24:47 np0005475493 nova_compute[262220]: 2025-10-08 10:24:47.887 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:24:47 np0005475493 nova_compute[262220]: 2025-10-08 10:24:47.887 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Oct  8 06:24:47 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 06:24:47 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 06:24:48 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1209: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  8 06:24:48 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 06:24:48 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 06:24:48 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 06:24:48 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 06:24:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] _maybe_adjust
Oct  8 06:24:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:24:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  8 06:24:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:24:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  8 06:24:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:24:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 06:24:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:24:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 06:24:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:24:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Oct  8 06:24:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:24:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  8 06:24:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:24:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 06:24:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:24:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct  8 06:24:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:24:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  8 06:24:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:24:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 06:24:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:24:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  8 06:24:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:24:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  8 06:24:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  8 06:24:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  8 06:24:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  8 06:24:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  8 06:24:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  8 06:24:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  8 06:24:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  8 06:24:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  8 06:24:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  8 06:24:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  8 06:24:48 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:24:48 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:24:48 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:24:48.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:24:48 np0005475493 nova_compute[262220]: 2025-10-08 10:24:48.768 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:24:48 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:24:48.875Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:24:48 np0005475493 podman[295185]: 2025-10-08 10:24:48.925386449 +0000 UTC m=+0.079531418 container health_status 2ac58bf9d2c7421f24478941b0f23af12cc30298ac84467db16bf52ecb1157c3 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=iscsid, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct  8 06:24:49 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:24:48 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  8 06:24:49 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:24:48 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  8 06:24:49 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:24:48 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 06:24:49 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:24:49 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  8 06:24:49 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:24:49 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:24:49 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:24:49.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:24:49 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:24:49 np0005475493 nova_compute[262220]: 2025-10-08 10:24:49.903 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:24:50 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1210: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  8 06:24:50 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:24:50 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:24:50 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:24:50.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:24:50 np0005475493 nova_compute[262220]: 2025-10-08 10:24:50.887 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:24:51 np0005475493 nova_compute[262220]: 2025-10-08 10:24:50.999 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:24:51 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:24:51 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:24:51 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:24:51.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:24:51 np0005475493 nova_compute[262220]: 2025-10-08 10:24:51.886 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:24:52 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1211: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  8 06:24:52 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:24:52 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 06:24:52 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:24:52.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 06:24:53 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:24:53 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:24:53 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:24:53.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:24:53 np0005475493 nova_compute[262220]: 2025-10-08 10:24:53.769 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:24:53 np0005475493 nova_compute[262220]: 2025-10-08 10:24:53.887 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:24:53 np0005475493 nova_compute[262220]: 2025-10-08 10:24:53.887 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  8 06:24:53 np0005475493 nova_compute[262220]: 2025-10-08 10:24:53.887 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  8 06:24:53 np0005475493 nova_compute[262220]: 2025-10-08 10:24:53.901 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  8 06:24:54 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:24:53 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  8 06:24:54 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:24:53 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  8 06:24:54 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:24:53 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 06:24:54 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:24:54 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  8 06:24:54 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1212: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  8 06:24:54 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:24:54 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:24:54 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:24:54 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:24:54.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:24:54 np0005475493 nova_compute[262220]: 2025-10-08 10:24:54.886 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:24:54 np0005475493 nova_compute[262220]: 2025-10-08 10:24:54.887 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:24:54 np0005475493 nova_compute[262220]: 2025-10-08 10:24:54.887 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:24:54 np0005475493 nova_compute[262220]: 2025-10-08 10:24:54.932 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  8 06:24:54 np0005475493 nova_compute[262220]: 2025-10-08 10:24:54.932 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  8 06:24:54 np0005475493 nova_compute[262220]: 2025-10-08 10:24:54.932 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  8 06:24:54 np0005475493 nova_compute[262220]: 2025-10-08 10:24:54.933 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  8 06:24:54 np0005475493 nova_compute[262220]: 2025-10-08 10:24:54.933 2 DEBUG oslo_concurrency.processutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  8 06:24:55 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:24:55 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:24:55 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:24:55.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:24:55 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct  8 06:24:55 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1978300500' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  8 06:24:55 np0005475493 nova_compute[262220]: 2025-10-08 10:24:55.422 2 DEBUG oslo_concurrency.processutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  8 06:24:55 np0005475493 nova_compute[262220]: 2025-10-08 10:24:55.588 2 WARNING nova.virt.libvirt.driver [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  8 06:24:55 np0005475493 nova_compute[262220]: 2025-10-08 10:24:55.590 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4503MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  8 06:24:55 np0005475493 nova_compute[262220]: 2025-10-08 10:24:55.590 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  8 06:24:55 np0005475493 nova_compute[262220]: 2025-10-08 10:24:55.590 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  8 06:24:55 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:24:55] "GET /metrics HTTP/1.1" 200 48453 "" "Prometheus/2.51.0"
Oct  8 06:24:55 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:24:55] "GET /metrics HTTP/1.1" 200 48453 "" "Prometheus/2.51.0"
Oct  8 06:24:55 np0005475493 nova_compute[262220]: 2025-10-08 10:24:55.873 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  8 06:24:55 np0005475493 nova_compute[262220]: 2025-10-08 10:24:55.874 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  8 06:24:56 np0005475493 nova_compute[262220]: 2025-10-08 10:24:56.000 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:24:56 np0005475493 nova_compute[262220]: 2025-10-08 10:24:56.046 2 DEBUG nova.scheduler.client.report [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Refreshing inventories for resource provider 62e4b021-d3ae-43f9-883d-805e2c7d21a2 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Oct  8 06:24:56 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1213: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  8 06:24:56 np0005475493 nova_compute[262220]: 2025-10-08 10:24:56.167 2 DEBUG nova.scheduler.client.report [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Updating ProviderTree inventory for provider 62e4b021-d3ae-43f9-883d-805e2c7d21a2 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Oct  8 06:24:56 np0005475493 nova_compute[262220]: 2025-10-08 10:24:56.167 2 DEBUG nova.compute.provider_tree [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Updating inventory in ProviderTree for provider 62e4b021-d3ae-43f9-883d-805e2c7d21a2 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct  8 06:24:56 np0005475493 nova_compute[262220]: 2025-10-08 10:24:56.201 2 DEBUG nova.scheduler.client.report [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Refreshing aggregate associations for resource provider 62e4b021-d3ae-43f9-883d-805e2c7d21a2, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Oct  8 06:24:56 np0005475493 nova_compute[262220]: 2025-10-08 10:24:56.231 2 DEBUG nova.scheduler.client.report [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Refreshing trait associations for resource provider 62e4b021-d3ae-43f9-883d-805e2c7d21a2, traits: HW_CPU_X86_CLMUL,HW_CPU_X86_AVX,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_FMA3,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_FDC,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSE4A,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_ACCELERATORS,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_F16C,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SSE41,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_BMI2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_AMD_SVM,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_MMX,HW_CPU_X86_AESNI,COMPUTE_STORAGE_BUS_SATA,COMPUTE_RESCUE_BFV,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_AVX2,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_ABM,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_SHA,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_SSSE3,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SVM,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_BMI,HW_CPU_X86_SSE2 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Oct  8 06:24:56 np0005475493 nova_compute[262220]: 2025-10-08 10:24:56.258 2 DEBUG oslo_concurrency.processutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  8 06:24:56 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:24:56 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:24:56 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:24:56.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:24:56 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct  8 06:24:56 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/323080184' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  8 06:24:56 np0005475493 nova_compute[262220]: 2025-10-08 10:24:56.701 2 DEBUG oslo_concurrency.processutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  8 06:24:56 np0005475493 nova_compute[262220]: 2025-10-08 10:24:56.707 2 DEBUG nova.compute.provider_tree [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Inventory has not changed in ProviderTree for provider: 62e4b021-d3ae-43f9-883d-805e2c7d21a2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  8 06:24:56 np0005475493 nova_compute[262220]: 2025-10-08 10:24:56.738 2 DEBUG nova.scheduler.client.report [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Inventory has not changed for provider 62e4b021-d3ae-43f9-883d-805e2c7d21a2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  8 06:24:56 np0005475493 nova_compute[262220]: 2025-10-08 10:24:56.740 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  8 06:24:56 np0005475493 nova_compute[262220]: 2025-10-08 10:24:56.740 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.150s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  8 06:24:57 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:24:57.230Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct  8 06:24:57 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:24:57.230Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct  8 06:24:57 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:24:57.230Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:24:57 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:24:57 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:24:57 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:24:57.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:24:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:24:57.423 163175 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  8 06:24:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:24:57.424 163175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  8 06:24:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:24:57.424 163175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  8 06:24:58 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1214: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  8 06:24:58 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:24:58 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:24:58 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:24:58.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:24:58 np0005475493 nova_compute[262220]: 2025-10-08 10:24:58.741 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:24:58 np0005475493 nova_compute[262220]: 2025-10-08 10:24:58.741 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:24:58 np0005475493 nova_compute[262220]: 2025-10-08 10:24:58.741 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  8 06:24:58 np0005475493 nova_compute[262220]: 2025-10-08 10:24:58.771 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:24:58 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:24:58.876Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:24:58 np0005475493 podman[295286]: 2025-10-08 10:24:58.906858259 +0000 UTC m=+0.073791941 container health_status 750e81e9592502f2fa35d047c8ae9bd75eb38ed415148db558454f5281397e7f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct  8 06:24:59 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:24:59 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  8 06:24:59 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:24:59 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  8 06:24:59 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:24:59 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 06:24:59 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:24:59 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  8 06:24:59 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:24:59 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:24:59 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:24:59.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:24:59 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:25:00 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1215: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  8 06:25:00 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:25:00 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:25:00 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:25:00.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:25:01 np0005475493 nova_compute[262220]: 2025-10-08 10:25:01.048 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:25:01 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:25:01 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:25:01 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:25:01.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:25:02 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1216: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  8 06:25:02 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:25:02 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:25:02 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:25:02.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:25:02 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  8 06:25:02 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  8 06:25:02 np0005475493 nova_compute[262220]: 2025-10-08 10:25:02.886 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:25:02 np0005475493 podman[295317]: 2025-10-08 10:25:02.901788261 +0000 UTC m=+0.056675977 container health_status 96c17e36c3e57b043a764fc4d64e20610b8683e4881cc1857643eca7aeb37784 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  8 06:25:02 np0005475493 podman[295316]: 2025-10-08 10:25:02.927915667 +0000 UTC m=+0.086356378 container health_status 1ece418ccdf19a8019e8be8e665c938dc3c333817a211973536e2416f095c311 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_managed=true, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001, config_id=multipathd)
Oct  8 06:25:03 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:25:03 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:25:03 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:25:03.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:25:03 np0005475493 nova_compute[262220]: 2025-10-08 10:25:03.773 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:25:03 np0005475493 nova_compute[262220]: 2025-10-08 10:25:03.886 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:25:03 np0005475493 nova_compute[262220]: 2025-10-08 10:25:03.886 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Oct  8 06:25:03 np0005475493 nova_compute[262220]: 2025-10-08 10:25:03.911 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Oct  8 06:25:03 np0005475493 nova_compute[262220]: 2025-10-08 10:25:03.911 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:25:04 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:25:03 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  8 06:25:04 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:25:03 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  8 06:25:04 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:25:03 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 06:25:04 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:25:04 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  8 06:25:04 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1217: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  8 06:25:04 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:25:04 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:25:04 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:25:04 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:25:04.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:25:05 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:25:05 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:25:05 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:25:05.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:25:05 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:25:05] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Oct  8 06:25:05 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:25:05] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Oct  8 06:25:06 np0005475493 nova_compute[262220]: 2025-10-08 10:25:06.053 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:25:06 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1218: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  8 06:25:06 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:25:06 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:25:06 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:25:06.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:25:06 np0005475493 nova_compute[262220]: 2025-10-08 10:25:06.963 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:25:07 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:25:07.231Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:25:07 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:25:07 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:25:07 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:25:07.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:25:08 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1219: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  8 06:25:08 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:25:08 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:25:08 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:25:08.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:25:08 np0005475493 nova_compute[262220]: 2025-10-08 10:25:08.775 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:25:08 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:25:08.877Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct  8 06:25:08 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:25:08.877Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:25:09 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:25:08 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  8 06:25:09 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:25:08 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  8 06:25:09 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:25:08 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 06:25:09 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:25:09 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  8 06:25:09 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:25:09 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:25:09 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:25:09.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:25:09 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:25:10 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1220: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  8 06:25:10 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:25:10 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:25:10 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:25:10.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:25:11 np0005475493 nova_compute[262220]: 2025-10-08 10:25:11.057 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:25:11 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:25:11 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:25:11 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:25:11.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:25:12 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1221: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  8 06:25:12 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:25:12 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:25:12 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:25:12.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:25:13 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:25:13 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:25:13 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:25:13.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:25:13 np0005475493 nova_compute[262220]: 2025-10-08 10:25:13.778 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:25:14 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:25:13 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  8 06:25:14 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:25:13 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  8 06:25:14 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:25:13 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 06:25:14 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:25:14 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  8 06:25:14 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1222: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  8 06:25:14 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:25:14 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:25:14 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:25:14 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:25:14.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:25:15 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:25:15 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:25:15 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:25:15.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:25:15 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:25:15] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Oct  8 06:25:15 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:25:15] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Oct  8 06:25:16 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1223: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  8 06:25:16 np0005475493 nova_compute[262220]: 2025-10-08 10:25:16.108 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:25:16 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:25:16 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:25:16 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:25:16.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:25:17 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:25:17.234Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:25:17 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:25:17 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 06:25:17 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:25:17.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 06:25:17 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  8 06:25:17 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  8 06:25:17 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 06:25:17 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 06:25:18 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1224: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  8 06:25:18 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 06:25:18 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 06:25:18 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 06:25:18 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 06:25:18 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:25:18 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:25:18 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:25:18.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:25:18 np0005475493 nova_compute[262220]: 2025-10-08 10:25:18.779 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:25:18 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:25:18.878Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct  8 06:25:18 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:25:18.878Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct  8 06:25:18 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:25:18.878Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct  8 06:25:19 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:25:18 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  8 06:25:19 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:25:18 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  8 06:25:19 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:25:18 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 06:25:19 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:25:19 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  8 06:25:19 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:25:19 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:25:19 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:25:19.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:25:19 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:25:19 np0005475493 podman[295406]: 2025-10-08 10:25:19.899766139 +0000 UTC m=+0.058899379 container health_status 2ac58bf9d2c7421f24478941b0f23af12cc30298ac84467db16bf52ecb1157c3 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=iscsid, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  8 06:25:20 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1225: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  8 06:25:20 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:25:20 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:25:20 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:25:20.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:25:21 np0005475493 nova_compute[262220]: 2025-10-08 10:25:21.110 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:25:21 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:25:21 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:25:21 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:25:21.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:25:22 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1226: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  8 06:25:22 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:25:22 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:25:22 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:25:22.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:25:23 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:25:23 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:25:23 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:25:23.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:25:23 np0005475493 nova_compute[262220]: 2025-10-08 10:25:23.781 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:25:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:25:23 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  8 06:25:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:25:23 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  8 06:25:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:25:23 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 06:25:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:25:24 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  8 06:25:24 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1227: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  8 06:25:24 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:25:24 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:25:24 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:25:24 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:25:24.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:25:25 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:25:25 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:25:25 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:25:25.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:25:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:25:25] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
Oct  8 06:25:25 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:25:25] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
Oct  8 06:25:26 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1228: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  8 06:25:26 np0005475493 nova_compute[262220]: 2025-10-08 10:25:26.114 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:25:26 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:25:26 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:25:26 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:25:26.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:25:27 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:25:27.235Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:25:27 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:25:27 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:25:27 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:25:27.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:25:28 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1229: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  8 06:25:28 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:25:28 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:25:28 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:25:28.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:25:28 np0005475493 nova_compute[262220]: 2025-10-08 10:25:28.784 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:25:28 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:25:28.879Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct  8 06:25:28 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:25:28.879Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct  8 06:25:29 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:25:28 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  8 06:25:29 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:25:28 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  8 06:25:29 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:25:28 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 06:25:29 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:25:29 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  8 06:25:29 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:25:29 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:25:29 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:25:29.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:25:29 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:25:29 np0005475493 podman[295445]: 2025-10-08 10:25:29.977738977 +0000 UTC m=+0.123926366 container health_status 750e81e9592502f2fa35d047c8ae9bd75eb38ed415148db558454f5281397e7f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  8 06:25:30 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1230: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  8 06:25:30 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:25:30 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:25:30 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:25:30.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:25:31 np0005475493 nova_compute[262220]: 2025-10-08 10:25:31.167 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:25:31 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct  8 06:25:31 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:25:31 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct  8 06:25:31 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:25:31 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:25:31 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:25:31 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:25:31.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:25:31 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:25:31 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:25:32 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1231: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  8 06:25:32 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  8 06:25:32 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  8 06:25:32 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct  8 06:25:32 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  8 06:25:32 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1232: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct  8 06:25:32 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct  8 06:25:32 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:25:32 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct  8 06:25:32 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:25:32 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct  8 06:25:32 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  8 06:25:32 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct  8 06:25:32 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  8 06:25:32 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  8 06:25:32 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  8 06:25:32 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  8 06:25:32 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:25:32 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:25:32 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  8 06:25:32 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:25:32 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:25:32 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:25:32.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:25:32 np0005475493 podman[295672]: 2025-10-08 10:25:32.729987229 +0000 UTC m=+0.037428544 container create a40dc6d11bab4810312a4d915ed6251e3c1a56c26b7ea262b4152cea04ecee1b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_bhaskara, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct  8 06:25:32 np0005475493 systemd[1]: Started libpod-conmon-a40dc6d11bab4810312a4d915ed6251e3c1a56c26b7ea262b4152cea04ecee1b.scope.
Oct  8 06:25:32 np0005475493 systemd[1]: Started libcrun container.
Oct  8 06:25:32 np0005475493 podman[295672]: 2025-10-08 10:25:32.712619316 +0000 UTC m=+0.020060641 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 06:25:32 np0005475493 podman[295672]: 2025-10-08 10:25:32.812341476 +0000 UTC m=+0.119782801 container init a40dc6d11bab4810312a4d915ed6251e3c1a56c26b7ea262b4152cea04ecee1b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_bhaskara, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  8 06:25:32 np0005475493 podman[295672]: 2025-10-08 10:25:32.819080685 +0000 UTC m=+0.126521990 container start a40dc6d11bab4810312a4d915ed6251e3c1a56c26b7ea262b4152cea04ecee1b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_bhaskara, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct  8 06:25:32 np0005475493 podman[295672]: 2025-10-08 10:25:32.822935821 +0000 UTC m=+0.130377176 container attach a40dc6d11bab4810312a4d915ed6251e3c1a56c26b7ea262b4152cea04ecee1b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_bhaskara, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Oct  8 06:25:32 np0005475493 gracious_bhaskara[295688]: 167 167
Oct  8 06:25:32 np0005475493 systemd[1]: libpod-a40dc6d11bab4810312a4d915ed6251e3c1a56c26b7ea262b4152cea04ecee1b.scope: Deactivated successfully.
Oct  8 06:25:32 np0005475493 conmon[295688]: conmon a40dc6d11bab4810312a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a40dc6d11bab4810312a4d915ed6251e3c1a56c26b7ea262b4152cea04ecee1b.scope/container/memory.events
Oct  8 06:25:32 np0005475493 podman[295693]: 2025-10-08 10:25:32.870232423 +0000 UTC m=+0.027895765 container died a40dc6d11bab4810312a4d915ed6251e3c1a56c26b7ea262b4152cea04ecee1b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_bhaskara, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  8 06:25:32 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  8 06:25:32 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  8 06:25:32 np0005475493 systemd[1]: var-lib-containers-storage-overlay-bb73375c8d9b7459a967bd5dba04125ac1726fb3a93ecafd8edea0ac1be1fd87-merged.mount: Deactivated successfully.
Oct  8 06:25:32 np0005475493 podman[295693]: 2025-10-08 10:25:32.927224148 +0000 UTC m=+0.084887450 container remove a40dc6d11bab4810312a4d915ed6251e3c1a56c26b7ea262b4152cea04ecee1b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_bhaskara, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  8 06:25:32 np0005475493 systemd[1]: libpod-conmon-a40dc6d11bab4810312a4d915ed6251e3c1a56c26b7ea262b4152cea04ecee1b.scope: Deactivated successfully.
Oct  8 06:25:33 np0005475493 podman[295708]: 2025-10-08 10:25:33.035821186 +0000 UTC m=+0.068918053 container health_status 1ece418ccdf19a8019e8be8e665c938dc3c333817a211973536e2416f095c311 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  8 06:25:33 np0005475493 podman[295711]: 2025-10-08 10:25:33.057584072 +0000 UTC m=+0.079071173 container health_status 96c17e36c3e57b043a764fc4d64e20610b8683e4881cc1857643eca7aeb37784 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible)
Oct  8 06:25:33 np0005475493 podman[295753]: 2025-10-08 10:25:33.105830074 +0000 UTC m=+0.039609283 container create 2790fea115d8fac12d5d57e4c47f33186ad09ba7721b1ff05d890b127f0435f4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_cori, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  8 06:25:33 np0005475493 systemd[1]: Started libpod-conmon-2790fea115d8fac12d5d57e4c47f33186ad09ba7721b1ff05d890b127f0435f4.scope.
Oct  8 06:25:33 np0005475493 systemd[1]: Started libcrun container.
Oct  8 06:25:33 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c8ecaf8b4357673882ac52cd5eb58288e4f9251708a0e9367f27503bd31fb82/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  8 06:25:33 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c8ecaf8b4357673882ac52cd5eb58288e4f9251708a0e9367f27503bd31fb82/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 06:25:33 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c8ecaf8b4357673882ac52cd5eb58288e4f9251708a0e9367f27503bd31fb82/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 06:25:33 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c8ecaf8b4357673882ac52cd5eb58288e4f9251708a0e9367f27503bd31fb82/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  8 06:25:33 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c8ecaf8b4357673882ac52cd5eb58288e4f9251708a0e9367f27503bd31fb82/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  8 06:25:33 np0005475493 podman[295753]: 2025-10-08 10:25:33.089207966 +0000 UTC m=+0.022987185 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 06:25:33 np0005475493 podman[295753]: 2025-10-08 10:25:33.216579703 +0000 UTC m=+0.150358942 container init 2790fea115d8fac12d5d57e4c47f33186ad09ba7721b1ff05d890b127f0435f4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_cori, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Oct  8 06:25:33 np0005475493 podman[295753]: 2025-10-08 10:25:33.225372338 +0000 UTC m=+0.159151527 container start 2790fea115d8fac12d5d57e4c47f33186ad09ba7721b1ff05d890b127f0435f4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_cori, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Oct  8 06:25:33 np0005475493 podman[295753]: 2025-10-08 10:25:33.228798659 +0000 UTC m=+0.162577888 container attach 2790fea115d8fac12d5d57e4c47f33186ad09ba7721b1ff05d890b127f0435f4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_cori, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Oct  8 06:25:33 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:25:33 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:25:33 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:25:33.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:25:33 np0005475493 quizzical_cori[295769]: --> passed data devices: 0 physical, 1 LVM
Oct  8 06:25:33 np0005475493 quizzical_cori[295769]: --> All data devices are unavailable
Oct  8 06:25:33 np0005475493 systemd[1]: libpod-2790fea115d8fac12d5d57e4c47f33186ad09ba7721b1ff05d890b127f0435f4.scope: Deactivated successfully.
Oct  8 06:25:33 np0005475493 podman[295753]: 2025-10-08 10:25:33.576650357 +0000 UTC m=+0.510429546 container died 2790fea115d8fac12d5d57e4c47f33186ad09ba7721b1ff05d890b127f0435f4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_cori, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct  8 06:25:33 np0005475493 systemd[1]: var-lib-containers-storage-overlay-2c8ecaf8b4357673882ac52cd5eb58288e4f9251708a0e9367f27503bd31fb82-merged.mount: Deactivated successfully.
Oct  8 06:25:33 np0005475493 podman[295753]: 2025-10-08 10:25:33.620161567 +0000 UTC m=+0.553940756 container remove 2790fea115d8fac12d5d57e4c47f33186ad09ba7721b1ff05d890b127f0435f4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_cori, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  8 06:25:33 np0005475493 systemd[1]: libpod-conmon-2790fea115d8fac12d5d57e4c47f33186ad09ba7721b1ff05d890b127f0435f4.scope: Deactivated successfully.
Oct  8 06:25:33 np0005475493 nova_compute[262220]: 2025-10-08 10:25:33.786 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:25:34 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:25:33 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  8 06:25:34 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:25:34 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  8 06:25:34 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:25:34 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 06:25:34 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:25:34 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  8 06:25:34 np0005475493 podman[295891]: 2025-10-08 10:25:34.16875543 +0000 UTC m=+0.039207262 container create b47b901dda42e314b9312652062a2a964b598aaf98a06798dd3abd589d85542a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_aryabhata, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  8 06:25:34 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1233: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct  8 06:25:34 np0005475493 systemd[1]: Started libpod-conmon-b47b901dda42e314b9312652062a2a964b598aaf98a06798dd3abd589d85542a.scope.
Oct  8 06:25:34 np0005475493 systemd[1]: Started libcrun container.
Oct  8 06:25:34 np0005475493 podman[295891]: 2025-10-08 10:25:34.152782722 +0000 UTC m=+0.023234554 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 06:25:34 np0005475493 podman[295891]: 2025-10-08 10:25:34.255923274 +0000 UTC m=+0.126375096 container init b47b901dda42e314b9312652062a2a964b598aaf98a06798dd3abd589d85542a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_aryabhata, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Oct  8 06:25:34 np0005475493 podman[295891]: 2025-10-08 10:25:34.262278479 +0000 UTC m=+0.132730291 container start b47b901dda42e314b9312652062a2a964b598aaf98a06798dd3abd589d85542a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_aryabhata, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid)
Oct  8 06:25:34 np0005475493 podman[295891]: 2025-10-08 10:25:34.265382609 +0000 UTC m=+0.135834431 container attach b47b901dda42e314b9312652062a2a964b598aaf98a06798dd3abd589d85542a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_aryabhata, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  8 06:25:34 np0005475493 beautiful_aryabhata[295907]: 167 167
Oct  8 06:25:34 np0005475493 systemd[1]: libpod-b47b901dda42e314b9312652062a2a964b598aaf98a06798dd3abd589d85542a.scope: Deactivated successfully.
Oct  8 06:25:34 np0005475493 conmon[295907]: conmon b47b901dda42e314b931 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b47b901dda42e314b9312652062a2a964b598aaf98a06798dd3abd589d85542a.scope/container/memory.events
Oct  8 06:25:34 np0005475493 podman[295891]: 2025-10-08 10:25:34.269879036 +0000 UTC m=+0.140330868 container died b47b901dda42e314b9312652062a2a964b598aaf98a06798dd3abd589d85542a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_aryabhata, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid)
Oct  8 06:25:34 np0005475493 systemd[1]: var-lib-containers-storage-overlay-0ab98804145baa969a471bdd1396992643a996f3893790f8d4aa6e03dcd9bc65-merged.mount: Deactivated successfully.
Oct  8 06:25:34 np0005475493 podman[295891]: 2025-10-08 10:25:34.304246488 +0000 UTC m=+0.174698310 container remove b47b901dda42e314b9312652062a2a964b598aaf98a06798dd3abd589d85542a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_aryabhata, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Oct  8 06:25:34 np0005475493 systemd[1]: libpod-conmon-b47b901dda42e314b9312652062a2a964b598aaf98a06798dd3abd589d85542a.scope: Deactivated successfully.
Oct  8 06:25:34 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:25:34 np0005475493 podman[295933]: 2025-10-08 10:25:34.459653424 +0000 UTC m=+0.041100133 container create d953a6036489ec518c191e36877007ec41cd7f1f5a870f95b5ce50937b0e9698 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_gauss, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  8 06:25:34 np0005475493 systemd[1]: Started libpod-conmon-d953a6036489ec518c191e36877007ec41cd7f1f5a870f95b5ce50937b0e9698.scope.
Oct  8 06:25:34 np0005475493 systemd[1]: Started libcrun container.
Oct  8 06:25:34 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6a4d41b813c9db7d29d03adf2b7d7d62e10ea118ea5450481db1b6188a0ae16/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  8 06:25:34 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6a4d41b813c9db7d29d03adf2b7d7d62e10ea118ea5450481db1b6188a0ae16/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 06:25:34 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6a4d41b813c9db7d29d03adf2b7d7d62e10ea118ea5450481db1b6188a0ae16/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 06:25:34 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6a4d41b813c9db7d29d03adf2b7d7d62e10ea118ea5450481db1b6188a0ae16/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  8 06:25:34 np0005475493 podman[295933]: 2025-10-08 10:25:34.441188045 +0000 UTC m=+0.022634774 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 06:25:34 np0005475493 podman[295933]: 2025-10-08 10:25:34.540467131 +0000 UTC m=+0.121913860 container init d953a6036489ec518c191e36877007ec41cd7f1f5a870f95b5ce50937b0e9698 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_gauss, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  8 06:25:34 np0005475493 podman[295933]: 2025-10-08 10:25:34.547045085 +0000 UTC m=+0.128491794 container start d953a6036489ec518c191e36877007ec41cd7f1f5a870f95b5ce50937b0e9698 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_gauss, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True)
Oct  8 06:25:34 np0005475493 podman[295933]: 2025-10-08 10:25:34.549957219 +0000 UTC m=+0.131403958 container attach d953a6036489ec518c191e36877007ec41cd7f1f5a870f95b5ce50937b0e9698 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_gauss, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct  8 06:25:34 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:25:34 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:25:34 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:25:34.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:25:34 np0005475493 amazing_gauss[295949]: {
Oct  8 06:25:34 np0005475493 amazing_gauss[295949]:    "1": [
Oct  8 06:25:34 np0005475493 amazing_gauss[295949]:        {
Oct  8 06:25:34 np0005475493 amazing_gauss[295949]:            "devices": [
Oct  8 06:25:34 np0005475493 amazing_gauss[295949]:                "/dev/loop3"
Oct  8 06:25:34 np0005475493 amazing_gauss[295949]:            ],
Oct  8 06:25:34 np0005475493 amazing_gauss[295949]:            "lv_name": "ceph_lv0",
Oct  8 06:25:34 np0005475493 amazing_gauss[295949]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  8 06:25:34 np0005475493 amazing_gauss[295949]:            "lv_size": "21470642176",
Oct  8 06:25:34 np0005475493 amazing_gauss[295949]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=787292cc-8154-50c4-9e00-e9be3e817149,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=85fe3e7b-5e0f-4a19-934c-310215b2e933,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct  8 06:25:34 np0005475493 amazing_gauss[295949]:            "lv_uuid": "znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj",
Oct  8 06:25:34 np0005475493 amazing_gauss[295949]:            "name": "ceph_lv0",
Oct  8 06:25:34 np0005475493 amazing_gauss[295949]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  8 06:25:34 np0005475493 amazing_gauss[295949]:            "tags": {
Oct  8 06:25:34 np0005475493 amazing_gauss[295949]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  8 06:25:34 np0005475493 amazing_gauss[295949]:                "ceph.block_uuid": "znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj",
Oct  8 06:25:34 np0005475493 amazing_gauss[295949]:                "ceph.cephx_lockbox_secret": "",
Oct  8 06:25:34 np0005475493 amazing_gauss[295949]:                "ceph.cluster_fsid": "787292cc-8154-50c4-9e00-e9be3e817149",
Oct  8 06:25:34 np0005475493 amazing_gauss[295949]:                "ceph.cluster_name": "ceph",
Oct  8 06:25:34 np0005475493 amazing_gauss[295949]:                "ceph.crush_device_class": "",
Oct  8 06:25:34 np0005475493 amazing_gauss[295949]:                "ceph.encrypted": "0",
Oct  8 06:25:34 np0005475493 amazing_gauss[295949]:                "ceph.osd_fsid": "85fe3e7b-5e0f-4a19-934c-310215b2e933",
Oct  8 06:25:34 np0005475493 amazing_gauss[295949]:                "ceph.osd_id": "1",
Oct  8 06:25:34 np0005475493 amazing_gauss[295949]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  8 06:25:34 np0005475493 amazing_gauss[295949]:                "ceph.type": "block",
Oct  8 06:25:34 np0005475493 amazing_gauss[295949]:                "ceph.vdo": "0",
Oct  8 06:25:34 np0005475493 amazing_gauss[295949]:                "ceph.with_tpm": "0"
Oct  8 06:25:34 np0005475493 amazing_gauss[295949]:            },
Oct  8 06:25:34 np0005475493 amazing_gauss[295949]:            "type": "block",
Oct  8 06:25:34 np0005475493 amazing_gauss[295949]:            "vg_name": "ceph_vg0"
Oct  8 06:25:34 np0005475493 amazing_gauss[295949]:        }
Oct  8 06:25:34 np0005475493 amazing_gauss[295949]:    ]
Oct  8 06:25:34 np0005475493 amazing_gauss[295949]: }
Oct  8 06:25:34 np0005475493 systemd[1]: libpod-d953a6036489ec518c191e36877007ec41cd7f1f5a870f95b5ce50937b0e9698.scope: Deactivated successfully.
Oct  8 06:25:34 np0005475493 podman[295933]: 2025-10-08 10:25:34.831086126 +0000 UTC m=+0.412532855 container died d953a6036489ec518c191e36877007ec41cd7f1f5a870f95b5ce50937b0e9698 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_gauss, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  8 06:25:34 np0005475493 systemd[1]: var-lib-containers-storage-overlay-c6a4d41b813c9db7d29d03adf2b7d7d62e10ea118ea5450481db1b6188a0ae16-merged.mount: Deactivated successfully.
Oct  8 06:25:34 np0005475493 podman[295933]: 2025-10-08 10:25:34.872855939 +0000 UTC m=+0.454302658 container remove d953a6036489ec518c191e36877007ec41cd7f1f5a870f95b5ce50937b0e9698 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_gauss, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Oct  8 06:25:34 np0005475493 systemd[1]: libpod-conmon-d953a6036489ec518c191e36877007ec41cd7f1f5a870f95b5ce50937b0e9698.scope: Deactivated successfully.
Oct  8 06:25:35 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:25:35 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:25:35 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:25:35.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:25:35 np0005475493 podman[296059]: 2025-10-08 10:25:35.44702209 +0000 UTC m=+0.048146591 container create 74a78b5ab6b581e3251370dcc1389eef404d6d45bb7ec3bbbb2a15edab07ab99 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_faraday, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  8 06:25:35 np0005475493 systemd[1]: Started libpod-conmon-74a78b5ab6b581e3251370dcc1389eef404d6d45bb7ec3bbbb2a15edab07ab99.scope.
Oct  8 06:25:35 np0005475493 systemd[1]: Started libcrun container.
Oct  8 06:25:35 np0005475493 podman[296059]: 2025-10-08 10:25:35.429142962 +0000 UTC m=+0.030267513 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 06:25:35 np0005475493 podman[296059]: 2025-10-08 10:25:35.540006483 +0000 UTC m=+0.141131054 container init 74a78b5ab6b581e3251370dcc1389eef404d6d45bb7ec3bbbb2a15edab07ab99 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_faraday, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True)
Oct  8 06:25:35 np0005475493 podman[296059]: 2025-10-08 10:25:35.549888322 +0000 UTC m=+0.151012823 container start 74a78b5ab6b581e3251370dcc1389eef404d6d45bb7ec3bbbb2a15edab07ab99 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_faraday, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct  8 06:25:35 np0005475493 podman[296059]: 2025-10-08 10:25:35.553688096 +0000 UTC m=+0.154812697 container attach 74a78b5ab6b581e3251370dcc1389eef404d6d45bb7ec3bbbb2a15edab07ab99 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_faraday, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct  8 06:25:35 np0005475493 zealous_faraday[296076]: 167 167
Oct  8 06:25:35 np0005475493 systemd[1]: libpod-74a78b5ab6b581e3251370dcc1389eef404d6d45bb7ec3bbbb2a15edab07ab99.scope: Deactivated successfully.
Oct  8 06:25:35 np0005475493 podman[296059]: 2025-10-08 10:25:35.556500027 +0000 UTC m=+0.157624568 container died 74a78b5ab6b581e3251370dcc1389eef404d6d45bb7ec3bbbb2a15edab07ab99 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_faraday, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Oct  8 06:25:35 np0005475493 systemd[1]: var-lib-containers-storage-overlay-344aa05e1b860a940e3e690741e3079f610ce6e7b3c047f3ba2bd48bd345222b-merged.mount: Deactivated successfully.
Oct  8 06:25:35 np0005475493 podman[296059]: 2025-10-08 10:25:35.603798469 +0000 UTC m=+0.204922970 container remove 74a78b5ab6b581e3251370dcc1389eef404d6d45bb7ec3bbbb2a15edab07ab99 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_faraday, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  8 06:25:35 np0005475493 systemd[1]: libpod-conmon-74a78b5ab6b581e3251370dcc1389eef404d6d45bb7ec3bbbb2a15edab07ab99.scope: Deactivated successfully.
Oct  8 06:25:35 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:25:35] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Oct  8 06:25:35 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:25:35] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Oct  8 06:25:35 np0005475493 podman[296100]: 2025-10-08 10:25:35.762945875 +0000 UTC m=+0.041412823 container create 14e377409ab26dc4e52b9e99966d660160ec3d0172d2e760a1197724f6fc8b14 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_joliot, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Oct  8 06:25:35 np0005475493 systemd[1]: Started libpod-conmon-14e377409ab26dc4e52b9e99966d660160ec3d0172d2e760a1197724f6fc8b14.scope.
Oct  8 06:25:35 np0005475493 systemd[1]: Started libcrun container.
Oct  8 06:25:35 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b9e2d7dd995aab026d0cad74d26189fc50d8d0040aa052a4d3afbf56ee05255/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  8 06:25:35 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b9e2d7dd995aab026d0cad74d26189fc50d8d0040aa052a4d3afbf56ee05255/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 06:25:35 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b9e2d7dd995aab026d0cad74d26189fc50d8d0040aa052a4d3afbf56ee05255/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 06:25:35 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b9e2d7dd995aab026d0cad74d26189fc50d8d0040aa052a4d3afbf56ee05255/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  8 06:25:35 np0005475493 podman[296100]: 2025-10-08 10:25:35.834851675 +0000 UTC m=+0.113318623 container init 14e377409ab26dc4e52b9e99966d660160ec3d0172d2e760a1197724f6fc8b14 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_joliot, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  8 06:25:35 np0005475493 podman[296100]: 2025-10-08 10:25:35.747565857 +0000 UTC m=+0.026032795 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 06:25:35 np0005475493 podman[296100]: 2025-10-08 10:25:35.851132062 +0000 UTC m=+0.129599000 container start 14e377409ab26dc4e52b9e99966d660160ec3d0172d2e760a1197724f6fc8b14 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_joliot, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  8 06:25:35 np0005475493 podman[296100]: 2025-10-08 10:25:35.856159595 +0000 UTC m=+0.134626573 container attach 14e377409ab26dc4e52b9e99966d660160ec3d0172d2e760a1197724f6fc8b14 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_joliot, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Oct  8 06:25:36 np0005475493 nova_compute[262220]: 2025-10-08 10:25:36.169 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:25:36 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1234: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct  8 06:25:36 np0005475493 lvm[296192]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct  8 06:25:36 np0005475493 lvm[296192]: VG ceph_vg0 finished
Oct  8 06:25:36 np0005475493 gallant_joliot[296117]: {}
Oct  8 06:25:36 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:25:36 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:25:36 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:25:36.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:25:36 np0005475493 systemd[1]: libpod-14e377409ab26dc4e52b9e99966d660160ec3d0172d2e760a1197724f6fc8b14.scope: Deactivated successfully.
Oct  8 06:25:36 np0005475493 systemd[1]: libpod-14e377409ab26dc4e52b9e99966d660160ec3d0172d2e760a1197724f6fc8b14.scope: Consumed 1.228s CPU time.
Oct  8 06:25:36 np0005475493 podman[296100]: 2025-10-08 10:25:36.642453297 +0000 UTC m=+0.920920245 container died 14e377409ab26dc4e52b9e99966d660160ec3d0172d2e760a1197724f6fc8b14 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_joliot, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  8 06:25:36 np0005475493 systemd[1]: var-lib-containers-storage-overlay-2b9e2d7dd995aab026d0cad74d26189fc50d8d0040aa052a4d3afbf56ee05255-merged.mount: Deactivated successfully.
Oct  8 06:25:36 np0005475493 podman[296100]: 2025-10-08 10:25:36.682796575 +0000 UTC m=+0.961263503 container remove 14e377409ab26dc4e52b9e99966d660160ec3d0172d2e760a1197724f6fc8b14 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_joliot, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  8 06:25:36 np0005475493 systemd[1]: libpod-conmon-14e377409ab26dc4e52b9e99966d660160ec3d0172d2e760a1197724f6fc8b14.scope: Deactivated successfully.
Oct  8 06:25:36 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  8 06:25:36 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:25:36 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  8 06:25:36 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:25:37 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:25:37.236Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct  8 06:25:37 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:25:37.236Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct  8 06:25:37 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:25:37.238Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct  8 06:25:37 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:25:37 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:25:37 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:25:37.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:25:37 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:25:37 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:25:38 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1235: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct  8 06:25:38 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:25:38 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:25:38 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:25:38.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:25:38 np0005475493 nova_compute[262220]: 2025-10-08 10:25:38.788 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:25:38 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:25:38.881Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:25:39 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:25:38 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  8 06:25:39 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:25:38 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  8 06:25:39 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:25:38 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 06:25:39 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:25:39 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  8 06:25:39 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:25:39 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:25:39 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:25:39.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:25:39 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:25:40 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1236: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct  8 06:25:40 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:25:40 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 06:25:40 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:25:40.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 06:25:41 np0005475493 nova_compute[262220]: 2025-10-08 10:25:41.173 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:25:41 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:25:41 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:25:41 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:25:41.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:25:42 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1237: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct  8 06:25:42 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:25:42 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:25:42 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:25:42.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:25:43 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:25:43 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:25:43 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:25:43.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:25:43 np0005475493 nova_compute[262220]: 2025-10-08 10:25:43.790 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:25:44 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:25:43 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  8 06:25:44 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:25:43 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  8 06:25:44 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:25:43 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 06:25:44 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:25:44 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  8 06:25:44 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1238: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  8 06:25:44 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:25:44 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:25:44 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:25:44 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:25:44.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:25:45 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:25:45 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:25:45 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:25:45.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:25:45 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:25:45] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Oct  8 06:25:45 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:25:45] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Oct  8 06:25:46 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1239: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  8 06:25:46 np0005475493 nova_compute[262220]: 2025-10-08 10:25:46.212 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:25:46 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:25:46 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:25:46 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:25:46.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:25:47 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:25:47.239Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:25:47 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:25:47 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:25:47 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:25:47.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:25:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] Optimize plan auto_2025-10-08_10:25:47
Oct  8 06:25:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  8 06:25:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] do_upmap
Oct  8 06:25:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] pools ['cephfs.cephfs.data', 'volumes', 'images', 'backups', 'default.rgw.meta', '.mgr', 'vms', '.rgw.root', 'default.rgw.log', 'cephfs.cephfs.meta', 'default.rgw.control', '.nfs']
Oct  8 06:25:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] prepared 0/10 upmap changes
Oct  8 06:25:47 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  8 06:25:47 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  8 06:25:47 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 06:25:47 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 06:25:48 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 06:25:48 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 06:25:48 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 06:25:48 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 06:25:48 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1240: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  8 06:25:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] _maybe_adjust
Oct  8 06:25:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:25:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  8 06:25:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:25:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  8 06:25:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:25:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 06:25:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:25:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 06:25:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:25:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Oct  8 06:25:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:25:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  8 06:25:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:25:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 06:25:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:25:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct  8 06:25:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:25:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  8 06:25:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:25:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 06:25:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:25:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  8 06:25:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:25:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  8 06:25:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  8 06:25:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  8 06:25:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  8 06:25:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  8 06:25:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  8 06:25:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  8 06:25:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  8 06:25:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  8 06:25:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  8 06:25:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  8 06:25:48 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:25:48 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:25:48 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:25:48.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:25:48 np0005475493 nova_compute[262220]: 2025-10-08 10:25:48.792 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:25:48 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:25:48.884Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:25:49 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:25:48 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  8 06:25:49 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:25:48 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  8 06:25:49 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:25:48 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 06:25:49 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:25:49 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  8 06:25:49 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:25:49 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:25:49 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:25:49 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:25:49.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:25:50 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1241: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  8 06:25:50 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:25:50 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:25:50 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:25:50.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:25:50 np0005475493 podman[296272]: 2025-10-08 10:25:50.911910992 +0000 UTC m=+0.064603764 container health_status 2ac58bf9d2c7421f24478941b0f23af12cc30298ac84467db16bf52ecb1157c3 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct  8 06:25:51 np0005475493 nova_compute[262220]: 2025-10-08 10:25:51.249 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:25:51 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:25:51 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:25:51 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:25:51.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:25:51 np0005475493 nova_compute[262220]: 2025-10-08 10:25:51.913 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:25:52 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1242: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  8 06:25:52 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:25:52 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:25:52 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:25:52.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:25:52 np0005475493 nova_compute[262220]: 2025-10-08 10:25:52.886 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:25:53 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:25:53 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:25:53 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:25:53.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:25:53 np0005475493 nova_compute[262220]: 2025-10-08 10:25:53.794 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:25:53 np0005475493 nova_compute[262220]: 2025-10-08 10:25:53.886 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:25:53 np0005475493 nova_compute[262220]: 2025-10-08 10:25:53.886 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  8 06:25:53 np0005475493 nova_compute[262220]: 2025-10-08 10:25:53.887 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  8 06:25:54 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:25:53 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  8 06:25:54 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:25:53 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  8 06:25:54 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:25:53 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 06:25:54 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:25:54 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  8 06:25:54 np0005475493 nova_compute[262220]: 2025-10-08 10:25:54.103 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  8 06:25:54 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1243: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 9.5 KiB/s rd, 0 B/s wr, 14 op/s
Oct  8 06:25:54 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:25:54 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:25:54 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:25:54 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:25:54.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:25:54 np0005475493 nova_compute[262220]: 2025-10-08 10:25:54.886 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:25:54 np0005475493 nova_compute[262220]: 2025-10-08 10:25:54.953 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  8 06:25:54 np0005475493 nova_compute[262220]: 2025-10-08 10:25:54.954 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  8 06:25:54 np0005475493 nova_compute[262220]: 2025-10-08 10:25:54.954 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  8 06:25:54 np0005475493 nova_compute[262220]: 2025-10-08 10:25:54.954 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  8 06:25:54 np0005475493 nova_compute[262220]: 2025-10-08 10:25:54.954 2 DEBUG oslo_concurrency.processutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  8 06:25:55 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct  8 06:25:55 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/882277071' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  8 06:25:55 np0005475493 nova_compute[262220]: 2025-10-08 10:25:55.410 2 DEBUG oslo_concurrency.processutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  8 06:25:55 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:25:55 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:25:55 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:25:55.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:25:55 np0005475493 nova_compute[262220]: 2025-10-08 10:25:55.609 2 WARNING nova.virt.libvirt.driver [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  8 06:25:55 np0005475493 nova_compute[262220]: 2025-10-08 10:25:55.610 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4470MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  8 06:25:55 np0005475493 nova_compute[262220]: 2025-10-08 10:25:55.610 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  8 06:25:55 np0005475493 nova_compute[262220]: 2025-10-08 10:25:55.610 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  8 06:25:55 np0005475493 nova_compute[262220]: 2025-10-08 10:25:55.698 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  8 06:25:55 np0005475493 nova_compute[262220]: 2025-10-08 10:25:55.698 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  8 06:25:55 np0005475493 nova_compute[262220]: 2025-10-08 10:25:55.725 2 DEBUG oslo_concurrency.processutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  8 06:25:55 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:25:55] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Oct  8 06:25:55 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:25:55] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Oct  8 06:25:56 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1244: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 9.0 KiB/s rd, 0 B/s wr, 14 op/s
Oct  8 06:25:56 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct  8 06:25:56 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1399501425' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  8 06:25:56 np0005475493 nova_compute[262220]: 2025-10-08 10:25:56.233 2 DEBUG oslo_concurrency.processutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.508s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  8 06:25:56 np0005475493 nova_compute[262220]: 2025-10-08 10:25:56.239 2 DEBUG nova.compute.provider_tree [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Inventory has not changed in ProviderTree for provider: 62e4b021-d3ae-43f9-883d-805e2c7d21a2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  8 06:25:56 np0005475493 nova_compute[262220]: 2025-10-08 10:25:56.261 2 DEBUG nova.scheduler.client.report [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Inventory has not changed for provider 62e4b021-d3ae-43f9-883d-805e2c7d21a2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  8 06:25:56 np0005475493 nova_compute[262220]: 2025-10-08 10:25:56.262 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  8 06:25:56 np0005475493 nova_compute[262220]: 2025-10-08 10:25:56.262 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.652s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  8 06:25:56 np0005475493 nova_compute[262220]: 2025-10-08 10:25:56.296 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:25:56 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:25:56 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:25:56 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:25:56.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:25:57 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:25:57.240Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:25:57 np0005475493 nova_compute[262220]: 2025-10-08 10:25:57.263 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:25:57 np0005475493 nova_compute[262220]: 2025-10-08 10:25:57.263 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:25:57 np0005475493 nova_compute[262220]: 2025-10-08 10:25:57.263 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:25:57 np0005475493 nova_compute[262220]: 2025-10-08 10:25:57.263 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:25:57 np0005475493 nova_compute[262220]: 2025-10-08 10:25:57.264 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  8 06:25:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:25:57.424 163175 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  8 06:25:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:25:57.424 163175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  8 06:25:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:25:57.424 163175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  8 06:25:57 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:25:57 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:25:57 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:25:57.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:25:58 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1245: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 9.0 KiB/s rd, 0 B/s wr, 14 op/s
Oct  8 06:25:58 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:25:58 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:25:58 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:25:58.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:25:58 np0005475493 nova_compute[262220]: 2025-10-08 10:25:58.796 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:25:58 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:25:58.885Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:25:59 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:25:58 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  8 06:25:59 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:25:58 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  8 06:25:59 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:25:58 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 06:25:59 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:25:59 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  8 06:25:59 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:25:59 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:25:59 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:25:59 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:25:59.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:26:00 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1246: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 109 KiB/s rd, 0 B/s wr, 180 op/s
Oct  8 06:26:00 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:26:00 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 06:26:00 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:26:00.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 06:26:01 np0005475493 podman[296351]: 2025-10-08 10:26:01.009217295 +0000 UTC m=+0.164716647 container health_status 750e81e9592502f2fa35d047c8ae9bd75eb38ed415148db558454f5281397e7f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  8 06:26:01 np0005475493 nova_compute[262220]: 2025-10-08 10:26:01.298 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:26:01 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:26:01 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:26:01 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:26:01.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:26:02 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1247: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 108 KiB/s rd, 0 B/s wr, 179 op/s
Oct  8 06:26:02 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:26:02 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:26:02 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:26:02.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:26:02 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  8 06:26:02 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  8 06:26:03 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:26:03 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:26:03 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:26:03.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:26:03 np0005475493 nova_compute[262220]: 2025-10-08 10:26:03.798 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:26:03 np0005475493 podman[296385]: 2025-10-08 10:26:03.891970197 +0000 UTC m=+0.051757499 container health_status 96c17e36c3e57b043a764fc4d64e20610b8683e4881cc1857643eca7aeb37784 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct  8 06:26:03 np0005475493 podman[296384]: 2025-10-08 10:26:03.898433925 +0000 UTC m=+0.060748178 container health_status 1ece418ccdf19a8019e8be8e665c938dc3c333817a211973536e2416f095c311 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct  8 06:26:04 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:26:03 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  8 06:26:04 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:26:03 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  8 06:26:04 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:26:03 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 06:26:04 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:26:04 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  8 06:26:04 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1248: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 109 KiB/s rd, 0 B/s wr, 180 op/s
Oct  8 06:26:04 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:26:04 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:26:04 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:26:04 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:26:04.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:26:04 np0005475493 nova_compute[262220]: 2025-10-08 10:26:04.886 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:26:05 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:26:05 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 06:26:05 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:26:05.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 06:26:05 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:26:05] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Oct  8 06:26:05 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:26:05] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Oct  8 06:26:06 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1249: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 100 KiB/s rd, 0 B/s wr, 166 op/s
Oct  8 06:26:06 np0005475493 nova_compute[262220]: 2025-10-08 10:26:06.301 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:26:06 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:26:06 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:26:06 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:26:06.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:26:07 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:26:07.240Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct  8 06:26:07 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:26:07.241Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:26:07 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:26:07 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:26:07 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:26:07.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:26:08 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1250: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 100 KiB/s rd, 0 B/s wr, 166 op/s
Oct  8 06:26:08 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:26:08 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:26:08 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:26:08.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:26:08 np0005475493 nova_compute[262220]: 2025-10-08 10:26:08.801 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:26:08 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:26:08.886Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:26:09 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:26:08 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  8 06:26:09 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:26:08 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  8 06:26:09 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:26:08 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 06:26:09 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:26:09 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  8 06:26:09 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:26:09 np0005475493 ceph-mon[73572]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #75. Immutable memtables: 0.
Oct  8 06:26:09 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:26:09.455223) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  8 06:26:09 np0005475493 ceph-mon[73572]: rocksdb: [db/flush_job.cc:856] [default] [JOB 41] Flushing memtable with next log file: 75
Oct  8 06:26:09 np0005475493 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759919169455270, "job": 41, "event": "flush_started", "num_memtables": 1, "num_entries": 991, "num_deletes": 251, "total_data_size": 1639369, "memory_usage": 1672104, "flush_reason": "Manual Compaction"}
Oct  8 06:26:09 np0005475493 ceph-mon[73572]: rocksdb: [db/flush_job.cc:885] [default] [JOB 41] Level-0 flush table #76: started
Oct  8 06:26:09 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:26:09 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:26:09 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:26:09.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:26:09 np0005475493 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759919169465994, "cf_name": "default", "job": 41, "event": "table_file_creation", "file_number": 76, "file_size": 1031799, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 34631, "largest_seqno": 35621, "table_properties": {"data_size": 1027887, "index_size": 1564, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1285, "raw_key_size": 10583, "raw_average_key_size": 20, "raw_value_size": 1019376, "raw_average_value_size": 2018, "num_data_blocks": 67, "num_entries": 505, "num_filter_entries": 505, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759919085, "oldest_key_time": 1759919085, "file_creation_time": 1759919169, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fe81d9b-468a-4413-adf1-4e4bd83383d4", "db_session_id": "KN4HYS7VUCE6V85JIQOU", "orig_file_number": 76, "seqno_to_time_mapping": "N/A"}}
Oct  8 06:26:09 np0005475493 ceph-mon[73572]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 41] Flush lasted 10819 microseconds, and 3265 cpu microseconds.
Oct  8 06:26:09 np0005475493 ceph-mon[73572]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  8 06:26:09 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:26:09.466051) [db/flush_job.cc:967] [default] [JOB 41] Level-0 flush table #76: 1031799 bytes OK
Oct  8 06:26:09 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:26:09.466065) [db/memtable_list.cc:519] [default] Level-0 commit table #76 started
Oct  8 06:26:09 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:26:09.468887) [db/memtable_list.cc:722] [default] Level-0 commit table #76: memtable #1 done
Oct  8 06:26:09 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:26:09.468898) EVENT_LOG_v1 {"time_micros": 1759919169468894, "job": 41, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  8 06:26:09 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:26:09.468915) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  8 06:26:09 np0005475493 ceph-mon[73572]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 41] Try to delete WAL files size 1634785, prev total WAL file size 1635490, number of live WAL files 2.
Oct  8 06:26:09 np0005475493 ceph-mon[73572]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000072.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  8 06:26:09 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:26:09.469548) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031303034' seq:72057594037927935, type:22 .. '6D6772737461740031323536' seq:0, type:0; will stop at (end)
Oct  8 06:26:09 np0005475493 ceph-mon[73572]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 42] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  8 06:26:09 np0005475493 ceph-mon[73572]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 41 Base level 0, inputs: [76(1007KB)], [74(13MB)]
Oct  8 06:26:09 np0005475493 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759919169469615, "job": 42, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [76], "files_L6": [74], "score": -1, "input_data_size": 15709490, "oldest_snapshot_seqno": -1}
Oct  8 06:26:09 np0005475493 ceph-mon[73572]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 42] Generated table #77: 6522 keys, 12181676 bytes, temperature: kUnknown
Oct  8 06:26:09 np0005475493 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759919169544203, "cf_name": "default", "job": 42, "event": "table_file_creation", "file_number": 77, "file_size": 12181676, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12141286, "index_size": 23000, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16325, "raw_key_size": 171606, "raw_average_key_size": 26, "raw_value_size": 12026902, "raw_average_value_size": 1844, "num_data_blocks": 900, "num_entries": 6522, "num_filter_entries": 6522, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759916581, "oldest_key_time": 0, "file_creation_time": 1759919169, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fe81d9b-468a-4413-adf1-4e4bd83383d4", "db_session_id": "KN4HYS7VUCE6V85JIQOU", "orig_file_number": 77, "seqno_to_time_mapping": "N/A"}}
Oct  8 06:26:09 np0005475493 ceph-mon[73572]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  8 06:26:09 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:26:09.544707) [db/compaction/compaction_job.cc:1663] [default] [JOB 42] Compacted 1@0 + 1@6 files to L6 => 12181676 bytes
Oct  8 06:26:09 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:26:09.547797) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 210.0 rd, 162.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.0, 14.0 +0.0 blob) out(11.6 +0.0 blob), read-write-amplify(27.0) write-amplify(11.8) OK, records in: 7004, records dropped: 482 output_compression: NoCompression
Oct  8 06:26:09 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:26:09.547824) EVENT_LOG_v1 {"time_micros": 1759919169547811, "job": 42, "event": "compaction_finished", "compaction_time_micros": 74801, "compaction_time_cpu_micros": 27021, "output_level": 6, "num_output_files": 1, "total_output_size": 12181676, "num_input_records": 7004, "num_output_records": 6522, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  8 06:26:09 np0005475493 ceph-mon[73572]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000076.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  8 06:26:09 np0005475493 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759919169548301, "job": 42, "event": "table_file_deletion", "file_number": 76}
Oct  8 06:26:09 np0005475493 ceph-mon[73572]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000074.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  8 06:26:09 np0005475493 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759919169551463, "job": 42, "event": "table_file_deletion", "file_number": 74}
Oct  8 06:26:09 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:26:09.469445) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  8 06:26:09 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:26:09.551609) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  8 06:26:09 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:26:09.551618) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  8 06:26:09 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:26:09.551620) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  8 06:26:09 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:26:09.551622) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  8 06:26:09 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:26:09.551624) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  8 06:26:10 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1251: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 101 KiB/s rd, 0 B/s wr, 166 op/s
Oct  8 06:26:10 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:26:10 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:26:10 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:26:10.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:26:11 np0005475493 nova_compute[262220]: 2025-10-08 10:26:11.304 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:26:11 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:26:11 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:26:11 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:26:11.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:26:12 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1252: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  8 06:26:12 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:26:12 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:26:12 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:26:12.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:26:13 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:26:13 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:26:13 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:26:13.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:26:13 np0005475493 nova_compute[262220]: 2025-10-08 10:26:13.804 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:26:14 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:26:13 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  8 06:26:14 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:26:13 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  8 06:26:14 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:26:13 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 06:26:14 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:26:14 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  8 06:26:14 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1253: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  8 06:26:14 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:26:14 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:26:14 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:26:14 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:26:14.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:26:15 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:26:15 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:26:15 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:26:15.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:26:15 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:26:15] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Oct  8 06:26:15 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:26:15] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Oct  8 06:26:16 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1254: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  8 06:26:16 np0005475493 nova_compute[262220]: 2025-10-08 10:26:16.307 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:26:16 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:26:16 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:26:16 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:26:16.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:26:17 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:26:17.242Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:26:17 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:26:17 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:26:17 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:26:17.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:26:17 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  8 06:26:17 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  8 06:26:17 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 06:26:17 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 06:26:18 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 06:26:18 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 06:26:18 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 06:26:18 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 06:26:18 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1255: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  8 06:26:18 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:26:18 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:26:18 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:26:18.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:26:18 np0005475493 nova_compute[262220]: 2025-10-08 10:26:18.806 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:26:18 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:26:18.887Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:26:19 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:26:19 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  8 06:26:19 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:26:19 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  8 06:26:19 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:26:19 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 06:26:19 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:26:19 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  8 06:26:19 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:26:19 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:26:19 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:26:19 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:26:19.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:26:20 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1256: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  8 06:26:20 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:26:20 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:26:20 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:26:20.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:26:21 np0005475493 nova_compute[262220]: 2025-10-08 10:26:21.310 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:26:21 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:26:21 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:26:21 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:26:21.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:26:21 np0005475493 podman[296474]: 2025-10-08 10:26:21.925914705 +0000 UTC m=+0.078602697 container health_status 2ac58bf9d2c7421f24478941b0f23af12cc30298ac84467db16bf52ecb1157c3 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=iscsid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  8 06:26:22 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1257: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  8 06:26:22 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:26:22 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:26:22 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:26:22.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:26:23 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:26:23 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 06:26:23 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:26:23.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 06:26:23 np0005475493 nova_compute[262220]: 2025-10-08 10:26:23.809 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:26:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:26:23 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  8 06:26:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:26:23 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  8 06:26:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:26:23 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 06:26:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:26:24 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  8 06:26:24 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1258: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  8 06:26:24 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:26:24 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:26:24 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:26:24 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:26:24.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:26:25 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:26:25 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:26:25 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:26:25.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:26:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:26:25] "GET /metrics HTTP/1.1" 200 48453 "" "Prometheus/2.51.0"
Oct  8 06:26:25 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:26:25] "GET /metrics HTTP/1.1" 200 48453 "" "Prometheus/2.51.0"
Oct  8 06:26:26 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1259: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  8 06:26:26 np0005475493 nova_compute[262220]: 2025-10-08 10:26:26.313 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:26:26 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:26:26 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 06:26:26 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:26:26.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 06:26:27 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:26:27.242Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:26:27 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:26:27 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:26:27 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:26:27.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:26:28 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1260: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  8 06:26:28 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:26:28 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:26:28 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:26:28.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:26:28 np0005475493 nova_compute[262220]: 2025-10-08 10:26:28.813 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:26:28 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:26:28.889Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:26:29 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:26:28 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  8 06:26:29 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:26:28 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  8 06:26:29 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:26:28 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 06:26:29 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:26:29 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  8 06:26:29 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:26:29 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:26:29 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 06:26:29 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:26:29.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 06:26:30 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1261: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  8 06:26:30 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:26:30 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:26:30 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:26:30.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:26:31 np0005475493 podman[296532]: 2025-10-08 10:26:31.220230056 +0000 UTC m=+0.127114229 container health_status 750e81e9592502f2fa35d047c8ae9bd75eb38ed415148db558454f5281397e7f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct  8 06:26:31 np0005475493 nova_compute[262220]: 2025-10-08 10:26:31.321 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:26:31 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:26:31 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:26:31 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:26:31.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:26:32 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1262: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  8 06:26:32 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:26:32 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:26:32 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:26:32.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:26:32 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  8 06:26:32 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  8 06:26:33 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:26:33 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:26:33 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:26:33.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:26:33 np0005475493 nova_compute[262220]: 2025-10-08 10:26:33.815 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:26:34 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:26:33 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  8 06:26:34 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:26:33 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  8 06:26:34 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:26:33 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 06:26:34 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:26:34 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  8 06:26:34 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1263: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  8 06:26:34 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:26:34 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:26:34 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:26:34 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:26:34.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:26:34 np0005475493 podman[296564]: 2025-10-08 10:26:34.893168675 +0000 UTC m=+0.047603644 container health_status 96c17e36c3e57b043a764fc4d64e20610b8683e4881cc1857643eca7aeb37784 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team)
Oct  8 06:26:34 np0005475493 podman[296563]: 2025-10-08 10:26:34.894946303 +0000 UTC m=+0.055094637 container health_status 1ece418ccdf19a8019e8be8e665c938dc3c333817a211973536e2416f095c311 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=multipathd, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  8 06:26:35 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:26:35 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:26:35 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:26:35.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:26:35 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:26:35] "GET /metrics HTTP/1.1" 200 48453 "" "Prometheus/2.51.0"
Oct  8 06:26:35 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:26:35] "GET /metrics HTTP/1.1" 200 48453 "" "Prometheus/2.51.0"
Oct  8 06:26:36 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1264: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  8 06:26:36 np0005475493 nova_compute[262220]: 2025-10-08 10:26:36.322 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:26:36 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:26:36 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:26:36 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:26:36.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:26:37 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:26:37.243Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:26:37 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:26:37 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:26:37 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:26:37.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:26:37 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  8 06:26:37 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  8 06:26:37 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct  8 06:26:37 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  8 06:26:37 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct  8 06:26:37 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1265: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Oct  8 06:26:37 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:26:37 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct  8 06:26:37 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:26:37 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct  8 06:26:37 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  8 06:26:37 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct  8 06:26:37 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  8 06:26:37 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  8 06:26:37 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  8 06:26:38 np0005475493 podman[296776]: 2025-10-08 10:26:38.393886226 +0000 UTC m=+0.046061234 container create e0dbdb7bd00942091dffbb3e6368da0accf856ea539f33dcca6d18ca72a3624d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_almeida, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct  8 06:26:38 np0005475493 systemd[1]: Started libpod-conmon-e0dbdb7bd00942091dffbb3e6368da0accf856ea539f33dcca6d18ca72a3624d.scope.
Oct  8 06:26:38 np0005475493 systemd[1]: Started libcrun container.
Oct  8 06:26:38 np0005475493 podman[296776]: 2025-10-08 10:26:38.371871742 +0000 UTC m=+0.024046771 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 06:26:38 np0005475493 podman[296776]: 2025-10-08 10:26:38.478875408 +0000 UTC m=+0.131050436 container init e0dbdb7bd00942091dffbb3e6368da0accf856ea539f33dcca6d18ca72a3624d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_almeida, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  8 06:26:38 np0005475493 podman[296776]: 2025-10-08 10:26:38.487372783 +0000 UTC m=+0.139547801 container start e0dbdb7bd00942091dffbb3e6368da0accf856ea539f33dcca6d18ca72a3624d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_almeida, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  8 06:26:38 np0005475493 podman[296776]: 2025-10-08 10:26:38.490526756 +0000 UTC m=+0.142701764 container attach e0dbdb7bd00942091dffbb3e6368da0accf856ea539f33dcca6d18ca72a3624d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_almeida, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  8 06:26:38 np0005475493 hardcore_almeida[296792]: 167 167
Oct  8 06:26:38 np0005475493 systemd[1]: libpod-e0dbdb7bd00942091dffbb3e6368da0accf856ea539f33dcca6d18ca72a3624d.scope: Deactivated successfully.
Oct  8 06:26:38 np0005475493 conmon[296792]: conmon e0dbdb7bd00942091dff <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e0dbdb7bd00942091dffbb3e6368da0accf856ea539f33dcca6d18ca72a3624d.scope/container/memory.events
Oct  8 06:26:38 np0005475493 podman[296776]: 2025-10-08 10:26:38.495115265 +0000 UTC m=+0.147290273 container died e0dbdb7bd00942091dffbb3e6368da0accf856ea539f33dcca6d18ca72a3624d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_almeida, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Oct  8 06:26:38 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  8 06:26:38 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:26:38 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:26:38 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  8 06:26:38 np0005475493 systemd[1]: var-lib-containers-storage-overlay-4b5a44ff2ca548ca1c1ca953c92e87836ad16d3b98ce4a3edd6139245d0460eb-merged.mount: Deactivated successfully.
Oct  8 06:26:38 np0005475493 podman[296776]: 2025-10-08 10:26:38.550460478 +0000 UTC m=+0.202635526 container remove e0dbdb7bd00942091dffbb3e6368da0accf856ea539f33dcca6d18ca72a3624d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_almeida, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  8 06:26:38 np0005475493 systemd[1]: libpod-conmon-e0dbdb7bd00942091dffbb3e6368da0accf856ea539f33dcca6d18ca72a3624d.scope: Deactivated successfully.
Oct  8 06:26:38 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:26:38 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:26:38 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:26:38.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:26:38 np0005475493 podman[296816]: 2025-10-08 10:26:38.739353127 +0000 UTC m=+0.047636585 container create 86fe57aed5fe6185e2ac8eab6ce9b9a104dd429adfee3241794a3506d8804a90 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_cray, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  8 06:26:38 np0005475493 systemd[1]: Started libpod-conmon-86fe57aed5fe6185e2ac8eab6ce9b9a104dd429adfee3241794a3506d8804a90.scope.
Oct  8 06:26:38 np0005475493 podman[296816]: 2025-10-08 10:26:38.715865286 +0000 UTC m=+0.024148774 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 06:26:38 np0005475493 systemd[1]: Started libcrun container.
Oct  8 06:26:38 np0005475493 nova_compute[262220]: 2025-10-08 10:26:38.817 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:26:38 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf9dd4ca006d99d6c011cebc01515902a2671a893ef9d23f382f06be2a265085/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  8 06:26:38 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf9dd4ca006d99d6c011cebc01515902a2671a893ef9d23f382f06be2a265085/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 06:26:38 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf9dd4ca006d99d6c011cebc01515902a2671a893ef9d23f382f06be2a265085/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 06:26:38 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf9dd4ca006d99d6c011cebc01515902a2671a893ef9d23f382f06be2a265085/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  8 06:26:38 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf9dd4ca006d99d6c011cebc01515902a2671a893ef9d23f382f06be2a265085/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  8 06:26:38 np0005475493 podman[296816]: 2025-10-08 10:26:38.836102892 +0000 UTC m=+0.144386380 container init 86fe57aed5fe6185e2ac8eab6ce9b9a104dd429adfee3241794a3506d8804a90 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_cray, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  8 06:26:38 np0005475493 podman[296816]: 2025-10-08 10:26:38.846315722 +0000 UTC m=+0.154599190 container start 86fe57aed5fe6185e2ac8eab6ce9b9a104dd429adfee3241794a3506d8804a90 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_cray, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True)
Oct  8 06:26:38 np0005475493 podman[296816]: 2025-10-08 10:26:38.850817799 +0000 UTC m=+0.159101377 container attach 86fe57aed5fe6185e2ac8eab6ce9b9a104dd429adfee3241794a3506d8804a90 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_cray, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  8 06:26:38 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:26:38.890Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:26:39 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:26:38 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  8 06:26:39 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:26:38 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  8 06:26:39 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:26:38 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 06:26:39 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:26:39 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  8 06:26:39 np0005475493 infallible_cray[296832]: --> passed data devices: 0 physical, 1 LVM
Oct  8 06:26:39 np0005475493 infallible_cray[296832]: --> All data devices are unavailable
Oct  8 06:26:39 np0005475493 systemd[1]: libpod-86fe57aed5fe6185e2ac8eab6ce9b9a104dd429adfee3241794a3506d8804a90.scope: Deactivated successfully.
Oct  8 06:26:39 np0005475493 conmon[296832]: conmon 86fe57aed5fe6185e2ac <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-86fe57aed5fe6185e2ac8eab6ce9b9a104dd429adfee3241794a3506d8804a90.scope/container/memory.events
Oct  8 06:26:39 np0005475493 podman[296816]: 2025-10-08 10:26:39.172939603 +0000 UTC m=+0.481223091 container died 86fe57aed5fe6185e2ac8eab6ce9b9a104dd429adfee3241794a3506d8804a90 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_cray, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  8 06:26:39 np0005475493 systemd[1]: var-lib-containers-storage-overlay-bf9dd4ca006d99d6c011cebc01515902a2671a893ef9d23f382f06be2a265085-merged.mount: Deactivated successfully.
Oct  8 06:26:39 np0005475493 podman[296816]: 2025-10-08 10:26:39.21544026 +0000 UTC m=+0.523723718 container remove 86fe57aed5fe6185e2ac8eab6ce9b9a104dd429adfee3241794a3506d8804a90 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_cray, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  8 06:26:39 np0005475493 systemd[1]: libpod-conmon-86fe57aed5fe6185e2ac8eab6ce9b9a104dd429adfee3241794a3506d8804a90.scope: Deactivated successfully.
Oct  8 06:26:39 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:26:39 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:26:39 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:26:39 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:26:39.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:26:39 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1266: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 1 op/s
Oct  8 06:26:39 np0005475493 podman[296950]: 2025-10-08 10:26:39.82599754 +0000 UTC m=+0.054843687 container create e8a8d8e5c5361e83463db114c1a1a3ee29079a9132ba361faff6d628d3fcc2b3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_greider, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  8 06:26:39 np0005475493 systemd[1]: Started libpod-conmon-e8a8d8e5c5361e83463db114c1a1a3ee29079a9132ba361faff6d628d3fcc2b3.scope.
Oct  8 06:26:39 np0005475493 systemd[1]: Started libcrun container.
Oct  8 06:26:39 np0005475493 podman[296950]: 2025-10-08 10:26:39.801142785 +0000 UTC m=+0.029989022 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 06:26:39 np0005475493 podman[296950]: 2025-10-08 10:26:39.899562773 +0000 UTC m=+0.128408930 container init e8a8d8e5c5361e83463db114c1a1a3ee29079a9132ba361faff6d628d3fcc2b3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_greider, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct  8 06:26:39 np0005475493 podman[296950]: 2025-10-08 10:26:39.911028145 +0000 UTC m=+0.139874292 container start e8a8d8e5c5361e83463db114c1a1a3ee29079a9132ba361faff6d628d3fcc2b3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_greider, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  8 06:26:39 np0005475493 podman[296950]: 2025-10-08 10:26:39.914707944 +0000 UTC m=+0.143554091 container attach e8a8d8e5c5361e83463db114c1a1a3ee29079a9132ba361faff6d628d3fcc2b3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_greider, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  8 06:26:39 np0005475493 nervous_greider[296966]: 167 167
Oct  8 06:26:39 np0005475493 systemd[1]: libpod-e8a8d8e5c5361e83463db114c1a1a3ee29079a9132ba361faff6d628d3fcc2b3.scope: Deactivated successfully.
Oct  8 06:26:39 np0005475493 podman[296950]: 2025-10-08 10:26:39.918202197 +0000 UTC m=+0.147048354 container died e8a8d8e5c5361e83463db114c1a1a3ee29079a9132ba361faff6d628d3fcc2b3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_greider, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  8 06:26:39 np0005475493 systemd[1]: var-lib-containers-storage-overlay-27dd9aecf00198f77429f9757298b210fd5f84c26b9d4929d59ee8189abd6859-merged.mount: Deactivated successfully.
Oct  8 06:26:39 np0005475493 podman[296950]: 2025-10-08 10:26:39.96306878 +0000 UTC m=+0.191914937 container remove e8a8d8e5c5361e83463db114c1a1a3ee29079a9132ba361faff6d628d3fcc2b3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_greider, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True)
Oct  8 06:26:39 np0005475493 systemd[1]: libpod-conmon-e8a8d8e5c5361e83463db114c1a1a3ee29079a9132ba361faff6d628d3fcc2b3.scope: Deactivated successfully.
Oct  8 06:26:40 np0005475493 podman[296991]: 2025-10-08 10:26:40.110885399 +0000 UTC m=+0.039056506 container create 556e41c5d265f1c98abdb7e209e26defd52f7273edad1dfc1b0490a2b539ce7b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_meninsky, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  8 06:26:40 np0005475493 systemd[1]: Started libpod-conmon-556e41c5d265f1c98abdb7e209e26defd52f7273edad1dfc1b0490a2b539ce7b.scope.
Oct  8 06:26:40 np0005475493 podman[296991]: 2025-10-08 10:26:40.094837829 +0000 UTC m=+0.023008956 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 06:26:40 np0005475493 systemd[1]: Started libcrun container.
Oct  8 06:26:40 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5bbaa3c63a6f692f0b6aaec135085a5cda3b368cca0ccc65e8a0b879dc9a5848/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  8 06:26:40 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5bbaa3c63a6f692f0b6aaec135085a5cda3b368cca0ccc65e8a0b879dc9a5848/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 06:26:40 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5bbaa3c63a6f692f0b6aaec135085a5cda3b368cca0ccc65e8a0b879dc9a5848/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 06:26:40 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5bbaa3c63a6f692f0b6aaec135085a5cda3b368cca0ccc65e8a0b879dc9a5848/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  8 06:26:40 np0005475493 podman[296991]: 2025-10-08 10:26:40.22944202 +0000 UTC m=+0.157613197 container init 556e41c5d265f1c98abdb7e209e26defd52f7273edad1dfc1b0490a2b539ce7b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_meninsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  8 06:26:40 np0005475493 podman[296991]: 2025-10-08 10:26:40.240143117 +0000 UTC m=+0.168314264 container start 556e41c5d265f1c98abdb7e209e26defd52f7273edad1dfc1b0490a2b539ce7b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_meninsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Oct  8 06:26:40 np0005475493 podman[296991]: 2025-10-08 10:26:40.244346783 +0000 UTC m=+0.172517920 container attach 556e41c5d265f1c98abdb7e209e26defd52f7273edad1dfc1b0490a2b539ce7b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_meninsky, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  8 06:26:40 np0005475493 vigilant_meninsky[297007]: {
Oct  8 06:26:40 np0005475493 vigilant_meninsky[297007]:    "1": [
Oct  8 06:26:40 np0005475493 vigilant_meninsky[297007]:        {
Oct  8 06:26:40 np0005475493 vigilant_meninsky[297007]:            "devices": [
Oct  8 06:26:40 np0005475493 vigilant_meninsky[297007]:                "/dev/loop3"
Oct  8 06:26:40 np0005475493 vigilant_meninsky[297007]:            ],
Oct  8 06:26:40 np0005475493 vigilant_meninsky[297007]:            "lv_name": "ceph_lv0",
Oct  8 06:26:40 np0005475493 vigilant_meninsky[297007]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  8 06:26:40 np0005475493 vigilant_meninsky[297007]:            "lv_size": "21470642176",
Oct  8 06:26:40 np0005475493 vigilant_meninsky[297007]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=787292cc-8154-50c4-9e00-e9be3e817149,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=85fe3e7b-5e0f-4a19-934c-310215b2e933,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct  8 06:26:40 np0005475493 vigilant_meninsky[297007]:            "lv_uuid": "znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj",
Oct  8 06:26:40 np0005475493 vigilant_meninsky[297007]:            "name": "ceph_lv0",
Oct  8 06:26:40 np0005475493 vigilant_meninsky[297007]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  8 06:26:40 np0005475493 vigilant_meninsky[297007]:            "tags": {
Oct  8 06:26:40 np0005475493 vigilant_meninsky[297007]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  8 06:26:40 np0005475493 vigilant_meninsky[297007]:                "ceph.block_uuid": "znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj",
Oct  8 06:26:40 np0005475493 vigilant_meninsky[297007]:                "ceph.cephx_lockbox_secret": "",
Oct  8 06:26:40 np0005475493 vigilant_meninsky[297007]:                "ceph.cluster_fsid": "787292cc-8154-50c4-9e00-e9be3e817149",
Oct  8 06:26:40 np0005475493 vigilant_meninsky[297007]:                "ceph.cluster_name": "ceph",
Oct  8 06:26:40 np0005475493 vigilant_meninsky[297007]:                "ceph.crush_device_class": "",
Oct  8 06:26:40 np0005475493 vigilant_meninsky[297007]:                "ceph.encrypted": "0",
Oct  8 06:26:40 np0005475493 vigilant_meninsky[297007]:                "ceph.osd_fsid": "85fe3e7b-5e0f-4a19-934c-310215b2e933",
Oct  8 06:26:40 np0005475493 vigilant_meninsky[297007]:                "ceph.osd_id": "1",
Oct  8 06:26:40 np0005475493 vigilant_meninsky[297007]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  8 06:26:40 np0005475493 vigilant_meninsky[297007]:                "ceph.type": "block",
Oct  8 06:26:40 np0005475493 vigilant_meninsky[297007]:                "ceph.vdo": "0",
Oct  8 06:26:40 np0005475493 vigilant_meninsky[297007]:                "ceph.with_tpm": "0"
Oct  8 06:26:40 np0005475493 vigilant_meninsky[297007]:            },
Oct  8 06:26:40 np0005475493 vigilant_meninsky[297007]:            "type": "block",
Oct  8 06:26:40 np0005475493 vigilant_meninsky[297007]:            "vg_name": "ceph_vg0"
Oct  8 06:26:40 np0005475493 vigilant_meninsky[297007]:        }
Oct  8 06:26:40 np0005475493 vigilant_meninsky[297007]:    ]
Oct  8 06:26:40 np0005475493 vigilant_meninsky[297007]: }
Oct  8 06:26:40 np0005475493 systemd[1]: libpod-556e41c5d265f1c98abdb7e209e26defd52f7273edad1dfc1b0490a2b539ce7b.scope: Deactivated successfully.
Oct  8 06:26:40 np0005475493 podman[296991]: 2025-10-08 10:26:40.528531999 +0000 UTC m=+0.456703106 container died 556e41c5d265f1c98abdb7e209e26defd52f7273edad1dfc1b0490a2b539ce7b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_meninsky, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  8 06:26:40 np0005475493 systemd[1]: var-lib-containers-storage-overlay-5bbaa3c63a6f692f0b6aaec135085a5cda3b368cca0ccc65e8a0b879dc9a5848-merged.mount: Deactivated successfully.
Oct  8 06:26:40 np0005475493 podman[296991]: 2025-10-08 10:26:40.572083831 +0000 UTC m=+0.500254948 container remove 556e41c5d265f1c98abdb7e209e26defd52f7273edad1dfc1b0490a2b539ce7b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_meninsky, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  8 06:26:40 np0005475493 systemd[1]: libpod-conmon-556e41c5d265f1c98abdb7e209e26defd52f7273edad1dfc1b0490a2b539ce7b.scope: Deactivated successfully.
Oct  8 06:26:40 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:26:40 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:26:40 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:26:40.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:26:41 np0005475493 podman[297119]: 2025-10-08 10:26:41.193161941 +0000 UTC m=+0.069729780 container create 60903b22f54e3c46a4bb069cd8ce742209b0972a4b6f5500f2ee091e3d5b30bf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_elion, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  8 06:26:41 np0005475493 podman[297119]: 2025-10-08 10:26:41.144332049 +0000 UTC m=+0.020899868 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 06:26:41 np0005475493 systemd[1]: Started libpod-conmon-60903b22f54e3c46a4bb069cd8ce742209b0972a4b6f5500f2ee091e3d5b30bf.scope.
Oct  8 06:26:41 np0005475493 systemd[1]: Started libcrun container.
Oct  8 06:26:41 np0005475493 podman[297119]: 2025-10-08 10:26:41.281331307 +0000 UTC m=+0.157899116 container init 60903b22f54e3c46a4bb069cd8ce742209b0972a4b6f5500f2ee091e3d5b30bf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_elion, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct  8 06:26:41 np0005475493 podman[297119]: 2025-10-08 10:26:41.292304392 +0000 UTC m=+0.168872191 container start 60903b22f54e3c46a4bb069cd8ce742209b0972a4b6f5500f2ee091e3d5b30bf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_elion, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid)
Oct  8 06:26:41 np0005475493 podman[297119]: 2025-10-08 10:26:41.295395493 +0000 UTC m=+0.171963372 container attach 60903b22f54e3c46a4bb069cd8ce742209b0972a4b6f5500f2ee091e3d5b30bf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_elion, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True)
Oct  8 06:26:41 np0005475493 trusting_elion[297137]: 167 167
Oct  8 06:26:41 np0005475493 systemd[1]: libpod-60903b22f54e3c46a4bb069cd8ce742209b0972a4b6f5500f2ee091e3d5b30bf.scope: Deactivated successfully.
Oct  8 06:26:41 np0005475493 conmon[297137]: conmon 60903b22f54e3c46a4bb <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-60903b22f54e3c46a4bb069cd8ce742209b0972a4b6f5500f2ee091e3d5b30bf.scope/container/memory.events
Oct  8 06:26:41 np0005475493 podman[297119]: 2025-10-08 10:26:41.298352348 +0000 UTC m=+0.174920147 container died 60903b22f54e3c46a4bb069cd8ce742209b0972a4b6f5500f2ee091e3d5b30bf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_elion, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325)
Oct  8 06:26:41 np0005475493 systemd[1]: var-lib-containers-storage-overlay-20aeb0da276dbddbd0e8626434fa93a44012a47d079b818e3359a2209a0be493-merged.mount: Deactivated successfully.
Oct  8 06:26:41 np0005475493 nova_compute[262220]: 2025-10-08 10:26:41.323 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:26:41 np0005475493 podman[297119]: 2025-10-08 10:26:41.338187249 +0000 UTC m=+0.214755048 container remove 60903b22f54e3c46a4bb069cd8ce742209b0972a4b6f5500f2ee091e3d5b30bf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_elion, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct  8 06:26:41 np0005475493 systemd[1]: libpod-conmon-60903b22f54e3c46a4bb069cd8ce742209b0972a4b6f5500f2ee091e3d5b30bf.scope: Deactivated successfully.
Oct  8 06:26:41 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:26:41 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:26:41 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:26:41.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:26:41 np0005475493 podman[297161]: 2025-10-08 10:26:41.534895782 +0000 UTC m=+0.061120721 container create 3b9a5085d2d69327c54b1072a76209b0edcdfb819fb4a6a1d1018f9b6090dbbb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_ritchie, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  8 06:26:41 np0005475493 systemd[1]: Started libpod-conmon-3b9a5085d2d69327c54b1072a76209b0edcdfb819fb4a6a1d1018f9b6090dbbb.scope.
Oct  8 06:26:41 np0005475493 systemd[1]: Started libcrun container.
Oct  8 06:26:41 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de3d46992a61eea6338f48c53ed6152f0af869b81a26e841d884b358650c7698/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  8 06:26:41 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de3d46992a61eea6338f48c53ed6152f0af869b81a26e841d884b358650c7698/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 06:26:41 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de3d46992a61eea6338f48c53ed6152f0af869b81a26e841d884b358650c7698/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 06:26:41 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de3d46992a61eea6338f48c53ed6152f0af869b81a26e841d884b358650c7698/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  8 06:26:41 np0005475493 podman[297161]: 2025-10-08 10:26:41.513835429 +0000 UTC m=+0.040060398 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 06:26:41 np0005475493 podman[297161]: 2025-10-08 10:26:41.61819111 +0000 UTC m=+0.144416049 container init 3b9a5085d2d69327c54b1072a76209b0edcdfb819fb4a6a1d1018f9b6090dbbb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_ritchie, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  8 06:26:41 np0005475493 podman[297161]: 2025-10-08 10:26:41.623575665 +0000 UTC m=+0.149800584 container start 3b9a5085d2d69327c54b1072a76209b0edcdfb819fb4a6a1d1018f9b6090dbbb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_ritchie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  8 06:26:41 np0005475493 podman[297161]: 2025-10-08 10:26:41.627047167 +0000 UTC m=+0.153272096 container attach 3b9a5085d2d69327c54b1072a76209b0edcdfb819fb4a6a1d1018f9b6090dbbb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_ritchie, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  8 06:26:41 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1267: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Oct  8 06:26:42 np0005475493 lvm[297253]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct  8 06:26:42 np0005475493 lvm[297253]: VG ceph_vg0 finished
Oct  8 06:26:42 np0005475493 wonderful_ritchie[297178]: {}
Oct  8 06:26:42 np0005475493 systemd[1]: libpod-3b9a5085d2d69327c54b1072a76209b0edcdfb819fb4a6a1d1018f9b6090dbbb.scope: Deactivated successfully.
Oct  8 06:26:42 np0005475493 systemd[1]: libpod-3b9a5085d2d69327c54b1072a76209b0edcdfb819fb4a6a1d1018f9b6090dbbb.scope: Consumed 1.103s CPU time.
Oct  8 06:26:42 np0005475493 podman[297161]: 2025-10-08 10:26:42.377299523 +0000 UTC m=+0.903524442 container died 3b9a5085d2d69327c54b1072a76209b0edcdfb819fb4a6a1d1018f9b6090dbbb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_ritchie, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Oct  8 06:26:42 np0005475493 systemd[1]: var-lib-containers-storage-overlay-de3d46992a61eea6338f48c53ed6152f0af869b81a26e841d884b358650c7698-merged.mount: Deactivated successfully.
Oct  8 06:26:42 np0005475493 podman[297161]: 2025-10-08 10:26:42.43863959 +0000 UTC m=+0.964864549 container remove 3b9a5085d2d69327c54b1072a76209b0edcdfb819fb4a6a1d1018f9b6090dbbb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_ritchie, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Oct  8 06:26:42 np0005475493 systemd[1]: libpod-conmon-3b9a5085d2d69327c54b1072a76209b0edcdfb819fb4a6a1d1018f9b6090dbbb.scope: Deactivated successfully.
Oct  8 06:26:42 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  8 06:26:42 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:26:42 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  8 06:26:42 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:26:42 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:26:42 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:26:42 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:26:42.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:26:43 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:26:43 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:26:43 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:26:43.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:26:43 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:26:43 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:26:43 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1268: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 1 op/s
Oct  8 06:26:43 np0005475493 nova_compute[262220]: 2025-10-08 10:26:43.819 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:26:44 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:26:43 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  8 06:26:44 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:26:43 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  8 06:26:44 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:26:43 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 06:26:44 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:26:44 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  8 06:26:44 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:26:44 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:26:44 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:26:44 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:26:44.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:26:45 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:26:45 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:26:45 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:26:45.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:26:45 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:26:45] "GET /metrics HTTP/1.1" 200 48453 "" "Prometheus/2.51.0"
Oct  8 06:26:45 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:26:45] "GET /metrics HTTP/1.1" 200 48453 "" "Prometheus/2.51.0"
Oct  8 06:26:45 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1269: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Oct  8 06:26:46 np0005475493 nova_compute[262220]: 2025-10-08 10:26:46.370 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:26:46 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:26:46 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:26:46 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:26:46.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:26:47 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:26:47.245Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:26:47 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:26:47 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:26:47 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:26:47.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:26:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] Optimize plan auto_2025-10-08_10:26:47
Oct  8 06:26:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  8 06:26:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] do_upmap
Oct  8 06:26:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] pools ['default.rgw.control', '.rgw.root', 'backups', 'cephfs.cephfs.meta', 'volumes', 'default.rgw.meta', 'cephfs.cephfs.data', 'default.rgw.log', '.mgr', 'images', 'vms', '.nfs']
Oct  8 06:26:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] prepared 0/10 upmap changes
Oct  8 06:26:47 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1270: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Oct  8 06:26:47 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  8 06:26:47 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  8 06:26:47 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 06:26:47 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 06:26:48 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 06:26:48 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 06:26:48 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 06:26:48 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 06:26:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] _maybe_adjust
Oct  8 06:26:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:26:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  8 06:26:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:26:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  8 06:26:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:26:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 06:26:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:26:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 06:26:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:26:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Oct  8 06:26:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:26:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  8 06:26:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:26:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 06:26:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:26:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct  8 06:26:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:26:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  8 06:26:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:26:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 06:26:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:26:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  8 06:26:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:26:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  8 06:26:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  8 06:26:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  8 06:26:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  8 06:26:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  8 06:26:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  8 06:26:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  8 06:26:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  8 06:26:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  8 06:26:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  8 06:26:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  8 06:26:48 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:26:48 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:26:48 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:26:48.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:26:48 np0005475493 nova_compute[262220]: 2025-10-08 10:26:48.820 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:26:48 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:26:48.891Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:26:49 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:26:48 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  8 06:26:49 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:26:48 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  8 06:26:49 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:26:48 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 06:26:49 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:26:49 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  8 06:26:49 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:26:49 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:26:49 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:26:49 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:26:49.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:26:49 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1271: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  8 06:26:50 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:26:50 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:26:50 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:26:50.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:26:51 np0005475493 nova_compute[262220]: 2025-10-08 10:26:51.372 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:26:51 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:26:51 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:26:51 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:26:51.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:26:51 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1272: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  8 06:26:52 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:26:52 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:26:52 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:26:52.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:26:52 np0005475493 nova_compute[262220]: 2025-10-08 10:26:52.886 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:26:52 np0005475493 nova_compute[262220]: 2025-10-08 10:26:52.887 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:26:52 np0005475493 podman[297333]: 2025-10-08 10:26:52.916806437 +0000 UTC m=+0.074785907 container health_status 2ac58bf9d2c7421f24478941b0f23af12cc30298ac84467db16bf52ecb1157c3 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  8 06:26:53 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:26:53 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:26:53 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:26:53.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:26:53 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1273: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  8 06:26:53 np0005475493 nova_compute[262220]: 2025-10-08 10:26:53.841 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:26:53 np0005475493 nova_compute[262220]: 2025-10-08 10:26:53.882 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:26:54 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:26:53 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  8 06:26:54 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:26:53 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  8 06:26:54 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:26:53 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 06:26:54 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:26:54 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  8 06:26:54 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:26:54 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:26:54 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:26:54 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:26:54.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:26:55 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:26:55 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:26:55 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:26:55.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:26:55 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:26:55] "GET /metrics HTTP/1.1" 200 48453 "" "Prometheus/2.51.0"
Oct  8 06:26:55 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:26:55] "GET /metrics HTTP/1.1" 200 48453 "" "Prometheus/2.51.0"
Oct  8 06:26:55 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1274: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  8 06:26:55 np0005475493 nova_compute[262220]: 2025-10-08 10:26:55.887 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:26:55 np0005475493 nova_compute[262220]: 2025-10-08 10:26:55.888 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  8 06:26:55 np0005475493 nova_compute[262220]: 2025-10-08 10:26:55.888 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  8 06:26:55 np0005475493 nova_compute[262220]: 2025-10-08 10:26:55.923 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  8 06:26:56 np0005475493 nova_compute[262220]: 2025-10-08 10:26:56.410 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:26:56 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:26:56 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:26:56 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:26:56.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:26:56 np0005475493 nova_compute[262220]: 2025-10-08 10:26:56.887 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:26:56 np0005475493 nova_compute[262220]: 2025-10-08 10:26:56.887 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:26:57 np0005475493 nova_compute[262220]: 2025-10-08 10:26:57.121 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  8 06:26:57 np0005475493 nova_compute[262220]: 2025-10-08 10:26:57.122 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  8 06:26:57 np0005475493 nova_compute[262220]: 2025-10-08 10:26:57.122 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  8 06:26:57 np0005475493 nova_compute[262220]: 2025-10-08 10:26:57.122 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  8 06:26:57 np0005475493 nova_compute[262220]: 2025-10-08 10:26:57.122 2 DEBUG oslo_concurrency.processutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  8 06:26:57 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:26:57.247Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:26:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:26:57.426 163175 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  8 06:26:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:26:57.426 163175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  8 06:26:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:26:57.426 163175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  8 06:26:57 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:26:57 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:26:57 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:26:57.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:26:57 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct  8 06:26:57 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4278330368' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  8 06:26:57 np0005475493 nova_compute[262220]: 2025-10-08 10:26:57.566 2 DEBUG oslo_concurrency.processutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  8 06:26:57 np0005475493 nova_compute[262220]: 2025-10-08 10:26:57.720 2 WARNING nova.virt.libvirt.driver [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  8 06:26:57 np0005475493 nova_compute[262220]: 2025-10-08 10:26:57.721 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4464MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  8 06:26:57 np0005475493 nova_compute[262220]: 2025-10-08 10:26:57.721 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  8 06:26:57 np0005475493 nova_compute[262220]: 2025-10-08 10:26:57.722 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  8 06:26:57 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1275: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  8 06:26:57 np0005475493 nova_compute[262220]: 2025-10-08 10:26:57.939 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  8 06:26:57 np0005475493 nova_compute[262220]: 2025-10-08 10:26:57.939 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  8 06:26:57 np0005475493 nova_compute[262220]: 2025-10-08 10:26:57.955 2 DEBUG oslo_concurrency.processutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  8 06:26:58 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct  8 06:26:58 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1826862654' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  8 06:26:58 np0005475493 nova_compute[262220]: 2025-10-08 10:26:58.397 2 DEBUG oslo_concurrency.processutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  8 06:26:58 np0005475493 nova_compute[262220]: 2025-10-08 10:26:58.403 2 DEBUG nova.compute.provider_tree [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Inventory has not changed in ProviderTree for provider: 62e4b021-d3ae-43f9-883d-805e2c7d21a2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  8 06:26:58 np0005475493 nova_compute[262220]: 2025-10-08 10:26:58.440 2 DEBUG nova.scheduler.client.report [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Inventory has not changed for provider 62e4b021-d3ae-43f9-883d-805e2c7d21a2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  8 06:26:58 np0005475493 nova_compute[262220]: 2025-10-08 10:26:58.443 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  8 06:26:58 np0005475493 nova_compute[262220]: 2025-10-08 10:26:58.443 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.722s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  8 06:26:58 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:26:58 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:26:58 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:26:58.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:26:58 np0005475493 nova_compute[262220]: 2025-10-08 10:26:58.844 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:26:58 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:26:58.891Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:26:59 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:26:58 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  8 06:26:59 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:26:58 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  8 06:26:59 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:26:58 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 06:26:59 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:26:59 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  8 06:26:59 np0005475493 nova_compute[262220]: 2025-10-08 10:26:59.440 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:26:59 np0005475493 nova_compute[262220]: 2025-10-08 10:26:59.441 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:26:59 np0005475493 nova_compute[262220]: 2025-10-08 10:26:59.441 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:26:59 np0005475493 nova_compute[262220]: 2025-10-08 10:26:59.441 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  8 06:26:59 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:26:59 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:26:59 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:26:59 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:26:59.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:26:59 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1276: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  8 06:27:00 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:27:00 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:27:00 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:27:00.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:27:01 np0005475493 nova_compute[262220]: 2025-10-08 10:27:01.413 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:27:01 np0005475493 podman[297412]: 2025-10-08 10:27:01.420359901 +0000 UTC m=+0.107669785 container health_status 750e81e9592502f2fa35d047c8ae9bd75eb38ed415148db558454f5281397e7f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.build-date=20251001)
Oct  8 06:27:01 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:27:01 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:27:01 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:27:01.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:27:01 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1277: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  8 06:27:02 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:27:02 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:27:02 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:27:02.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:27:02 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  8 06:27:02 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  8 06:27:03 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:27:03 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:27:03 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:27:03.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:27:03 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1278: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  8 06:27:03 np0005475493 nova_compute[262220]: 2025-10-08 10:27:03.846 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:27:04 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:27:03 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  8 06:27:04 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:27:03 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  8 06:27:04 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:27:03 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 06:27:04 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:27:04 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  8 06:27:04 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:27:04 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:27:04 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:27:04 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:27:04.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:27:05 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:27:05 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:27:05 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:27:05.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:27:05 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:27:05] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Oct  8 06:27:05 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:27:05] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Oct  8 06:27:05 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1279: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  8 06:27:05 np0005475493 nova_compute[262220]: 2025-10-08 10:27:05.886 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:27:05 np0005475493 podman[297445]: 2025-10-08 10:27:05.898297468 +0000 UTC m=+0.053172977 container health_status 96c17e36c3e57b043a764fc4d64e20610b8683e4881cc1857643eca7aeb37784 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct  8 06:27:05 np0005475493 podman[297444]: 2025-10-08 10:27:05.904851371 +0000 UTC m=+0.066919792 container health_status 1ece418ccdf19a8019e8be8e665c938dc3c333817a211973536e2416f095c311 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct  8 06:27:06 np0005475493 nova_compute[262220]: 2025-10-08 10:27:06.414 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:27:06 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:27:06 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:27:06 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:27:06.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:27:07 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:27:07.248Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:27:07 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:27:07 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:27:07 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:27:07.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:27:07 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1280: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  8 06:27:08 np0005475493 nova_compute[262220]: 2025-10-08 10:27:08.509 2 DEBUG oslo_concurrency.processutils [None req-24a520ff-12fb-4617-8845-d2e911b0cf17 1a472abd070641609b2c942b11b1118f 9bebada0871a4efa9df99c6beff34c13 - - default default] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  8 06:27:08 np0005475493 nova_compute[262220]: 2025-10-08 10:27:08.562 2 DEBUG oslo_concurrency.processutils [None req-24a520ff-12fb-4617-8845-d2e911b0cf17 1a472abd070641609b2c942b11b1118f 9bebada0871a4efa9df99c6beff34c13 - - default default] CMD "env LANG=C uptime" returned: 0 in 0.053s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  8 06:27:08 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:27:08 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:27:08 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:27:08.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:27:08 np0005475493 nova_compute[262220]: 2025-10-08 10:27:08.847 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:27:08 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:27:08.892Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct  8 06:27:08 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:27:08.893Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct  8 06:27:09 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:27:08 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  8 06:27:09 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:27:08 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  8 06:27:09 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:27:08 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 06:27:09 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:27:09 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  8 06:27:09 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:27:09 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:27:09 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:27:09 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:27:09.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:27:09 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1281: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  8 06:27:10 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:27:10 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:27:10 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:27:10.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:27:11 np0005475493 nova_compute[262220]: 2025-10-08 10:27:11.415 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:27:11 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:27:11 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:27:11 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:27:11.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:27:11 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1282: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  8 06:27:12 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:27:12 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:27:12 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:27:12.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:27:13 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:27:13 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:27:13 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:27:13.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:27:13 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1283: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  8 06:27:13 np0005475493 nova_compute[262220]: 2025-10-08 10:27:13.848 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:27:14 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:27:13 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  8 06:27:14 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:27:13 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  8 06:27:14 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:27:13 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 06:27:14 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:27:14 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  8 06:27:14 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:27:14 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:27:14 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:27:14 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:27:14.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:27:15 np0005475493 nova_compute[262220]: 2025-10-08 10:27:15.128 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:27:15 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:27:15.128 163175 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=15, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '2a:d6:ca', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'b2:8b:1e:40:84:a3'}, ipsec=False) old=SB_Global(nb_cfg=14) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  8 06:27:15 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:27:15.129 163175 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  8 06:27:15 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:27:15 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:27:15 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:27:15.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:27:15 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:27:15] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Oct  8 06:27:15 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:27:15] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Oct  8 06:27:15 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1284: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  8 06:27:16 np0005475493 nova_compute[262220]: 2025-10-08 10:27:16.417 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:27:16 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:27:16 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:27:16 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:27:16.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:27:17 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:27:17.249Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:27:17 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:27:17 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:27:17 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:27:17.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:27:17 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1285: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  8 06:27:17 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  8 06:27:17 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  8 06:27:17 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 06:27:17 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 06:27:18 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 06:27:18 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 06:27:18 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 06:27:18 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 06:27:18 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:27:18 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:27:18 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:27:18.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:27:18 np0005475493 nova_compute[262220]: 2025-10-08 10:27:18.850 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:27:18 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:27:18.894Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct  8 06:27:18 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:27:18.894Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:27:19 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:27:18 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  8 06:27:19 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:27:18 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  8 06:27:19 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:27:18 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 06:27:19 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:27:19 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  8 06:27:19 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:27:19 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:27:19 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:27:19 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:27:19.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:27:19 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1286: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  8 06:27:20 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:27:20.132 163175 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=26869918-b723-425c-a2e1-0d697f3d0fec, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '15'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  8 06:27:20 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:27:20 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:27:20 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:27:20.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:27:21 np0005475493 nova_compute[262220]: 2025-10-08 10:27:21.420 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:27:21 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:27:21 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 06:27:21 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:27:21.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 06:27:21 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1287: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  8 06:27:22 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:27:22 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:27:22 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:27:22.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:27:23 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:27:23 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:27:23 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:27:23.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:27:23 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1288: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  8 06:27:23 np0005475493 nova_compute[262220]: 2025-10-08 10:27:23.851 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:27:23 np0005475493 podman[297534]: 2025-10-08 10:27:23.897973597 +0000 UTC m=+0.054133428 container health_status 2ac58bf9d2c7421f24478941b0f23af12cc30298ac84467db16bf52ecb1157c3 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct  8 06:27:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:27:23 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  8 06:27:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:27:23 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  8 06:27:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:27:23 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 06:27:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:27:24 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  8 06:27:24 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:27:24 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:27:24 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:27:24 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:27:24.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:27:25 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:27:25 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:27:25 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:27:25.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:27:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:27:25] "GET /metrics HTTP/1.1" 200 48452 "" "Prometheus/2.51.0"
Oct  8 06:27:25 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:27:25] "GET /metrics HTTP/1.1" 200 48452 "" "Prometheus/2.51.0"
Oct  8 06:27:25 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1289: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  8 06:27:26 np0005475493 nova_compute[262220]: 2025-10-08 10:27:26.423 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:27:26 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:27:26 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:27:26 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:27:26.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:27:27 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:27:27.250Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct  8 06:27:27 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:27:27.250Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct  8 06:27:27 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:27:27.250Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct  8 06:27:27 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:27:27 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:27:27 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:27:27.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:27:27 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1290: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  8 06:27:28 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:27:28 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:27:28 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:27:28.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:27:28 np0005475493 nova_compute[262220]: 2025-10-08 10:27:28.886 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:27:28 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:27:28.895Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:27:29 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:27:28 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  8 06:27:29 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:27:28 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  8 06:27:29 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:27:28 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 06:27:29 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:27:29 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  8 06:27:29 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:27:29 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:27:29 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:27:29 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:27:29.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:27:29 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1291: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  8 06:27:30 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:27:30 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:27:30 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:27:30.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:27:31 np0005475493 nova_compute[262220]: 2025-10-08 10:27:31.424 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:27:31 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:27:31 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:27:31 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:27:31.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:27:31 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1292: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  8 06:27:31 np0005475493 podman[297587]: 2025-10-08 10:27:31.906214407 +0000 UTC m=+0.071223321 container health_status 750e81e9592502f2fa35d047c8ae9bd75eb38ed415148db558454f5281397e7f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct  8 06:27:32 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:27:32 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:27:32 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:27:32.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:27:32 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  8 06:27:32 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  8 06:27:33 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:27:33 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:27:33 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:27:33.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:27:33 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1293: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  8 06:27:33 np0005475493 nova_compute[262220]: 2025-10-08 10:27:33.929 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:27:34 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:27:33 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  8 06:27:34 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:27:33 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  8 06:27:34 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:27:33 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 06:27:34 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:27:34 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  8 06:27:34 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:27:34 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:27:34 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:27:34 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:27:34.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:27:35 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:27:35 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:27:35 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:27:35.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:27:35 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:27:35] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Oct  8 06:27:35 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:27:35] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Oct  8 06:27:35 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1294: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  8 06:27:36 np0005475493 podman[297619]: 2025-10-08 10:27:36.021968441 +0000 UTC m=+0.058502780 container health_status 1ece418ccdf19a8019e8be8e665c938dc3c333817a211973536e2416f095c311 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true)
Oct  8 06:27:36 np0005475493 podman[297620]: 2025-10-08 10:27:36.026408465 +0000 UTC m=+0.054695396 container health_status 96c17e36c3e57b043a764fc4d64e20610b8683e4881cc1857643eca7aeb37784 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent)
Oct  8 06:27:36 np0005475493 nova_compute[262220]: 2025-10-08 10:27:36.427 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:27:36 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:27:36 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:27:36 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:27:36.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:27:37 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:27:37.251Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:27:37 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:27:37 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:27:37 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:27:37.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:27:37 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1295: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  8 06:27:38 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:27:38 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:27:38 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:27:38.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:27:38 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:27:38.896Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct  8 06:27:38 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:27:38.896Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:27:38 np0005475493 nova_compute[262220]: 2025-10-08 10:27:38.930 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:27:39 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:27:38 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  8 06:27:39 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:27:38 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  8 06:27:39 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:27:38 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 06:27:39 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:27:39 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  8 06:27:39 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:27:39 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:27:39 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:27:39 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:27:39.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:27:39 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1296: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  8 06:27:40 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:27:40 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:27:40 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:27:40.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:27:41 np0005475493 nova_compute[262220]: 2025-10-08 10:27:41.430 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:27:41 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:27:41 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:27:41 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:27:41.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:27:41 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1297: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  8 06:27:42 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:27:42 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:27:42 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:27:42.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:27:43 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  8 06:27:43 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  8 06:27:43 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct  8 06:27:43 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  8 06:27:43 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:27:43 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:27:43 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:27:43.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:27:43 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1298: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Oct  8 06:27:43 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct  8 06:27:43 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:27:43 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct  8 06:27:43 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:27:43 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct  8 06:27:43 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  8 06:27:43 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct  8 06:27:43 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  8 06:27:43 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  8 06:27:43 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  8 06:27:43 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  8 06:27:43 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:27:43 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:27:43 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  8 06:27:43 np0005475493 nova_compute[262220]: 2025-10-08 10:27:43.932 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:27:44 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:27:43 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  8 06:27:44 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:27:43 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  8 06:27:44 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:27:43 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 06:27:44 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:27:44 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  8 06:27:44 np0005475493 podman[297846]: 2025-10-08 10:27:44.148909045 +0000 UTC m=+0.042221891 container create d050d6ac13beaff3fcc9517b9a5c5d9c87888afbcadf8056fed70b34a1b60f36 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_mclaren, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  8 06:27:44 np0005475493 systemd[1]: Started libpod-conmon-d050d6ac13beaff3fcc9517b9a5c5d9c87888afbcadf8056fed70b34a1b60f36.scope.
Oct  8 06:27:44 np0005475493 systemd[1]: Started libcrun container.
Oct  8 06:27:44 np0005475493 podman[297846]: 2025-10-08 10:27:44.129011159 +0000 UTC m=+0.022324025 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 06:27:44 np0005475493 podman[297846]: 2025-10-08 10:27:44.228015332 +0000 UTC m=+0.121328268 container init d050d6ac13beaff3fcc9517b9a5c5d9c87888afbcadf8056fed70b34a1b60f36 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_mclaren, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  8 06:27:44 np0005475493 podman[297846]: 2025-10-08 10:27:44.234153762 +0000 UTC m=+0.127466608 container start d050d6ac13beaff3fcc9517b9a5c5d9c87888afbcadf8056fed70b34a1b60f36 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_mclaren, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct  8 06:27:44 np0005475493 ecstatic_mclaren[297862]: 167 167
Oct  8 06:27:44 np0005475493 systemd[1]: libpod-d050d6ac13beaff3fcc9517b9a5c5d9c87888afbcadf8056fed70b34a1b60f36.scope: Deactivated successfully.
Oct  8 06:27:44 np0005475493 podman[297846]: 2025-10-08 10:27:44.24025965 +0000 UTC m=+0.133572596 container attach d050d6ac13beaff3fcc9517b9a5c5d9c87888afbcadf8056fed70b34a1b60f36 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_mclaren, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Oct  8 06:27:44 np0005475493 podman[297846]: 2025-10-08 10:27:44.240971312 +0000 UTC m=+0.134284188 container died d050d6ac13beaff3fcc9517b9a5c5d9c87888afbcadf8056fed70b34a1b60f36 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_mclaren, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Oct  8 06:27:44 np0005475493 systemd[1]: var-lib-containers-storage-overlay-c01bb07b18666073c2129c4d37dc210301cb93e1b6dec4087324bed443eb7e6e-merged.mount: Deactivated successfully.
Oct  8 06:27:44 np0005475493 podman[297846]: 2025-10-08 10:27:44.27970437 +0000 UTC m=+0.173017216 container remove d050d6ac13beaff3fcc9517b9a5c5d9c87888afbcadf8056fed70b34a1b60f36 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_mclaren, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  8 06:27:44 np0005475493 systemd[1]: libpod-conmon-d050d6ac13beaff3fcc9517b9a5c5d9c87888afbcadf8056fed70b34a1b60f36.scope: Deactivated successfully.
Oct  8 06:27:44 np0005475493 podman[297886]: 2025-10-08 10:27:44.448314491 +0000 UTC m=+0.037926072 container create 4dd8b0039db1a3af72defb8080d106ed76c4b529794f7e8d5dfda6bad1e91a88 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_ride, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct  8 06:27:44 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:27:44 np0005475493 systemd[1]: Started libpod-conmon-4dd8b0039db1a3af72defb8080d106ed76c4b529794f7e8d5dfda6bad1e91a88.scope.
Oct  8 06:27:44 np0005475493 systemd[1]: Started libcrun container.
Oct  8 06:27:44 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/219d5eec44378df3ce78c9efe4748647782fe22a68630c44fbc59663e2b0001c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  8 06:27:44 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/219d5eec44378df3ce78c9efe4748647782fe22a68630c44fbc59663e2b0001c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 06:27:44 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/219d5eec44378df3ce78c9efe4748647782fe22a68630c44fbc59663e2b0001c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 06:27:44 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/219d5eec44378df3ce78c9efe4748647782fe22a68630c44fbc59663e2b0001c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  8 06:27:44 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/219d5eec44378df3ce78c9efe4748647782fe22a68630c44fbc59663e2b0001c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  8 06:27:44 np0005475493 podman[297886]: 2025-10-08 10:27:44.513440864 +0000 UTC m=+0.103052435 container init 4dd8b0039db1a3af72defb8080d106ed76c4b529794f7e8d5dfda6bad1e91a88 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_ride, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Oct  8 06:27:44 np0005475493 podman[297886]: 2025-10-08 10:27:44.520967938 +0000 UTC m=+0.110579509 container start 4dd8b0039db1a3af72defb8080d106ed76c4b529794f7e8d5dfda6bad1e91a88 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_ride, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  8 06:27:44 np0005475493 podman[297886]: 2025-10-08 10:27:44.52472019 +0000 UTC m=+0.114331761 container attach 4dd8b0039db1a3af72defb8080d106ed76c4b529794f7e8d5dfda6bad1e91a88 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_ride, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  8 06:27:44 np0005475493 podman[297886]: 2025-10-08 10:27:44.431681751 +0000 UTC m=+0.021293342 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 06:27:44 np0005475493 beautiful_ride[297904]: --> passed data devices: 0 physical, 1 LVM
Oct  8 06:27:44 np0005475493 beautiful_ride[297904]: --> All data devices are unavailable
Oct  8 06:27:44 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:27:44 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:27:44 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:27:44.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:27:44 np0005475493 systemd[1]: libpod-4dd8b0039db1a3af72defb8080d106ed76c4b529794f7e8d5dfda6bad1e91a88.scope: Deactivated successfully.
Oct  8 06:27:44 np0005475493 podman[297886]: 2025-10-08 10:27:44.824096675 +0000 UTC m=+0.413708246 container died 4dd8b0039db1a3af72defb8080d106ed76c4b529794f7e8d5dfda6bad1e91a88 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_ride, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  8 06:27:44 np0005475493 systemd[1]: var-lib-containers-storage-overlay-219d5eec44378df3ce78c9efe4748647782fe22a68630c44fbc59663e2b0001c-merged.mount: Deactivated successfully.
Oct  8 06:27:44 np0005475493 podman[297886]: 2025-10-08 10:27:44.864352301 +0000 UTC m=+0.453963872 container remove 4dd8b0039db1a3af72defb8080d106ed76c4b529794f7e8d5dfda6bad1e91a88 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_ride, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Oct  8 06:27:44 np0005475493 systemd[1]: libpod-conmon-4dd8b0039db1a3af72defb8080d106ed76c4b529794f7e8d5dfda6bad1e91a88.scope: Deactivated successfully.
Oct  8 06:27:45 np0005475493 podman[298024]: 2025-10-08 10:27:45.442227312 +0000 UTC m=+0.032589778 container create a642d75b9403694e75aa2af67b642ccb92e3b1dfbb96067a0089cdbf158a9f8b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_noyce, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Oct  8 06:27:45 np0005475493 systemd[1]: Started libpod-conmon-a642d75b9403694e75aa2af67b642ccb92e3b1dfbb96067a0089cdbf158a9f8b.scope.
Oct  8 06:27:45 np0005475493 systemd[1]: Started libcrun container.
Oct  8 06:27:45 np0005475493 podman[298024]: 2025-10-08 10:27:45.504179953 +0000 UTC m=+0.094542409 container init a642d75b9403694e75aa2af67b642ccb92e3b1dfbb96067a0089cdbf158a9f8b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_noyce, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Oct  8 06:27:45 np0005475493 podman[298024]: 2025-10-08 10:27:45.512171182 +0000 UTC m=+0.102533608 container start a642d75b9403694e75aa2af67b642ccb92e3b1dfbb96067a0089cdbf158a9f8b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_noyce, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct  8 06:27:45 np0005475493 trusting_noyce[298040]: 167 167
Oct  8 06:27:45 np0005475493 systemd[1]: libpod-a642d75b9403694e75aa2af67b642ccb92e3b1dfbb96067a0089cdbf158a9f8b.scope: Deactivated successfully.
Oct  8 06:27:45 np0005475493 podman[298024]: 2025-10-08 10:27:45.515589963 +0000 UTC m=+0.105952409 container attach a642d75b9403694e75aa2af67b642ccb92e3b1dfbb96067a0089cdbf158a9f8b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_noyce, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  8 06:27:45 np0005475493 podman[298024]: 2025-10-08 10:27:45.515935735 +0000 UTC m=+0.106298181 container died a642d75b9403694e75aa2af67b642ccb92e3b1dfbb96067a0089cdbf158a9f8b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_noyce, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct  8 06:27:45 np0005475493 podman[298024]: 2025-10-08 10:27:45.428672913 +0000 UTC m=+0.019035359 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 06:27:45 np0005475493 systemd[1]: var-lib-containers-storage-overlay-2d4a5f9a44e8c7bf61839417a2e9ab7d9a117fe1ee38b279d980379c47d8470c-merged.mount: Deactivated successfully.
Oct  8 06:27:45 np0005475493 podman[298024]: 2025-10-08 10:27:45.551635493 +0000 UTC m=+0.141997909 container remove a642d75b9403694e75aa2af67b642ccb92e3b1dfbb96067a0089cdbf158a9f8b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_noyce, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  8 06:27:45 np0005475493 systemd[1]: libpod-conmon-a642d75b9403694e75aa2af67b642ccb92e3b1dfbb96067a0089cdbf158a9f8b.scope: Deactivated successfully.
Oct  8 06:27:45 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:27:45 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:27:45 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:27:45.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:27:45 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1299: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Oct  8 06:27:45 np0005475493 podman[298064]: 2025-10-08 10:27:45.704632468 +0000 UTC m=+0.036977471 container create 08bbbcb40e12965c0007bee4b9ff9da21812518f2deebcaa4fd488654ccb917f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_moore, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Oct  8 06:27:45 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:27:45] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Oct  8 06:27:45 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:27:45] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Oct  8 06:27:45 np0005475493 systemd[1]: Started libpod-conmon-08bbbcb40e12965c0007bee4b9ff9da21812518f2deebcaa4fd488654ccb917f.scope.
Oct  8 06:27:45 np0005475493 systemd[1]: Started libcrun container.
Oct  8 06:27:45 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dfb608db7c96d5aec9d35d512cc5f2b061849b444e07716242eacefc06e90c54/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  8 06:27:45 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dfb608db7c96d5aec9d35d512cc5f2b061849b444e07716242eacefc06e90c54/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 06:27:45 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dfb608db7c96d5aec9d35d512cc5f2b061849b444e07716242eacefc06e90c54/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 06:27:45 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dfb608db7c96d5aec9d35d512cc5f2b061849b444e07716242eacefc06e90c54/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  8 06:27:45 np0005475493 podman[298064]: 2025-10-08 10:27:45.767285691 +0000 UTC m=+0.099630714 container init 08bbbcb40e12965c0007bee4b9ff9da21812518f2deebcaa4fd488654ccb917f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_moore, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  8 06:27:45 np0005475493 podman[298064]: 2025-10-08 10:27:45.775960332 +0000 UTC m=+0.108305335 container start 08bbbcb40e12965c0007bee4b9ff9da21812518f2deebcaa4fd488654ccb917f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_moore, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  8 06:27:45 np0005475493 podman[298064]: 2025-10-08 10:27:45.778947029 +0000 UTC m=+0.111292032 container attach 08bbbcb40e12965c0007bee4b9ff9da21812518f2deebcaa4fd488654ccb917f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_moore, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  8 06:27:45 np0005475493 podman[298064]: 2025-10-08 10:27:45.689208657 +0000 UTC m=+0.021553680 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 06:27:46 np0005475493 distracted_moore[298081]: {
Oct  8 06:27:46 np0005475493 distracted_moore[298081]:    "1": [
Oct  8 06:27:46 np0005475493 distracted_moore[298081]:        {
Oct  8 06:27:46 np0005475493 distracted_moore[298081]:            "devices": [
Oct  8 06:27:46 np0005475493 distracted_moore[298081]:                "/dev/loop3"
Oct  8 06:27:46 np0005475493 distracted_moore[298081]:            ],
Oct  8 06:27:46 np0005475493 distracted_moore[298081]:            "lv_name": "ceph_lv0",
Oct  8 06:27:46 np0005475493 distracted_moore[298081]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  8 06:27:46 np0005475493 distracted_moore[298081]:            "lv_size": "21470642176",
Oct  8 06:27:46 np0005475493 distracted_moore[298081]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=787292cc-8154-50c4-9e00-e9be3e817149,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=85fe3e7b-5e0f-4a19-934c-310215b2e933,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct  8 06:27:46 np0005475493 distracted_moore[298081]:            "lv_uuid": "znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj",
Oct  8 06:27:46 np0005475493 distracted_moore[298081]:            "name": "ceph_lv0",
Oct  8 06:27:46 np0005475493 distracted_moore[298081]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  8 06:27:46 np0005475493 distracted_moore[298081]:            "tags": {
Oct  8 06:27:46 np0005475493 distracted_moore[298081]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  8 06:27:46 np0005475493 distracted_moore[298081]:                "ceph.block_uuid": "znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj",
Oct  8 06:27:46 np0005475493 distracted_moore[298081]:                "ceph.cephx_lockbox_secret": "",
Oct  8 06:27:46 np0005475493 distracted_moore[298081]:                "ceph.cluster_fsid": "787292cc-8154-50c4-9e00-e9be3e817149",
Oct  8 06:27:46 np0005475493 distracted_moore[298081]:                "ceph.cluster_name": "ceph",
Oct  8 06:27:46 np0005475493 distracted_moore[298081]:                "ceph.crush_device_class": "",
Oct  8 06:27:46 np0005475493 distracted_moore[298081]:                "ceph.encrypted": "0",
Oct  8 06:27:46 np0005475493 distracted_moore[298081]:                "ceph.osd_fsid": "85fe3e7b-5e0f-4a19-934c-310215b2e933",
Oct  8 06:27:46 np0005475493 distracted_moore[298081]:                "ceph.osd_id": "1",
Oct  8 06:27:46 np0005475493 distracted_moore[298081]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  8 06:27:46 np0005475493 distracted_moore[298081]:                "ceph.type": "block",
Oct  8 06:27:46 np0005475493 distracted_moore[298081]:                "ceph.vdo": "0",
Oct  8 06:27:46 np0005475493 distracted_moore[298081]:                "ceph.with_tpm": "0"
Oct  8 06:27:46 np0005475493 distracted_moore[298081]:            },
Oct  8 06:27:46 np0005475493 distracted_moore[298081]:            "type": "block",
Oct  8 06:27:46 np0005475493 distracted_moore[298081]:            "vg_name": "ceph_vg0"
Oct  8 06:27:46 np0005475493 distracted_moore[298081]:        }
Oct  8 06:27:46 np0005475493 distracted_moore[298081]:    ]
Oct  8 06:27:46 np0005475493 distracted_moore[298081]: }
Oct  8 06:27:46 np0005475493 systemd[1]: libpod-08bbbcb40e12965c0007bee4b9ff9da21812518f2deebcaa4fd488654ccb917f.scope: Deactivated successfully.
Oct  8 06:27:46 np0005475493 podman[298064]: 2025-10-08 10:27:46.09793497 +0000 UTC m=+0.430279983 container died 08bbbcb40e12965c0007bee4b9ff9da21812518f2deebcaa4fd488654ccb917f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_moore, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  8 06:27:46 np0005475493 systemd[1]: var-lib-containers-storage-overlay-dfb608db7c96d5aec9d35d512cc5f2b061849b444e07716242eacefc06e90c54-merged.mount: Deactivated successfully.
Oct  8 06:27:46 np0005475493 podman[298064]: 2025-10-08 10:27:46.136813031 +0000 UTC m=+0.469158034 container remove 08bbbcb40e12965c0007bee4b9ff9da21812518f2deebcaa4fd488654ccb917f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_moore, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct  8 06:27:46 np0005475493 systemd[1]: libpod-conmon-08bbbcb40e12965c0007bee4b9ff9da21812518f2deebcaa4fd488654ccb917f.scope: Deactivated successfully.
Oct  8 06:27:46 np0005475493 nova_compute[262220]: 2025-10-08 10:27:46.431 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:27:46 np0005475493 podman[298194]: 2025-10-08 10:27:46.7173488 +0000 UTC m=+0.035700819 container create 0393711c4a2c3b9e8f27d3054781efb3132bf38c8418be3bbc67e25fad4c8ce6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_hopper, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  8 06:27:46 np0005475493 systemd[1]: Started libpod-conmon-0393711c4a2c3b9e8f27d3054781efb3132bf38c8418be3bbc67e25fad4c8ce6.scope.
Oct  8 06:27:46 np0005475493 systemd[1]: Started libcrun container.
Oct  8 06:27:46 np0005475493 podman[298194]: 2025-10-08 10:27:46.785964616 +0000 UTC m=+0.104316675 container init 0393711c4a2c3b9e8f27d3054781efb3132bf38c8418be3bbc67e25fad4c8ce6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_hopper, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Oct  8 06:27:46 np0005475493 podman[298194]: 2025-10-08 10:27:46.792358204 +0000 UTC m=+0.110710233 container start 0393711c4a2c3b9e8f27d3054781efb3132bf38c8418be3bbc67e25fad4c8ce6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_hopper, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Oct  8 06:27:46 np0005475493 podman[298194]: 2025-10-08 10:27:46.795158735 +0000 UTC m=+0.113510804 container attach 0393711c4a2c3b9e8f27d3054781efb3132bf38c8418be3bbc67e25fad4c8ce6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_hopper, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct  8 06:27:46 np0005475493 determined_hopper[298211]: 167 167
Oct  8 06:27:46 np0005475493 systemd[1]: libpod-0393711c4a2c3b9e8f27d3054781efb3132bf38c8418be3bbc67e25fad4c8ce6.scope: Deactivated successfully.
Oct  8 06:27:46 np0005475493 podman[298194]: 2025-10-08 10:27:46.796826128 +0000 UTC m=+0.115178157 container died 0393711c4a2c3b9e8f27d3054781efb3132bf38c8418be3bbc67e25fad4c8ce6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_hopper, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  8 06:27:46 np0005475493 podman[298194]: 2025-10-08 10:27:46.702722805 +0000 UTC m=+0.021074844 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 06:27:46 np0005475493 systemd[1]: var-lib-containers-storage-overlay-caccd238503ba567f7ef2be4697329c61ffef5c90670dea2f230b4d60bff473e-merged.mount: Deactivated successfully.
Oct  8 06:27:46 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:27:46 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:27:46 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:27:46.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:27:46 np0005475493 podman[298194]: 2025-10-08 10:27:46.83076349 +0000 UTC m=+0.149115509 container remove 0393711c4a2c3b9e8f27d3054781efb3132bf38c8418be3bbc67e25fad4c8ce6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_hopper, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  8 06:27:46 np0005475493 systemd[1]: libpod-conmon-0393711c4a2c3b9e8f27d3054781efb3132bf38c8418be3bbc67e25fad4c8ce6.scope: Deactivated successfully.
Oct  8 06:27:46 np0005475493 podman[298237]: 2025-10-08 10:27:46.988733776 +0000 UTC m=+0.047270335 container create d170c1159463621fbaba03c2bfd7243bc5ce8a78fbeb30a934a3668420ec1669 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_cerf, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid)
Oct  8 06:27:47 np0005475493 systemd[1]: Started libpod-conmon-d170c1159463621fbaba03c2bfd7243bc5ce8a78fbeb30a934a3668420ec1669.scope.
Oct  8 06:27:47 np0005475493 systemd[1]: Started libcrun container.
Oct  8 06:27:47 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b8ca8b1733ba6f661c32948c5a0ed6a43c1c096b96264ca7e1490454247e6d5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  8 06:27:47 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b8ca8b1733ba6f661c32948c5a0ed6a43c1c096b96264ca7e1490454247e6d5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 06:27:47 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b8ca8b1733ba6f661c32948c5a0ed6a43c1c096b96264ca7e1490454247e6d5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 06:27:47 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b8ca8b1733ba6f661c32948c5a0ed6a43c1c096b96264ca7e1490454247e6d5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  8 06:27:47 np0005475493 podman[298237]: 2025-10-08 10:27:46.968662555 +0000 UTC m=+0.027199144 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 06:27:47 np0005475493 podman[298237]: 2025-10-08 10:27:47.078019513 +0000 UTC m=+0.136556092 container init d170c1159463621fbaba03c2bfd7243bc5ce8a78fbeb30a934a3668420ec1669 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_cerf, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  8 06:27:47 np0005475493 podman[298237]: 2025-10-08 10:27:47.08378087 +0000 UTC m=+0.142317429 container start d170c1159463621fbaba03c2bfd7243bc5ce8a78fbeb30a934a3668420ec1669 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_cerf, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct  8 06:27:47 np0005475493 podman[298237]: 2025-10-08 10:27:47.086922382 +0000 UTC m=+0.145458971 container attach d170c1159463621fbaba03c2bfd7243bc5ce8a78fbeb30a934a3668420ec1669 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_cerf, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  8 06:27:47 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:27:47.251Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct  8 06:27:47 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:27:47.252Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct  8 06:27:47 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1300: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Oct  8 06:27:47 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:27:47 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:27:47 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:27:47.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:27:47 np0005475493 lvm[298330]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct  8 06:27:47 np0005475493 lvm[298330]: VG ceph_vg0 finished
Oct  8 06:27:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] Optimize plan auto_2025-10-08_10:27:47
Oct  8 06:27:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  8 06:27:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] do_upmap
Oct  8 06:27:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] pools ['default.rgw.meta', 'images', 'volumes', 'vms', '.mgr', '.rgw.root', 'default.rgw.log', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', '.nfs', 'backups', 'default.rgw.control']
Oct  8 06:27:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] prepared 0/10 upmap changes
Oct  8 06:27:47 np0005475493 admiring_cerf[298254]: {}
Oct  8 06:27:47 np0005475493 systemd[1]: libpod-d170c1159463621fbaba03c2bfd7243bc5ce8a78fbeb30a934a3668420ec1669.scope: Deactivated successfully.
Oct  8 06:27:47 np0005475493 podman[298237]: 2025-10-08 10:27:47.797344325 +0000 UTC m=+0.855880884 container died d170c1159463621fbaba03c2bfd7243bc5ce8a78fbeb30a934a3668420ec1669 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_cerf, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1)
Oct  8 06:27:47 np0005475493 systemd[1]: libpod-d170c1159463621fbaba03c2bfd7243bc5ce8a78fbeb30a934a3668420ec1669.scope: Consumed 1.077s CPU time.
Oct  8 06:27:47 np0005475493 systemd[1]: var-lib-containers-storage-overlay-1b8ca8b1733ba6f661c32948c5a0ed6a43c1c096b96264ca7e1490454247e6d5-merged.mount: Deactivated successfully.
Oct  8 06:27:47 np0005475493 podman[298237]: 2025-10-08 10:27:47.835153302 +0000 UTC m=+0.893689861 container remove d170c1159463621fbaba03c2bfd7243bc5ce8a78fbeb30a934a3668420ec1669 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_cerf, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct  8 06:27:47 np0005475493 systemd[1]: libpod-conmon-d170c1159463621fbaba03c2bfd7243bc5ce8a78fbeb30a934a3668420ec1669.scope: Deactivated successfully.
Oct  8 06:27:47 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  8 06:27:47 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  8 06:27:47 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  8 06:27:47 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:27:47 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  8 06:27:47 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:27:47 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 06:27:47 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 06:27:48 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 06:27:48 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 06:27:48 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 06:27:48 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 06:27:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] _maybe_adjust
Oct  8 06:27:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:27:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  8 06:27:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:27:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  8 06:27:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:27:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 06:27:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:27:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 06:27:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:27:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Oct  8 06:27:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:27:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  8 06:27:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:27:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 06:27:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:27:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct  8 06:27:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:27:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  8 06:27:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:27:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 06:27:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:27:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  8 06:27:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:27:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  8 06:27:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  8 06:27:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  8 06:27:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  8 06:27:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  8 06:27:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  8 06:27:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  8 06:27:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  8 06:27:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  8 06:27:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  8 06:27:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  8 06:27:48 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:27:48 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:27:48 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:27:48 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:27:48 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:27:48.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:27:48 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:27:48.897Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:27:48 np0005475493 nova_compute[262220]: 2025-10-08 10:27:48.996 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:27:49 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:27:48 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  8 06:27:49 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:27:48 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  8 06:27:49 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:27:48 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 06:27:49 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:27:49 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  8 06:27:49 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:27:49 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1301: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  8 06:27:49 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:27:49 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:27:49 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:27:49.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:27:49 np0005475493 ceph-mgr[73869]: [devicehealth INFO root] Check health
Oct  8 06:27:50 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:27:50 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:27:50 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:27:50.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:27:51 np0005475493 nova_compute[262220]: 2025-10-08 10:27:51.434 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:27:51 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1302: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Oct  8 06:27:51 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:27:51 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 06:27:51 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:27:51.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 06:27:52 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:27:52 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:27:52 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:27:52.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:27:52 np0005475493 nova_compute[262220]: 2025-10-08 10:27:52.886 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:27:53 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1303: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Oct  8 06:27:53 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:27:53 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:27:53 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:27:53.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:27:53 np0005475493 nova_compute[262220]: 2025-10-08 10:27:53.886 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:27:54 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:27:53 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  8 06:27:54 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:27:53 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  8 06:27:54 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:27:53 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 06:27:54 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:27:54 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  8 06:27:54 np0005475493 nova_compute[262220]: 2025-10-08 10:27:54.033 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:27:54 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:27:54 np0005475493 podman[298408]: 2025-10-08 10:27:54.775894744 +0000 UTC m=+0.070465108 container health_status 2ac58bf9d2c7421f24478941b0f23af12cc30298ac84467db16bf52ecb1157c3 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=iscsid, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, tcib_managed=true)
Oct  8 06:27:54 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:27:54 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:27:54 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:27:54.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:27:55 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1304: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  8 06:27:55 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:27:55 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:27:55 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:27:55.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:27:55 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:27:55] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
Oct  8 06:27:55 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:27:55] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
Oct  8 06:27:55 np0005475493 nova_compute[262220]: 2025-10-08 10:27:55.887 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:27:55 np0005475493 nova_compute[262220]: 2025-10-08 10:27:55.887 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  8 06:27:55 np0005475493 nova_compute[262220]: 2025-10-08 10:27:55.887 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  8 06:27:55 np0005475493 nova_compute[262220]: 2025-10-08 10:27:55.900 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  8 06:27:56 np0005475493 nova_compute[262220]: 2025-10-08 10:27:56.438 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:27:56 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:27:56 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:27:56 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:27:56.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:27:56 np0005475493 nova_compute[262220]: 2025-10-08 10:27:56.886 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:27:57 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:27:57.254Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:27:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:27:57.427 163175 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  8 06:27:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:27:57.427 163175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  8 06:27:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:27:57.427 163175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  8 06:27:57 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1305: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  8 06:27:57 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:27:57 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:27:57 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:27:57.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:27:57 np0005475493 nova_compute[262220]: 2025-10-08 10:27:57.881 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:27:58 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:27:58 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:27:58 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:27:58.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:27:58 np0005475493 nova_compute[262220]: 2025-10-08 10:27:58.886 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:27:58 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:27:58.898Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:27:58 np0005475493 nova_compute[262220]: 2025-10-08 10:27:58.910 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  8 06:27:58 np0005475493 nova_compute[262220]: 2025-10-08 10:27:58.911 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  8 06:27:58 np0005475493 nova_compute[262220]: 2025-10-08 10:27:58.911 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  8 06:27:58 np0005475493 nova_compute[262220]: 2025-10-08 10:27:58.911 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  8 06:27:58 np0005475493 nova_compute[262220]: 2025-10-08 10:27:58.912 2 DEBUG oslo_concurrency.processutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  8 06:27:59 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:27:58 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  8 06:27:59 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:27:58 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  8 06:27:59 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:27:58 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 06:27:59 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:27:59 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  8 06:27:59 np0005475493 nova_compute[262220]: 2025-10-08 10:27:59.088 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:27:59 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct  8 06:27:59 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3787316052' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  8 06:27:59 np0005475493 nova_compute[262220]: 2025-10-08 10:27:59.417 2 DEBUG oslo_concurrency.processutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.506s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  8 06:27:59 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:27:59 np0005475493 nova_compute[262220]: 2025-10-08 10:27:59.557 2 WARNING nova.virt.libvirt.driver [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  8 06:27:59 np0005475493 nova_compute[262220]: 2025-10-08 10:27:59.558 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4437MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  8 06:27:59 np0005475493 nova_compute[262220]: 2025-10-08 10:27:59.558 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  8 06:27:59 np0005475493 nova_compute[262220]: 2025-10-08 10:27:59.558 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  8 06:27:59 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1306: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  8 06:27:59 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:27:59 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:27:59 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:27:59.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:27:59 np0005475493 nova_compute[262220]: 2025-10-08 10:27:59.810 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  8 06:27:59 np0005475493 nova_compute[262220]: 2025-10-08 10:27:59.810 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  8 06:27:59 np0005475493 nova_compute[262220]: 2025-10-08 10:27:59.831 2 DEBUG oslo_concurrency.processutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  8 06:28:00 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct  8 06:28:00 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/925374754' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  8 06:28:00 np0005475493 nova_compute[262220]: 2025-10-08 10:28:00.256 2 DEBUG oslo_concurrency.processutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  8 06:28:00 np0005475493 nova_compute[262220]: 2025-10-08 10:28:00.261 2 DEBUG nova.compute.provider_tree [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Inventory has not changed in ProviderTree for provider: 62e4b021-d3ae-43f9-883d-805e2c7d21a2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  8 06:28:00 np0005475493 nova_compute[262220]: 2025-10-08 10:28:00.290 2 DEBUG nova.scheduler.client.report [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Inventory has not changed for provider 62e4b021-d3ae-43f9-883d-805e2c7d21a2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  8 06:28:00 np0005475493 nova_compute[262220]: 2025-10-08 10:28:00.292 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  8 06:28:00 np0005475493 nova_compute[262220]: 2025-10-08 10:28:00.292 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.733s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  8 06:28:00 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:28:00 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:28:00 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:28:00.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:28:01 np0005475493 nova_compute[262220]: 2025-10-08 10:28:01.293 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:28:01 np0005475493 nova_compute[262220]: 2025-10-08 10:28:01.293 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:28:01 np0005475493 nova_compute[262220]: 2025-10-08 10:28:01.294 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  8 06:28:01 np0005475493 nova_compute[262220]: 2025-10-08 10:28:01.442 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:28:01 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1307: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  8 06:28:01 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:28:01 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:28:01 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:28:01.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:28:02 np0005475493 podman[298486]: 2025-10-08 10:28:02.317833035 +0000 UTC m=+0.070868080 container health_status 750e81e9592502f2fa35d047c8ae9bd75eb38ed415148db558454f5281397e7f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct  8 06:28:02 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:28:02 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:28:02 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:28:02.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:28:02 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  8 06:28:02 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  8 06:28:03 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1308: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  8 06:28:03 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:28:03 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:28:03 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:28:03.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:28:04 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:28:03 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  8 06:28:04 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:28:03 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  8 06:28:04 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:28:03 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 06:28:04 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:28:04 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  8 06:28:04 np0005475493 nova_compute[262220]: 2025-10-08 10:28:04.098 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:28:04 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:28:04 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:28:04 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 06:28:04 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:28:04.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 06:28:05 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1309: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  8 06:28:05 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:28:05 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:28:05 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:28:05.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:28:05 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:28:05] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Oct  8 06:28:05 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:28:05] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Oct  8 06:28:05 np0005475493 nova_compute[262220]: 2025-10-08 10:28:05.886 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:28:06 np0005475493 podman[298519]: 2025-10-08 10:28:06.202897864 +0000 UTC m=+0.071934876 container health_status 1ece418ccdf19a8019e8be8e665c938dc3c333817a211973536e2416f095c311 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251001, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team)
Oct  8 06:28:06 np0005475493 podman[298520]: 2025-10-08 10:28:06.219222263 +0000 UTC m=+0.083977946 container health_status 96c17e36c3e57b043a764fc4d64e20610b8683e4881cc1857643eca7aeb37784 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Oct  8 06:28:06 np0005475493 nova_compute[262220]: 2025-10-08 10:28:06.445 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:28:06 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:28:06 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:28:06 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:28:06.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:28:07 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:28:07.255Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:28:07 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1310: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  8 06:28:07 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:28:07 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:28:07 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:28:07.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:28:08 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:28:08 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:28:08 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:28:08.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:28:08 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:28:08.899Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct  8 06:28:08 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:28:08.899Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct  8 06:28:09 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:28:08 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  8 06:28:09 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:28:08 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  8 06:28:09 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:28:08 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 06:28:09 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:28:09 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  8 06:28:09 np0005475493 nova_compute[262220]: 2025-10-08 10:28:09.100 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:28:09 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[103738]: logger=cleanup t=2025-10-08T10:28:09.471085884Z level=info msg="Completed cleanup jobs" duration=35.47843ms
Oct  8 06:28:09 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:28:09 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[103738]: logger=plugins.update.checker t=2025-10-08T10:28:09.569023472Z level=info msg="Update check succeeded" duration=56.791982ms
Oct  8 06:28:09 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-grafana-compute-0[103738]: logger=grafana.update.checker t=2025-10-08T10:28:09.569073124Z level=info msg="Update check succeeded" duration=56.236505ms
Oct  8 06:28:09 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1311: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  8 06:28:09 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:28:09 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:28:09 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:28:09.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:28:10 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:28:10 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:28:10 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:28:10.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:28:11 np0005475493 nova_compute[262220]: 2025-10-08 10:28:11.448 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:28:11 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1312: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  8 06:28:11 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:28:11 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:28:11 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:28:11.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:28:12 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:28:12 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:28:12 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:28:12.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:28:13 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1313: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  8 06:28:13 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:28:13 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:28:13 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:28:13.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:28:14 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:28:13 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  8 06:28:14 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:28:13 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  8 06:28:14 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:28:13 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 06:28:14 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:28:14 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  8 06:28:14 np0005475493 nova_compute[262220]: 2025-10-08 10:28:14.102 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:28:14 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:28:14 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:28:14 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:28:14 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:28:14.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:28:15 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1314: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  8 06:28:15 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:28:15 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:28:15 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:28:15.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:28:15 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:28:15] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Oct  8 06:28:15 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:28:15] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Oct  8 06:28:16 np0005475493 nova_compute[262220]: 2025-10-08 10:28:16.451 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:28:16 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:28:16 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:28:16 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:28:16.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:28:17 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:28:17.255Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct  8 06:28:17 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:28:17.256Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:28:17 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1315: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  8 06:28:17 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:28:17 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:28:17 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:28:17.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:28:17 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  8 06:28:17 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  8 06:28:17 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 06:28:17 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 06:28:18 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 06:28:18 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 06:28:18 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 06:28:18 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 06:28:18 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:28:18 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:28:18 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:28:18.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:28:18 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:28:18.900Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:28:19 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:28:18 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  8 06:28:19 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:28:18 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  8 06:28:19 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:28:18 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 06:28:19 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:28:19 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  8 06:28:19 np0005475493 nova_compute[262220]: 2025-10-08 10:28:19.104 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:28:19 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:28:19 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1316: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  8 06:28:19 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:28:19 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 06:28:19 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:28:19.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 06:28:19 np0005475493 ceph-mon[73572]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #78. Immutable memtables: 0.
Oct  8 06:28:19 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:28:19.808537) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  8 06:28:19 np0005475493 ceph-mon[73572]: rocksdb: [db/flush_job.cc:856] [default] [JOB 43] Flushing memtable with next log file: 78
Oct  8 06:28:19 np0005475493 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759919299808573, "job": 43, "event": "flush_started", "num_memtables": 1, "num_entries": 1370, "num_deletes": 251, "total_data_size": 2498152, "memory_usage": 2528808, "flush_reason": "Manual Compaction"}
Oct  8 06:28:19 np0005475493 ceph-mon[73572]: rocksdb: [db/flush_job.cc:885] [default] [JOB 43] Level-0 flush table #79: started
Oct  8 06:28:19 np0005475493 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759919299822217, "cf_name": "default", "job": 43, "event": "table_file_creation", "file_number": 79, "file_size": 2442536, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 35622, "largest_seqno": 36991, "table_properties": {"data_size": 2436177, "index_size": 3558, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1733, "raw_key_size": 13584, "raw_average_key_size": 20, "raw_value_size": 2423354, "raw_average_value_size": 3574, "num_data_blocks": 156, "num_entries": 678, "num_filter_entries": 678, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759919169, "oldest_key_time": 1759919169, "file_creation_time": 1759919299, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fe81d9b-468a-4413-adf1-4e4bd83383d4", "db_session_id": "KN4HYS7VUCE6V85JIQOU", "orig_file_number": 79, "seqno_to_time_mapping": "N/A"}}
Oct  8 06:28:19 np0005475493 ceph-mon[73572]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 43] Flush lasted 13755 microseconds, and 4918 cpu microseconds.
Oct  8 06:28:19 np0005475493 ceph-mon[73572]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  8 06:28:19 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:28:19.822298) [db/flush_job.cc:967] [default] [JOB 43] Level-0 flush table #79: 2442536 bytes OK
Oct  8 06:28:19 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:28:19.822319) [db/memtable_list.cc:519] [default] Level-0 commit table #79 started
Oct  8 06:28:19 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:28:19.824852) [db/memtable_list.cc:722] [default] Level-0 commit table #79: memtable #1 done
Oct  8 06:28:19 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:28:19.824866) EVENT_LOG_v1 {"time_micros": 1759919299824862, "job": 43, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  8 06:28:19 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:28:19.824884) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  8 06:28:19 np0005475493 ceph-mon[73572]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 43] Try to delete WAL files size 2492247, prev total WAL file size 2492247, number of live WAL files 2.
Oct  8 06:28:19 np0005475493 ceph-mon[73572]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000075.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  8 06:28:19 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:28:19.825499) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033303132' seq:72057594037927935, type:22 .. '7061786F730033323634' seq:0, type:0; will stop at (end)
Oct  8 06:28:19 np0005475493 ceph-mon[73572]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 44] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  8 06:28:19 np0005475493 ceph-mon[73572]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 43 Base level 0, inputs: [79(2385KB)], [77(11MB)]
Oct  8 06:28:19 np0005475493 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759919299825552, "job": 44, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [79], "files_L6": [77], "score": -1, "input_data_size": 14624212, "oldest_snapshot_seqno": -1}
Oct  8 06:28:19 np0005475493 ceph-mon[73572]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 44] Generated table #80: 6684 keys, 12528018 bytes, temperature: kUnknown
Oct  8 06:28:19 np0005475493 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759919299915945, "cf_name": "default", "job": 44, "event": "table_file_creation", "file_number": 80, "file_size": 12528018, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12486230, "index_size": 23948, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16773, "raw_key_size": 175678, "raw_average_key_size": 26, "raw_value_size": 12368654, "raw_average_value_size": 1850, "num_data_blocks": 937, "num_entries": 6684, "num_filter_entries": 6684, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759916581, "oldest_key_time": 0, "file_creation_time": 1759919299, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fe81d9b-468a-4413-adf1-4e4bd83383d4", "db_session_id": "KN4HYS7VUCE6V85JIQOU", "orig_file_number": 80, "seqno_to_time_mapping": "N/A"}}
Oct  8 06:28:19 np0005475493 ceph-mon[73572]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  8 06:28:19 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:28:19.917305) [db/compaction/compaction_job.cc:1663] [default] [JOB 44] Compacted 1@0 + 1@6 files to L6 => 12528018 bytes
Oct  8 06:28:19 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:28:19.918554) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 161.7 rd, 138.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.3, 11.6 +0.0 blob) out(11.9 +0.0 blob), read-write-amplify(11.1) write-amplify(5.1) OK, records in: 7200, records dropped: 516 output_compression: NoCompression
Oct  8 06:28:19 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:28:19.918580) EVENT_LOG_v1 {"time_micros": 1759919299918571, "job": 44, "event": "compaction_finished", "compaction_time_micros": 90451, "compaction_time_cpu_micros": 35383, "output_level": 6, "num_output_files": 1, "total_output_size": 12528018, "num_input_records": 7200, "num_output_records": 6684, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  8 06:28:19 np0005475493 ceph-mon[73572]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000079.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  8 06:28:19 np0005475493 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759919299919055, "job": 44, "event": "table_file_deletion", "file_number": 79}
Oct  8 06:28:19 np0005475493 ceph-mon[73572]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000077.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  8 06:28:19 np0005475493 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759919299920795, "job": 44, "event": "table_file_deletion", "file_number": 77}
Oct  8 06:28:19 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:28:19.825409) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  8 06:28:19 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:28:19.920880) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  8 06:28:19 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:28:19.920885) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  8 06:28:19 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:28:19.920887) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  8 06:28:19 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:28:19.920888) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  8 06:28:19 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:28:19.920890) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  8 06:28:20 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:28:20 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 06:28:20 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:28:20.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 06:28:21 np0005475493 nova_compute[262220]: 2025-10-08 10:28:21.455 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:28:21 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1317: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  8 06:28:21 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:28:21 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:28:21 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:28:21.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:28:22 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:28:22 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:28:22 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:28:22.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:28:23 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1318: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  8 06:28:23 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:28:23 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:28:23 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:28:23.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:28:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:28:23 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  8 06:28:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:28:23 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  8 06:28:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:28:23 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 06:28:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:28:24 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  8 06:28:24 np0005475493 nova_compute[262220]: 2025-10-08 10:28:24.105 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:28:24 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:28:24 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:28:24 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:28:24 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:28:24.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:28:24 np0005475493 podman[298602]: 2025-10-08 10:28:24.893547086 +0000 UTC m=+0.052689561 container health_status 2ac58bf9d2c7421f24478941b0f23af12cc30298ac84467db16bf52ecb1157c3 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=iscsid, io.buildah.version=1.41.3, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.schema-version=1.0)
Oct  8 06:28:25 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1319: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  8 06:28:25 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:28:25 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:28:25 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:28:25.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:28:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:28:25] "GET /metrics HTTP/1.1" 200 48453 "" "Prometheus/2.51.0"
Oct  8 06:28:25 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:28:25] "GET /metrics HTTP/1.1" 200 48453 "" "Prometheus/2.51.0"
Oct  8 06:28:26 np0005475493 nova_compute[262220]: 2025-10-08 10:28:26.463 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:28:26 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:28:26 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:28:26 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:28:26.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:28:27 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:28:27.257Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:28:27 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1320: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  8 06:28:27 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:28:27 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:28:27 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:28:27.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:28:28 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:28:28 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:28:28 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:28:28.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:28:28 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:28:28.901Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:28:29 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:28:28 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  8 06:28:29 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:28:29 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  8 06:28:29 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:28:29 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 06:28:29 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:28:29 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  8 06:28:29 np0005475493 nova_compute[262220]: 2025-10-08 10:28:29.108 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:28:29 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:28:29 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1321: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  8 06:28:29 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:28:29 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:28:29 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:28:29.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:28:30 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:28:30 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:28:30 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:28:30.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:28:31 np0005475493 nova_compute[262220]: 2025-10-08 10:28:31.469 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:28:31 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1322: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  8 06:28:31 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:28:31 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:28:31 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:28:31.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:28:32 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  8 06:28:32 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  8 06:28:32 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:28:32 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:28:32 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:28:32.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:28:32 np0005475493 podman[298657]: 2025-10-08 10:28:32.924738073 +0000 UTC m=+0.084816034 container health_status 750e81e9592502f2fa35d047c8ae9bd75eb38ed415148db558454f5281397e7f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Oct  8 06:28:33 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1323: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  8 06:28:33 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:28:33 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:28:33 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:28:33.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:28:34 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:28:34 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  8 06:28:34 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:28:34 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  8 06:28:34 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:28:34 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 06:28:34 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:28:34 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  8 06:28:34 np0005475493 nova_compute[262220]: 2025-10-08 10:28:34.110 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:28:34 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:28:34 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:28:34 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:28:34 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:28:34.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:28:35 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1324: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  8 06:28:35 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:28:35 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:28:35 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:28:35.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:28:35 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:28:35] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
Oct  8 06:28:35 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:28:35] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
Oct  8 06:28:36 np0005475493 nova_compute[262220]: 2025-10-08 10:28:36.475 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:28:36 np0005475493 podman[298689]: 2025-10-08 10:28:36.894797188 +0000 UTC m=+0.054349035 container health_status 1ece418ccdf19a8019e8be8e665c938dc3c333817a211973536e2416f095c311 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001)
Oct  8 06:28:36 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:28:36 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:28:36 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:28:36.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:28:36 np0005475493 podman[298690]: 2025-10-08 10:28:36.913836676 +0000 UTC m=+0.059447110 container health_status 96c17e36c3e57b043a764fc4d64e20610b8683e4881cc1857643eca7aeb37784 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Oct  8 06:28:37 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:28:37.258Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct  8 06:28:37 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:28:37.259Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:28:37 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1325: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  8 06:28:37 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:28:37 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:28:37 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:28:37.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:28:38 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:28:38.902Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:28:38 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:28:38 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:28:38 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:28:38.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:28:39 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:28:38 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  8 06:28:39 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:28:38 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  8 06:28:39 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:28:38 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 06:28:39 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:28:39 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  8 06:28:39 np0005475493 nova_compute[262220]: 2025-10-08 10:28:39.110 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:28:39 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:28:39 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1326: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  8 06:28:39 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:28:39 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:28:39 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:28:39.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:28:40 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:28:40 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:28:40 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:28:40.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:28:41 np0005475493 nova_compute[262220]: 2025-10-08 10:28:41.536 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:28:41 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1327: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  8 06:28:41 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:28:41 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:28:41 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:28:41.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:28:42 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:28:42 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:28:42 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:28:42.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:28:43 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1328: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  8 06:28:43 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:28:43 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:28:43 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:28:43.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:28:44 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:28:43 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  8 06:28:44 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:28:43 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  8 06:28:44 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:28:43 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 06:28:44 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:28:44 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  8 06:28:44 np0005475493 nova_compute[262220]: 2025-10-08 10:28:44.112 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:28:44 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:28:44 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:28:44 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 06:28:44 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:28:44.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 06:28:45 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1329: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  8 06:28:45 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:28:45 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:28:45 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:28:45.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:28:45 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:28:45] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
Oct  8 06:28:45 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:28:45] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
Oct  8 06:28:46 np0005475493 nova_compute[262220]: 2025-10-08 10:28:46.539 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:28:46 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:28:46 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:28:46 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:28:46.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:28:47 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:28:47.260Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct  8 06:28:47 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:28:47.260Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:28:47 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1330: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  8 06:28:47 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:28:47 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:28:47 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:28:47.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:28:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] Optimize plan auto_2025-10-08_10:28:47
Oct  8 06:28:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  8 06:28:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] do_upmap
Oct  8 06:28:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] pools ['volumes', 'vms', 'default.rgw.log', 'images', '.rgw.root', 'default.rgw.meta', 'cephfs.cephfs.data', '.nfs', 'backups', 'cephfs.cephfs.meta', 'default.rgw.control', '.mgr']
Oct  8 06:28:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] prepared 0/10 upmap changes
Oct  8 06:28:47 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  8 06:28:47 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  8 06:28:47 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 06:28:47 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 06:28:48 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 06:28:48 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 06:28:48 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 06:28:48 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 06:28:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] _maybe_adjust
Oct  8 06:28:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:28:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  8 06:28:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:28:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  8 06:28:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:28:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 06:28:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:28:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 06:28:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:28:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Oct  8 06:28:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:28:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  8 06:28:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:28:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 06:28:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:28:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct  8 06:28:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:28:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  8 06:28:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:28:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 06:28:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:28:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  8 06:28:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:28:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  8 06:28:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  8 06:28:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  8 06:28:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  8 06:28:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  8 06:28:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  8 06:28:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  8 06:28:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  8 06:28:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  8 06:28:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  8 06:28:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  8 06:28:48 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:28:48.903Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:28:48 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Oct  8 06:28:48 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct  8 06:28:48 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:28:48 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:28:48 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:28:48.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:28:49 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:28:48 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  8 06:28:49 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:28:48 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  8 06:28:49 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:28:48 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 06:28:49 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:28:49 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  8 06:28:49 np0005475493 nova_compute[262220]: 2025-10-08 10:28:49.143 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:28:49 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct  8 06:28:49 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:28:49 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1331: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  8 06:28:49 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:28:49 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:28:49 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:28:49.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:28:50 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct  8 06:28:50 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:28:50 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct  8 06:28:50 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:28:50 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:28:50 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:28:50.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:28:50 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:28:51 np0005475493 nova_compute[262220]: 2025-10-08 10:28:51.542 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:28:51 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1332: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  8 06:28:51 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:28:51 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:28:51 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:28:51.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:28:51 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0)
Oct  8 06:28:51 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Oct  8 06:28:51 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct  8 06:28:52 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:28:52 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct  8 06:28:52 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:28:52 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:28:52 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Oct  8 06:28:52 np0005475493 nova_compute[262220]: 2025-10-08 10:28:52.886 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:28:52 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:28:52 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 06:28:52 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:28:52.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 06:28:53 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1333: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  8 06:28:53 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:28:53 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:28:53 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:28:53.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:28:53 np0005475493 nova_compute[262220]: 2025-10-08 10:28:53.887 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:28:54 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:28:53 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  8 06:28:54 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:28:53 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  8 06:28:54 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:28:53 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 06:28:54 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:28:54 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  8 06:28:54 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:28:54 np0005475493 nova_compute[262220]: 2025-10-08 10:28:54.178 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:28:54 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:28:54 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:28:54 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:28:54 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:28:54 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:28:54.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:28:54 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0)
Oct  8 06:28:54 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct  8 06:28:54 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  8 06:28:54 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  8 06:28:54 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct  8 06:28:54 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  8 06:28:54 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct  8 06:28:54 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1334: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 1 op/s
Oct  8 06:28:55 np0005475493 podman[298857]: 2025-10-08 10:28:55.111253132 +0000 UTC m=+0.108595806 container health_status 2ac58bf9d2c7421f24478941b0f23af12cc30298ac84467db16bf52ecb1157c3 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_id=iscsid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid)
Oct  8 06:28:55 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:28:55 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct  8 06:28:55 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:28:55 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:28:55 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:28:55.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:28:55 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:28:55 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:28:55] "GET /metrics HTTP/1.1" 200 48452 "" "Prometheus/2.51.0"
Oct  8 06:28:55 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:28:55] "GET /metrics HTTP/1.1" 200 48452 "" "Prometheus/2.51.0"
Oct  8 06:28:55 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct  8 06:28:55 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  8 06:28:55 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct  8 06:28:55 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  8 06:28:55 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  8 06:28:55 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  8 06:28:55 np0005475493 nova_compute[262220]: 2025-10-08 10:28:55.887 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:28:55 np0005475493 nova_compute[262220]: 2025-10-08 10:28:55.887 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  8 06:28:55 np0005475493 nova_compute[262220]: 2025-10-08 10:28:55.887 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  8 06:28:55 np0005475493 nova_compute[262220]: 2025-10-08 10:28:55.903 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  8 06:28:56 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:28:56 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct  8 06:28:56 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  8 06:28:56 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:28:56 np0005475493 nova_compute[262220]: 2025-10-08 10:28:56.543 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:28:56 np0005475493 podman[298972]: 2025-10-08 10:28:56.516224732 +0000 UTC m=+0.038258642 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 06:28:56 np0005475493 podman[298972]: 2025-10-08 10:28:56.850721996 +0000 UTC m=+0.372755846 container create ceac3142b5f5c5998fd2844342ac206dff1d308aee9394551af206319bae04dc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_sammet, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  8 06:28:56 np0005475493 nova_compute[262220]: 2025-10-08 10:28:56.898 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:28:56 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:28:56 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:28:56 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:28:56.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:28:56 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1335: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Oct  8 06:28:57 np0005475493 systemd[1]: Started libpod-conmon-ceac3142b5f5c5998fd2844342ac206dff1d308aee9394551af206319bae04dc.scope.
Oct  8 06:28:57 np0005475493 systemd[1]: Started libcrun container.
Oct  8 06:28:57 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:28:57.261Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct  8 06:28:57 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:28:57.261Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct  8 06:28:57 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:28:57.262Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct  8 06:28:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:28:57.428 163175 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  8 06:28:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:28:57.430 163175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  8 06:28:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:28:57.430 163175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  8 06:28:57 np0005475493 podman[298972]: 2025-10-08 10:28:57.434912494 +0000 UTC m=+0.956946324 container init ceac3142b5f5c5998fd2844342ac206dff1d308aee9394551af206319bae04dc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_sammet, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  8 06:28:57 np0005475493 podman[298972]: 2025-10-08 10:28:57.443078629 +0000 UTC m=+0.965112479 container start ceac3142b5f5c5998fd2844342ac206dff1d308aee9394551af206319bae04dc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_sammet, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  8 06:28:57 np0005475493 zealous_sammet[298988]: 167 167
Oct  8 06:28:57 np0005475493 systemd[1]: libpod-ceac3142b5f5c5998fd2844342ac206dff1d308aee9394551af206319bae04dc.scope: Deactivated successfully.
Oct  8 06:28:57 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:28:57 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  8 06:28:57 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:28:57 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:28:57 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:28:57.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:28:57 np0005475493 podman[298972]: 2025-10-08 10:28:57.659784219 +0000 UTC m=+1.181818139 container attach ceac3142b5f5c5998fd2844342ac206dff1d308aee9394551af206319bae04dc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_sammet, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Oct  8 06:28:57 np0005475493 podman[298972]: 2025-10-08 10:28:57.660522724 +0000 UTC m=+1.182556574 container died ceac3142b5f5c5998fd2844342ac206dff1d308aee9394551af206319bae04dc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_sammet, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Oct  8 06:28:58 np0005475493 systemd[1]: var-lib-containers-storage-overlay-2168519ec43c6cba2f9cee37d7d6468f05af415f19e02204e732202c7cc9de6d-merged.mount: Deactivated successfully.
Oct  8 06:28:58 np0005475493 podman[298972]: 2025-10-08 10:28:58.802692157 +0000 UTC m=+2.324725967 container remove ceac3142b5f5c5998fd2844342ac206dff1d308aee9394551af206319bae04dc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_sammet, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  8 06:28:58 np0005475493 systemd[1]: libpod-conmon-ceac3142b5f5c5998fd2844342ac206dff1d308aee9394551af206319bae04dc.scope: Deactivated successfully.
Oct  8 06:28:58 np0005475493 nova_compute[262220]: 2025-10-08 10:28:58.886 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:28:58 np0005475493 nova_compute[262220]: 2025-10-08 10:28:58.886 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:28:58 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:28:58.903Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:28:58 np0005475493 nova_compute[262220]: 2025-10-08 10:28:58.913 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  8 06:28:58 np0005475493 nova_compute[262220]: 2025-10-08 10:28:58.914 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  8 06:28:58 np0005475493 nova_compute[262220]: 2025-10-08 10:28:58.914 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  8 06:28:58 np0005475493 nova_compute[262220]: 2025-10-08 10:28:58.914 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  8 06:28:58 np0005475493 nova_compute[262220]: 2025-10-08 10:28:58.914 2 DEBUG oslo_concurrency.processutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  8 06:28:58 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:28:58 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 06:28:58 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:28:58.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 06:28:58 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1336: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 1 op/s
Oct  8 06:28:59 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:28:58 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  8 06:28:59 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:28:58 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  8 06:28:59 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:28:58 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 06:28:59 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:28:59 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  8 06:28:59 np0005475493 podman[299014]: 2025-10-08 10:28:58.941319706 +0000 UTC m=+0.024772656 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 06:28:59 np0005475493 podman[299014]: 2025-10-08 10:28:59.161668736 +0000 UTC m=+0.245121636 container create 97f7178627145bc594939b2ef24951dadc15b1d122361a8b9355067efdbee3d7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_mccarthy, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Oct  8 06:28:59 np0005475493 nova_compute[262220]: 2025-10-08 10:28:59.183 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:28:59 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct  8 06:28:59 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3200628335' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  8 06:28:59 np0005475493 systemd[1]: Started libpod-conmon-97f7178627145bc594939b2ef24951dadc15b1d122361a8b9355067efdbee3d7.scope.
Oct  8 06:28:59 np0005475493 nova_compute[262220]: 2025-10-08 10:28:59.410 2 DEBUG oslo_concurrency.processutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  8 06:28:59 np0005475493 systemd[1]: Started libcrun container.
Oct  8 06:28:59 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40ed76bd35235059f93ff30fcfcbd64a816daec159cf3cfaa918b70af1c42dc0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  8 06:28:59 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40ed76bd35235059f93ff30fcfcbd64a816daec159cf3cfaa918b70af1c42dc0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 06:28:59 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40ed76bd35235059f93ff30fcfcbd64a816daec159cf3cfaa918b70af1c42dc0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 06:28:59 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40ed76bd35235059f93ff30fcfcbd64a816daec159cf3cfaa918b70af1c42dc0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  8 06:28:59 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40ed76bd35235059f93ff30fcfcbd64a816daec159cf3cfaa918b70af1c42dc0/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  8 06:28:59 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:28:59 np0005475493 podman[299014]: 2025-10-08 10:28:59.578600655 +0000 UTC m=+0.662053575 container init 97f7178627145bc594939b2ef24951dadc15b1d122361a8b9355067efdbee3d7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_mccarthy, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  8 06:28:59 np0005475493 podman[299014]: 2025-10-08 10:28:59.585724846 +0000 UTC m=+0.669177736 container start 97f7178627145bc594939b2ef24951dadc15b1d122361a8b9355067efdbee3d7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_mccarthy, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct  8 06:28:59 np0005475493 nova_compute[262220]: 2025-10-08 10:28:59.628 2 WARNING nova.virt.libvirt.driver [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  8 06:28:59 np0005475493 nova_compute[262220]: 2025-10-08 10:28:59.631 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4481MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  8 06:28:59 np0005475493 nova_compute[262220]: 2025-10-08 10:28:59.632 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  8 06:28:59 np0005475493 nova_compute[262220]: 2025-10-08 10:28:59.632 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  8 06:28:59 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:28:59 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:28:59 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:28:59.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:28:59 np0005475493 nova_compute[262220]: 2025-10-08 10:28:59.697 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  8 06:28:59 np0005475493 nova_compute[262220]: 2025-10-08 10:28:59.697 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  8 06:28:59 np0005475493 podman[299014]: 2025-10-08 10:28:59.714863406 +0000 UTC m=+0.798316326 container attach 97f7178627145bc594939b2ef24951dadc15b1d122361a8b9355067efdbee3d7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_mccarthy, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct  8 06:28:59 np0005475493 nova_compute[262220]: 2025-10-08 10:28:59.716 2 DEBUG oslo_concurrency.processutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  8 06:28:59 np0005475493 eager_mccarthy[299055]: --> passed data devices: 0 physical, 1 LVM
Oct  8 06:28:59 np0005475493 eager_mccarthy[299055]: --> All data devices are unavailable
Oct  8 06:28:59 np0005475493 systemd[1]: libpod-97f7178627145bc594939b2ef24951dadc15b1d122361a8b9355067efdbee3d7.scope: Deactivated successfully.
Oct  8 06:28:59 np0005475493 podman[299014]: 2025-10-08 10:28:59.95409527 +0000 UTC m=+1.037548190 container died 97f7178627145bc594939b2ef24951dadc15b1d122361a8b9355067efdbee3d7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_mccarthy, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Oct  8 06:29:00 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct  8 06:29:00 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/812258560' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  8 06:29:00 np0005475493 nova_compute[262220]: 2025-10-08 10:29:00.258 2 DEBUG oslo_concurrency.processutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.542s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  8 06:29:00 np0005475493 nova_compute[262220]: 2025-10-08 10:29:00.270 2 DEBUG nova.compute.provider_tree [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Inventory has not changed in ProviderTree for provider: 62e4b021-d3ae-43f9-883d-805e2c7d21a2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  8 06:29:00 np0005475493 nova_compute[262220]: 2025-10-08 10:29:00.301 2 DEBUG nova.scheduler.client.report [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Inventory has not changed for provider 62e4b021-d3ae-43f9-883d-805e2c7d21a2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  8 06:29:00 np0005475493 nova_compute[262220]: 2025-10-08 10:29:00.306 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  8 06:29:00 np0005475493 nova_compute[262220]: 2025-10-08 10:29:00.307 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.675s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  8 06:29:00 np0005475493 systemd[1]: var-lib-containers-storage-overlay-40ed76bd35235059f93ff30fcfcbd64a816daec159cf3cfaa918b70af1c42dc0-merged.mount: Deactivated successfully.
Oct  8 06:29:00 np0005475493 podman[299014]: 2025-10-08 10:29:00.798266733 +0000 UTC m=+1.881719623 container remove 97f7178627145bc594939b2ef24951dadc15b1d122361a8b9355067efdbee3d7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_mccarthy, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  8 06:29:00 np0005475493 systemd[1]: libpod-conmon-97f7178627145bc594939b2ef24951dadc15b1d122361a8b9355067efdbee3d7.scope: Deactivated successfully.
Oct  8 06:29:00 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:29:00 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:29:00 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:29:00.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:29:00 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1337: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Oct  8 06:29:01 np0005475493 nova_compute[262220]: 2025-10-08 10:29:01.306 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:29:01 np0005475493 nova_compute[262220]: 2025-10-08 10:29:01.308 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:29:01 np0005475493 nova_compute[262220]: 2025-10-08 10:29:01.308 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:29:01 np0005475493 nova_compute[262220]: 2025-10-08 10:29:01.309 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  8 06:29:01 np0005475493 podman[299198]: 2025-10-08 10:29:01.420311048 +0000 UTC m=+0.023249426 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 06:29:01 np0005475493 podman[299198]: 2025-10-08 10:29:01.529275353 +0000 UTC m=+0.132213711 container create 4952df5a9822d3a7bff33fe93646892ca69ef41279995e80435f2029a827eed8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_carson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True)
Oct  8 06:29:01 np0005475493 nova_compute[262220]: 2025-10-08 10:29:01.546 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:29:01 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:29:01 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:29:01 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:29:01.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:29:01 np0005475493 systemd[1]: Started libpod-conmon-4952df5a9822d3a7bff33fe93646892ca69ef41279995e80435f2029a827eed8.scope.
Oct  8 06:29:01 np0005475493 systemd[1]: Started libcrun container.
Oct  8 06:29:01 np0005475493 podman[299198]: 2025-10-08 10:29:01.866572339 +0000 UTC m=+0.469510717 container init 4952df5a9822d3a7bff33fe93646892ca69ef41279995e80435f2029a827eed8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_carson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid)
Oct  8 06:29:01 np0005475493 podman[299198]: 2025-10-08 10:29:01.875956702 +0000 UTC m=+0.478895090 container start 4952df5a9822d3a7bff33fe93646892ca69ef41279995e80435f2029a827eed8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_carson, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  8 06:29:01 np0005475493 competent_carson[299214]: 167 167
Oct  8 06:29:01 np0005475493 systemd[1]: libpod-4952df5a9822d3a7bff33fe93646892ca69ef41279995e80435f2029a827eed8.scope: Deactivated successfully.
Oct  8 06:29:01 np0005475493 podman[299198]: 2025-10-08 10:29:01.899613091 +0000 UTC m=+0.502551489 container attach 4952df5a9822d3a7bff33fe93646892ca69ef41279995e80435f2029a827eed8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_carson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Oct  8 06:29:01 np0005475493 podman[299198]: 2025-10-08 10:29:01.900225461 +0000 UTC m=+0.503163839 container died 4952df5a9822d3a7bff33fe93646892ca69ef41279995e80435f2029a827eed8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_carson, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct  8 06:29:02 np0005475493 systemd[1]: var-lib-containers-storage-overlay-d68361f17fc5ccc827a3203d3374739968927eb032e090682d3c606211123eb9-merged.mount: Deactivated successfully.
Oct  8 06:29:02 np0005475493 podman[299198]: 2025-10-08 10:29:02.16337257 +0000 UTC m=+0.766310928 container remove 4952df5a9822d3a7bff33fe93646892ca69ef41279995e80435f2029a827eed8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_carson, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  8 06:29:02 np0005475493 systemd[1]: libpod-conmon-4952df5a9822d3a7bff33fe93646892ca69ef41279995e80435f2029a827eed8.scope: Deactivated successfully.
Oct  8 06:29:02 np0005475493 podman[299243]: 2025-10-08 10:29:02.43922463 +0000 UTC m=+0.103357404 container create 8cd177beba76edccf526909df31a1c982c1bd44d2920a7b64ea1d4d4c76a1135 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_borg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Oct  8 06:29:02 np0005475493 podman[299243]: 2025-10-08 10:29:02.378975296 +0000 UTC m=+0.043108090 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 06:29:02 np0005475493 systemd[1]: Started libpod-conmon-8cd177beba76edccf526909df31a1c982c1bd44d2920a7b64ea1d4d4c76a1135.scope.
Oct  8 06:29:02 np0005475493 systemd[1]: Started libcrun container.
Oct  8 06:29:02 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c118635e7596082c7be1bbe0502e9f48ab4103d5d3a92eeb004560355062106/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  8 06:29:02 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c118635e7596082c7be1bbe0502e9f48ab4103d5d3a92eeb004560355062106/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 06:29:02 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c118635e7596082c7be1bbe0502e9f48ab4103d5d3a92eeb004560355062106/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 06:29:02 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c118635e7596082c7be1bbe0502e9f48ab4103d5d3a92eeb004560355062106/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  8 06:29:02 np0005475493 podman[299243]: 2025-10-08 10:29:02.58498548 +0000 UTC m=+0.249118284 container init 8cd177beba76edccf526909df31a1c982c1bd44d2920a7b64ea1d4d4c76a1135 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_borg, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  8 06:29:02 np0005475493 podman[299243]: 2025-10-08 10:29:02.59269587 +0000 UTC m=+0.256828634 container start 8cd177beba76edccf526909df31a1c982c1bd44d2920a7b64ea1d4d4c76a1135 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_borg, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  8 06:29:02 np0005475493 podman[299243]: 2025-10-08 10:29:02.618811178 +0000 UTC m=+0.282943942 container attach 8cd177beba76edccf526909df31a1c982c1bd44d2920a7b64ea1d4d4c76a1135 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_borg, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct  8 06:29:02 np0005475493 busy_borg[299259]: {
Oct  8 06:29:02 np0005475493 busy_borg[299259]:    "1": [
Oct  8 06:29:02 np0005475493 busy_borg[299259]:        {
Oct  8 06:29:02 np0005475493 busy_borg[299259]:            "devices": [
Oct  8 06:29:02 np0005475493 busy_borg[299259]:                "/dev/loop3"
Oct  8 06:29:02 np0005475493 busy_borg[299259]:            ],
Oct  8 06:29:02 np0005475493 busy_borg[299259]:            "lv_name": "ceph_lv0",
Oct  8 06:29:02 np0005475493 busy_borg[299259]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  8 06:29:02 np0005475493 busy_borg[299259]:            "lv_size": "21470642176",
Oct  8 06:29:02 np0005475493 busy_borg[299259]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=787292cc-8154-50c4-9e00-e9be3e817149,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=85fe3e7b-5e0f-4a19-934c-310215b2e933,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct  8 06:29:02 np0005475493 busy_borg[299259]:            "lv_uuid": "znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj",
Oct  8 06:29:02 np0005475493 busy_borg[299259]:            "name": "ceph_lv0",
Oct  8 06:29:02 np0005475493 busy_borg[299259]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  8 06:29:02 np0005475493 busy_borg[299259]:            "tags": {
Oct  8 06:29:02 np0005475493 busy_borg[299259]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  8 06:29:02 np0005475493 busy_borg[299259]:                "ceph.block_uuid": "znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj",
Oct  8 06:29:02 np0005475493 busy_borg[299259]:                "ceph.cephx_lockbox_secret": "",
Oct  8 06:29:02 np0005475493 busy_borg[299259]:                "ceph.cluster_fsid": "787292cc-8154-50c4-9e00-e9be3e817149",
Oct  8 06:29:02 np0005475493 busy_borg[299259]:                "ceph.cluster_name": "ceph",
Oct  8 06:29:02 np0005475493 busy_borg[299259]:                "ceph.crush_device_class": "",
Oct  8 06:29:02 np0005475493 busy_borg[299259]:                "ceph.encrypted": "0",
Oct  8 06:29:02 np0005475493 busy_borg[299259]:                "ceph.osd_fsid": "85fe3e7b-5e0f-4a19-934c-310215b2e933",
Oct  8 06:29:02 np0005475493 busy_borg[299259]:                "ceph.osd_id": "1",
Oct  8 06:29:02 np0005475493 busy_borg[299259]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  8 06:29:02 np0005475493 busy_borg[299259]:                "ceph.type": "block",
Oct  8 06:29:02 np0005475493 busy_borg[299259]:                "ceph.vdo": "0",
Oct  8 06:29:02 np0005475493 busy_borg[299259]:                "ceph.with_tpm": "0"
Oct  8 06:29:02 np0005475493 busy_borg[299259]:            },
Oct  8 06:29:02 np0005475493 busy_borg[299259]:            "type": "block",
Oct  8 06:29:02 np0005475493 busy_borg[299259]:            "vg_name": "ceph_vg0"
Oct  8 06:29:02 np0005475493 busy_borg[299259]:        }
Oct  8 06:29:02 np0005475493 busy_borg[299259]:    ]
Oct  8 06:29:02 np0005475493 busy_borg[299259]: }
Oct  8 06:29:02 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  8 06:29:02 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  8 06:29:02 np0005475493 systemd[1]: libpod-8cd177beba76edccf526909df31a1c982c1bd44d2920a7b64ea1d4d4c76a1135.scope: Deactivated successfully.
Oct  8 06:29:02 np0005475493 podman[299243]: 2025-10-08 10:29:02.928679333 +0000 UTC m=+0.592812087 container died 8cd177beba76edccf526909df31a1c982c1bd44d2920a7b64ea1d4d4c76a1135 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_borg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  8 06:29:02 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:29:02 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:29:02 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:29:02.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:29:02 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1338: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Oct  8 06:29:03 np0005475493 systemd[1]: var-lib-containers-storage-overlay-0c118635e7596082c7be1bbe0502e9f48ab4103d5d3a92eeb004560355062106-merged.mount: Deactivated successfully.
Oct  8 06:29:03 np0005475493 podman[299243]: 2025-10-08 10:29:03.138583894 +0000 UTC m=+0.802716658 container remove 8cd177beba76edccf526909df31a1c982c1bd44d2920a7b64ea1d4d4c76a1135 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_borg, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  8 06:29:03 np0005475493 systemd[1]: libpod-conmon-8cd177beba76edccf526909df31a1c982c1bd44d2920a7b64ea1d4d4c76a1135.scope: Deactivated successfully.
Oct  8 06:29:03 np0005475493 podman[299279]: 2025-10-08 10:29:03.254153714 +0000 UTC m=+0.220447164 container health_status 750e81e9592502f2fa35d047c8ae9bd75eb38ed415148db558454f5281397e7f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3)
Oct  8 06:29:03 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:29:03 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:29:03 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:29:03.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:29:03 np0005475493 podman[299397]: 2025-10-08 10:29:03.799243102 +0000 UTC m=+0.081555447 container create c8115a747135c6e811c979c60aee78e9b00613117ffaf14a1d8a430e17c9feb6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_leavitt, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS)
Oct  8 06:29:03 np0005475493 podman[299397]: 2025-10-08 10:29:03.744517026 +0000 UTC m=+0.026829401 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 06:29:03 np0005475493 systemd[1]: Started libpod-conmon-c8115a747135c6e811c979c60aee78e9b00613117ffaf14a1d8a430e17c9feb6.scope.
Oct  8 06:29:03 np0005475493 systemd[1]: Started libcrun container.
Oct  8 06:29:04 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:29:03 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  8 06:29:04 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:29:03 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  8 06:29:04 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:29:03 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 06:29:04 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:29:04 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  8 06:29:04 np0005475493 podman[299397]: 2025-10-08 10:29:04.004066779 +0000 UTC m=+0.286379144 container init c8115a747135c6e811c979c60aee78e9b00613117ffaf14a1d8a430e17c9feb6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_leavitt, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct  8 06:29:04 np0005475493 podman[299397]: 2025-10-08 10:29:04.011741467 +0000 UTC m=+0.294053832 container start c8115a747135c6e811c979c60aee78e9b00613117ffaf14a1d8a430e17c9feb6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_leavitt, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Oct  8 06:29:04 np0005475493 zen_leavitt[299413]: 167 167
Oct  8 06:29:04 np0005475493 systemd[1]: libpod-c8115a747135c6e811c979c60aee78e9b00613117ffaf14a1d8a430e17c9feb6.scope: Deactivated successfully.
Oct  8 06:29:04 np0005475493 conmon[299413]: conmon c8115a747135c6e811c9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c8115a747135c6e811c979c60aee78e9b00613117ffaf14a1d8a430e17c9feb6.scope/container/memory.events
Oct  8 06:29:04 np0005475493 podman[299397]: 2025-10-08 10:29:04.055356433 +0000 UTC m=+0.337668828 container attach c8115a747135c6e811c979c60aee78e9b00613117ffaf14a1d8a430e17c9feb6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_leavitt, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Oct  8 06:29:04 np0005475493 podman[299397]: 2025-10-08 10:29:04.056987966 +0000 UTC m=+0.339300341 container died c8115a747135c6e811c979c60aee78e9b00613117ffaf14a1d8a430e17c9feb6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_leavitt, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  8 06:29:04 np0005475493 systemd[1]: var-lib-containers-storage-overlay-1a2fdb29e1cfc2ef52b78fc031ac9e99f9c0cda1766335aafd13f2f5c49127e1-merged.mount: Deactivated successfully.
Oct  8 06:29:04 np0005475493 nova_compute[262220]: 2025-10-08 10:29:04.184 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:29:04 np0005475493 podman[299397]: 2025-10-08 10:29:04.307545627 +0000 UTC m=+0.589858012 container remove c8115a747135c6e811c979c60aee78e9b00613117ffaf14a1d8a430e17c9feb6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_leavitt, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Oct  8 06:29:04 np0005475493 systemd[1]: libpod-conmon-c8115a747135c6e811c979c60aee78e9b00613117ffaf14a1d8a430e17c9feb6.scope: Deactivated successfully.
Oct  8 06:29:04 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:29:04 np0005475493 podman[299438]: 2025-10-08 10:29:04.620672807 +0000 UTC m=+0.125400380 container create d7ff7ffe5d8101e40e71445440970e76b40ef6dbab9c32d1fdd123433d8908ef (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_aryabhata, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0)
Oct  8 06:29:04 np0005475493 podman[299438]: 2025-10-08 10:29:04.538464229 +0000 UTC m=+0.043191812 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 06:29:04 np0005475493 systemd[1]: Started libpod-conmon-d7ff7ffe5d8101e40e71445440970e76b40ef6dbab9c32d1fdd123433d8908ef.scope.
Oct  8 06:29:04 np0005475493 systemd[1]: Started libcrun container.
Oct  8 06:29:04 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28e6eda10f905cdf74d5e4b78145586c07dcd45de90d4dcf4e61ce0ea10b3fcd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  8 06:29:04 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28e6eda10f905cdf74d5e4b78145586c07dcd45de90d4dcf4e61ce0ea10b3fcd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 06:29:04 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28e6eda10f905cdf74d5e4b78145586c07dcd45de90d4dcf4e61ce0ea10b3fcd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 06:29:04 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28e6eda10f905cdf74d5e4b78145586c07dcd45de90d4dcf4e61ce0ea10b3fcd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  8 06:29:04 np0005475493 podman[299438]: 2025-10-08 10:29:04.864508469 +0000 UTC m=+0.369236112 container init d7ff7ffe5d8101e40e71445440970e76b40ef6dbab9c32d1fdd123433d8908ef (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_aryabhata, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  8 06:29:04 np0005475493 podman[299438]: 2025-10-08 10:29:04.872636753 +0000 UTC m=+0.377364336 container start d7ff7ffe5d8101e40e71445440970e76b40ef6dbab9c32d1fdd123433d8908ef (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_aryabhata, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  8 06:29:04 np0005475493 podman[299438]: 2025-10-08 10:29:04.926479291 +0000 UTC m=+0.431206964 container attach d7ff7ffe5d8101e40e71445440970e76b40ef6dbab9c32d1fdd123433d8908ef (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_aryabhata, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  8 06:29:04 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:29:04 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:29:04 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:29:04.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:29:04 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1339: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 1 op/s
Oct  8 06:29:05 np0005475493 lvm[299529]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct  8 06:29:05 np0005475493 lvm[299529]: VG ceph_vg0 finished
Oct  8 06:29:05 np0005475493 brave_aryabhata[299454]: {}
Oct  8 06:29:05 np0005475493 podman[299438]: 2025-10-08 10:29:05.64173745 +0000 UTC m=+1.146465023 container died d7ff7ffe5d8101e40e71445440970e76b40ef6dbab9c32d1fdd123433d8908ef (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_aryabhata, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Oct  8 06:29:05 np0005475493 systemd[1]: libpod-d7ff7ffe5d8101e40e71445440970e76b40ef6dbab9c32d1fdd123433d8908ef.scope: Deactivated successfully.
Oct  8 06:29:05 np0005475493 systemd[1]: libpod-d7ff7ffe5d8101e40e71445440970e76b40ef6dbab9c32d1fdd123433d8908ef.scope: Consumed 1.260s CPU time.
Oct  8 06:29:05 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:29:05 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:29:05 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:29:05.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:29:05 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:29:05] "GET /metrics HTTP/1.1" 200 48452 "" "Prometheus/2.51.0"
Oct  8 06:29:05 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:29:05] "GET /metrics HTTP/1.1" 200 48452 "" "Prometheus/2.51.0"
Oct  8 06:29:05 np0005475493 systemd[1]: var-lib-containers-storage-overlay-28e6eda10f905cdf74d5e4b78145586c07dcd45de90d4dcf4e61ce0ea10b3fcd-merged.mount: Deactivated successfully.
Oct  8 06:29:06 np0005475493 podman[299438]: 2025-10-08 10:29:06.046771984 +0000 UTC m=+1.551499567 container remove d7ff7ffe5d8101e40e71445440970e76b40ef6dbab9c32d1fdd123433d8908ef (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_aryabhata, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct  8 06:29:06 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  8 06:29:06 np0005475493 systemd[1]: libpod-conmon-d7ff7ffe5d8101e40e71445440970e76b40ef6dbab9c32d1fdd123433d8908ef.scope: Deactivated successfully.
Oct  8 06:29:06 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:29:06 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  8 06:29:06 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:29:06 np0005475493 nova_compute[262220]: 2025-10-08 10:29:06.547 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:29:06 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:29:06 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:29:06 np0005475493 nova_compute[262220]: 2025-10-08 10:29:06.887 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:29:06 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:29:06 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:29:06 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:29:06.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:29:06 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1340: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  8 06:29:07 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:29:07.263Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct  8 06:29:07 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:29:07.264Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:29:07 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:29:07 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:29:07 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:29:07.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:29:07 np0005475493 podman[299577]: 2025-10-08 10:29:07.814923209 +0000 UTC m=+0.064176784 container health_status 1ece418ccdf19a8019e8be8e665c938dc3c333817a211973536e2416f095c311 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=multipathd, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=multipathd)
Oct  8 06:29:07 np0005475493 podman[299578]: 2025-10-08 10:29:07.815028692 +0000 UTC m=+0.063803861 container health_status 96c17e36c3e57b043a764fc4d64e20610b8683e4881cc1857643eca7aeb37784 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent)
Oct  8 06:29:08 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:29:08.904Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:29:08 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:29:08 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:29:08 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:29:08.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:29:08 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1341: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  8 06:29:09 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:29:08 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  8 06:29:09 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:29:08 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  8 06:29:09 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:29:08 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 06:29:09 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:29:09 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  8 06:29:09 np0005475493 nova_compute[262220]: 2025-10-08 10:29:09.187 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:29:09 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:29:09 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:29:09 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:29:09 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:29:09.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:29:10 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:29:10 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:29:10 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:29:10.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:29:10 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1342: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  8 06:29:11 np0005475493 nova_compute[262220]: 2025-10-08 10:29:11.551 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:29:11 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:29:11 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:29:11 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:29:11.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:29:12 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:29:12 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:29:12 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:29:12.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:29:12 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1343: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  8 06:29:13 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:29:13 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:29:13 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:29:13.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:29:14 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:29:13 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  8 06:29:14 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:29:13 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  8 06:29:14 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:29:13 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 06:29:14 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:29:14 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  8 06:29:14 np0005475493 nova_compute[262220]: 2025-10-08 10:29:14.189 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:29:14 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:29:14 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:29:14 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:29:14 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:29:14.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:29:14 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1344: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  8 06:29:15 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:29:15 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:29:15 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:29:15.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:29:15 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:29:15] "GET /metrics HTTP/1.1" 200 48452 "" "Prometheus/2.51.0"
Oct  8 06:29:15 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:29:15] "GET /metrics HTTP/1.1" 200 48452 "" "Prometheus/2.51.0"
Oct  8 06:29:16 np0005475493 nova_compute[262220]: 2025-10-08 10:29:16.555 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:29:16 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:29:16 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:29:16 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:29:16.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:29:16 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1345: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  8 06:29:17 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:29:17.264Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:29:17 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:29:17 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:29:17 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:29:17.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:29:17 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  8 06:29:17 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  8 06:29:17 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 06:29:17 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 06:29:18 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 06:29:18 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 06:29:18 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 06:29:18 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 06:29:18 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:29:18.905Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:29:18 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:29:18 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:29:18 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:29:18.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:29:18 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1346: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  8 06:29:19 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:29:18 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  8 06:29:19 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:29:18 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  8 06:29:19 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:29:18 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 06:29:19 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:29:19 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  8 06:29:19 np0005475493 nova_compute[262220]: 2025-10-08 10:29:19.191 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:29:19 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:29:19 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:29:19 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:29:19 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:29:19.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:29:20 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:29:20 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:29:20 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:29:20.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:29:20 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1347: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  8 06:29:21 np0005475493 nova_compute[262220]: 2025-10-08 10:29:21.557 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:29:21 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:29:21 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:29:21 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:29:21.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:29:22 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:29:22 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:29:22 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:29:22.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:29:22 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1348: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  8 06:29:23 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:29:23 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:29:23 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:29:23.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:29:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:29:23 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  8 06:29:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:29:23 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  8 06:29:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:29:23 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 06:29:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:29:24 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  8 06:29:24 np0005475493 nova_compute[262220]: 2025-10-08 10:29:24.191 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:29:24 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:29:24 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:29:24 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:29:24 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:29:24.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:29:24 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1349: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  8 06:29:25 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:29:25 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:29:25 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:29:25.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:29:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:29:25] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Oct  8 06:29:25 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:29:25] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Oct  8 06:29:25 np0005475493 podman[299667]: 2025-10-08 10:29:25.901195373 +0000 UTC m=+0.055674507 container health_status 2ac58bf9d2c7421f24478941b0f23af12cc30298ac84467db16bf52ecb1157c3 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=iscsid, io.buildah.version=1.41.3)
Oct  8 06:29:26 np0005475493 nova_compute[262220]: 2025-10-08 10:29:26.560 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  8 06:29:26 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:29:26 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:29:26 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:29:26.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:29:26 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1350: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  8 06:29:27 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:29:27.265Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:29:27 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:29:27 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:29:27 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:29:27.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:29:28 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:29:28.906Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct  8 06:29:28 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:29:28.906Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct  8 06:29:28 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:29:28 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:29:28 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:29:28.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:29:29 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:29:28 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  8 06:29:29 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:29:28 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  8 06:29:29 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:29:28 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 06:29:29 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1351: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  8 06:29:29 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:29:29 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  8 06:29:29 np0005475493 nova_compute[262220]: 2025-10-08 10:29:29.193 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  8 06:29:29 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:29:29 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:29:29 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:29:29 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:29:29.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:29:30 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:29:30 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:29:30 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:29:30.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:29:31 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1352: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  8 06:29:31 np0005475493 nova_compute[262220]: 2025-10-08 10:29:31.564 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  8 06:29:31 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:29:31 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:29:31 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:29:31.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:29:32 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  8 06:29:32 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  8 06:29:32 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:29:32 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:29:32 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:29:32.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:29:33 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1353: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  8 06:29:33 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:29:33 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:29:33 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:29:33.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:29:33 np0005475493 podman[299726]: 2025-10-08 10:29:33.728139052 +0000 UTC m=+0.113828994 container health_status 750e81e9592502f2fa35d047c8ae9bd75eb38ed415148db558454f5281397e7f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  8 06:29:34 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:29:34 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  8 06:29:34 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:29:34 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  8 06:29:34 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:29:34 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 06:29:34 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:29:34 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  8 06:29:34 np0005475493 nova_compute[262220]: 2025-10-08 10:29:34.194 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  8 06:29:34 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:29:34 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:29:34 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:29:34 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:29:34.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:29:35 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1354: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  8 06:29:35 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:29:35 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:29:35 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:29:35.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:29:35 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:29:35] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Oct  8 06:29:35 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:29:35] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Oct  8 06:29:36 np0005475493 nova_compute[262220]: 2025-10-08 10:29:36.566 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  8 06:29:37 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:29:37 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:29:37 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:29:37.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:29:37 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1355: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  8 06:29:37 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:29:37.265Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:29:37 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:29:37 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:29:37 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:29:37.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:29:38 np0005475493 podman[299759]: 2025-10-08 10:29:38.576801189 +0000 UTC m=+0.059369078 container health_status 1ece418ccdf19a8019e8be8e665c938dc3c333817a211973536e2416f095c311 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  8 06:29:38 np0005475493 podman[299760]: 2025-10-08 10:29:38.593882713 +0000 UTC m=+0.071300445 container health_status 96c17e36c3e57b043a764fc4d64e20610b8683e4881cc1857643eca7aeb37784 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  8 06:29:38 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:29:38.906Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:29:39 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:29:38 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  8 06:29:39 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:29:38 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  8 06:29:39 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:29:38 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 06:29:39 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:29:39 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  8 06:29:39 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:29:39 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:29:39 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:29:39.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:29:39 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1356: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  8 06:29:39 np0005475493 nova_compute[262220]: 2025-10-08 10:29:39.195 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  8 06:29:39 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:29:39 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:29:39 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:29:39 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:29:39.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:29:41 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:29:41 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1357: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  8 06:29:41 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:29:41 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:29:41.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:29:41 np0005475493 nova_compute[262220]: 2025-10-08 10:29:41.570 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  8 06:29:41 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:29:41 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:29:41 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:29:41.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:29:43 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1358: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  8 06:29:43 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:29:43 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:29:43 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:29:43.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:29:43 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:29:43 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:29:43 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:29:43.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:29:44 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:29:43 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  8 06:29:44 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:29:43 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  8 06:29:44 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:29:43 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 06:29:44 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:29:44 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  8 06:29:44 np0005475493 nova_compute[262220]: 2025-10-08 10:29:44.197 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  8 06:29:44 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:29:45 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1359: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  8 06:29:45 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:29:45 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:29:45 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:29:45.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:29:45 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:29:45 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:29:45 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:29:45.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:29:45 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:29:45] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Oct  8 06:29:45 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:29:45] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Oct  8 06:29:46 np0005475493 nova_compute[262220]: 2025-10-08 10:29:46.574 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  8 06:29:47 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1360: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  8 06:29:47 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:29:47 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:29:47 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:29:47.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:29:47 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:29:47.266Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct  8 06:29:47 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:29:47 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:29:47 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:29:47.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:29:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] Optimize plan auto_2025-10-08_10:29:47
Oct  8 06:29:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  8 06:29:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] do_upmap
Oct  8 06:29:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] pools ['volumes', 'vms', 'cephfs.cephfs.data', 'backups', 'images', '.nfs', '.rgw.root', 'default.rgw.control', 'cephfs.cephfs.meta', '.mgr', 'default.rgw.log', 'default.rgw.meta']
Oct  8 06:29:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] prepared 0/10 upmap changes
Oct  8 06:29:47 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  8 06:29:47 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  8 06:29:47 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 06:29:47 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 06:29:48 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 06:29:48 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 06:29:48 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 06:29:48 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 06:29:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] _maybe_adjust
Oct  8 06:29:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:29:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  8 06:29:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:29:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  8 06:29:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:29:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 06:29:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:29:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 06:29:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:29:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Oct  8 06:29:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:29:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  8 06:29:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:29:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 06:29:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:29:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct  8 06:29:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:29:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  8 06:29:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:29:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 06:29:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:29:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  8 06:29:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:29:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  8 06:29:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  8 06:29:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  8 06:29:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  8 06:29:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  8 06:29:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  8 06:29:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  8 06:29:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  8 06:29:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  8 06:29:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  8 06:29:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  8 06:29:48 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:29:48.908Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:29:49 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:29:48 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  8 06:29:49 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:29:48 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  8 06:29:49 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:29:48 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 06:29:49 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:29:49 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  8 06:29:49 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1361: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  8 06:29:49 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:29:49 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:29:49 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:29:49.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:29:49 np0005475493 nova_compute[262220]: 2025-10-08 10:29:49.198 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:29:49 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:29:49 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:29:49 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:29:49 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:29:49.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:29:51 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1362: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  8 06:29:51 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:29:51 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:29:51 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:29:51.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:29:51 np0005475493 nova_compute[262220]: 2025-10-08 10:29:51.578 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:29:51 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:29:51 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 06:29:51 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:29:51.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 06:29:53 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1363: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  8 06:29:53 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:29:53 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:29:53 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:29:53.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:29:53 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:29:53 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:29:53 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:29:53.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:29:54 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:29:53 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  8 06:29:54 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:29:53 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  8 06:29:54 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:29:53 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 06:29:54 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:29:54 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  8 06:29:54 np0005475493 nova_compute[262220]: 2025-10-08 10:29:54.201 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:29:54 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:29:54 np0005475493 nova_compute[262220]: 2025-10-08 10:29:54.886 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:29:55 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1364: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  8 06:29:55 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:29:55 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:29:55 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:29:55.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:29:55 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:29:55 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:29:55 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:29:55.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:29:55 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:29:55] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Oct  8 06:29:55 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:29:55] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Oct  8 06:29:55 np0005475493 nova_compute[262220]: 2025-10-08 10:29:55.886 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:29:56 np0005475493 podman[299847]: 2025-10-08 10:29:56.098890551 +0000 UTC m=+0.083235572 container health_status 2ac58bf9d2c7421f24478941b0f23af12cc30298ac84467db16bf52ecb1157c3 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, container_name=iscsid, org.label-schema.build-date=20251001, config_id=iscsid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct  8 06:29:56 np0005475493 nova_compute[262220]: 2025-10-08 10:29:56.580 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:29:56 np0005475493 nova_compute[262220]: 2025-10-08 10:29:56.887 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:29:56 np0005475493 nova_compute[262220]: 2025-10-08 10:29:56.888 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  8 06:29:56 np0005475493 nova_compute[262220]: 2025-10-08 10:29:56.888 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  8 06:29:57 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1365: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  8 06:29:57 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:29:57 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:29:57 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:29:57.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:29:57 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:29:57.267Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct  8 06:29:57 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:29:57.270Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:29:57 np0005475493 nova_compute[262220]: 2025-10-08 10:29:57.374 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  8 06:29:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:29:57.429 163175 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  8 06:29:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:29:57.429 163175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  8 06:29:57 np0005475493 ovn_metadata_agent[163169]: 2025-10-08 10:29:57.429 163175 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  8 06:29:57 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:29:57 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:29:57 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:29:57.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:29:57 np0005475493 nova_compute[262220]: 2025-10-08 10:29:57.886 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:29:57 np0005475493 nova_compute[262220]: 2025-10-08 10:29:57.887 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Oct  8 06:29:58 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:29:58.909Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:29:59 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:29:58 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  8 06:29:59 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:29:59 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  8 06:29:59 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:29:59 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 06:29:59 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:29:59 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  8 06:29:59 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1366: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  8 06:29:59 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:29:59 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:29:59 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:29:59.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:29:59 np0005475493 nova_compute[262220]: 2025-10-08 10:29:59.203 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:29:59 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:29:59 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:29:59 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 06:29:59 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:29:59.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 06:30:00 np0005475493 ceph-mon[73572]: log_channel(cluster) log [WRN] : overall HEALTH_WARN 1 failed cephadm daemon(s)
Oct  8 06:30:00 np0005475493 ceph-mon[73572]: overall HEALTH_WARN 1 failed cephadm daemon(s)
Oct  8 06:30:01 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1367: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  8 06:30:01 np0005475493 nova_compute[262220]: 2025-10-08 10:30:01.020 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:30:01 np0005475493 nova_compute[262220]: 2025-10-08 10:30:01.021 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:30:01 np0005475493 nova_compute[262220]: 2025-10-08 10:30:01.021 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:30:01 np0005475493 nova_compute[262220]: 2025-10-08 10:30:01.021 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  8 06:30:01 np0005475493 nova_compute[262220]: 2025-10-08 10:30:01.021 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:30:01 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:30:01 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:30:01 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:30:01.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:30:01 np0005475493 nova_compute[262220]: 2025-10-08 10:30:01.259 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  8 06:30:01 np0005475493 nova_compute[262220]: 2025-10-08 10:30:01.259 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  8 06:30:01 np0005475493 nova_compute[262220]: 2025-10-08 10:30:01.259 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  8 06:30:01 np0005475493 nova_compute[262220]: 2025-10-08 10:30:01.260 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  8 06:30:01 np0005475493 nova_compute[262220]: 2025-10-08 10:30:01.260 2 DEBUG oslo_concurrency.processutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  8 06:30:01 np0005475493 nova_compute[262220]: 2025-10-08 10:30:01.584 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:30:01 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct  8 06:30:01 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3116006053' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  8 06:30:01 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:30:01 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:30:01 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:30:01.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:30:01 np0005475493 nova_compute[262220]: 2025-10-08 10:30:01.719 2 DEBUG oslo_concurrency.processutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  8 06:30:01 np0005475493 nova_compute[262220]: 2025-10-08 10:30:01.890 2 WARNING nova.virt.libvirt.driver [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  8 06:30:01 np0005475493 nova_compute[262220]: 2025-10-08 10:30:01.892 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4524MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  8 06:30:01 np0005475493 nova_compute[262220]: 2025-10-08 10:30:01.892 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  8 06:30:01 np0005475493 nova_compute[262220]: 2025-10-08 10:30:01.892 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  8 06:30:02 np0005475493 nova_compute[262220]: 2025-10-08 10:30:02.239 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  8 06:30:02 np0005475493 nova_compute[262220]: 2025-10-08 10:30:02.240 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  8 06:30:02 np0005475493 nova_compute[262220]: 2025-10-08 10:30:02.399 2 DEBUG nova.scheduler.client.report [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Refreshing inventories for resource provider 62e4b021-d3ae-43f9-883d-805e2c7d21a2 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Oct  8 06:30:02 np0005475493 nova_compute[262220]: 2025-10-08 10:30:02.582 2 DEBUG nova.scheduler.client.report [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Updating ProviderTree inventory for provider 62e4b021-d3ae-43f9-883d-805e2c7d21a2 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Oct  8 06:30:02 np0005475493 nova_compute[262220]: 2025-10-08 10:30:02.582 2 DEBUG nova.compute.provider_tree [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Updating inventory in ProviderTree for provider 62e4b021-d3ae-43f9-883d-805e2c7d21a2 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct  8 06:30:02 np0005475493 nova_compute[262220]: 2025-10-08 10:30:02.629 2 DEBUG nova.scheduler.client.report [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Refreshing aggregate associations for resource provider 62e4b021-d3ae-43f9-883d-805e2c7d21a2, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Oct  8 06:30:02 np0005475493 nova_compute[262220]: 2025-10-08 10:30:02.649 2 DEBUG nova.scheduler.client.report [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Refreshing trait associations for resource provider 62e4b021-d3ae-43f9-883d-805e2c7d21a2, traits: HW_CPU_X86_CLMUL,HW_CPU_X86_AVX,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_FMA3,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_FDC,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSE4A,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_ACCELERATORS,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_F16C,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SSE41,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_BMI2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_AMD_SVM,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_MMX,HW_CPU_X86_AESNI,COMPUTE_STORAGE_BUS_SATA,COMPUTE_RESCUE_BFV,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_AVX2,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_ABM,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_SHA,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_SSSE3,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SVM,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_BMI,HW_CPU_X86_SSE2 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Oct  8 06:30:02 np0005475493 nova_compute[262220]: 2025-10-08 10:30:02.667 2 DEBUG oslo_concurrency.processutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  8 06:30:02 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  8 06:30:02 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  8 06:30:03 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1368: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  8 06:30:03 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:30:03 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:30:03 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:30:03.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:30:03 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct  8 06:30:03 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2598333599' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  8 06:30:03 np0005475493 nova_compute[262220]: 2025-10-08 10:30:03.137 2 DEBUG oslo_concurrency.processutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  8 06:30:03 np0005475493 nova_compute[262220]: 2025-10-08 10:30:03.144 2 DEBUG nova.compute.provider_tree [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Inventory has not changed in ProviderTree for provider: 62e4b021-d3ae-43f9-883d-805e2c7d21a2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  8 06:30:03 np0005475493 nova_compute[262220]: 2025-10-08 10:30:03.169 2 DEBUG nova.scheduler.client.report [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Inventory has not changed for provider 62e4b021-d3ae-43f9-883d-805e2c7d21a2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  8 06:30:03 np0005475493 nova_compute[262220]: 2025-10-08 10:30:03.171 2 DEBUG nova.compute.resource_tracker [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  8 06:30:03 np0005475493 nova_compute[262220]: 2025-10-08 10:30:03.171 2 DEBUG oslo_concurrency.lockutils [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.279s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  8 06:30:03 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:30:03 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:30:03 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:30:03.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:30:03 np0005475493 podman[299922]: 2025-10-08 10:30:03.931871627 +0000 UTC m=+0.090382104 container health_status 750e81e9592502f2fa35d047c8ae9bd75eb38ed415148db558454f5281397e7f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct  8 06:30:04 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:30:04 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  8 06:30:04 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:30:04 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  8 06:30:04 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:30:04 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 06:30:04 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:30:04 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  8 06:30:04 np0005475493 nova_compute[262220]: 2025-10-08 10:30:04.037 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:30:04 np0005475493 nova_compute[262220]: 2025-10-08 10:30:04.204 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:30:04 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:30:05 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1369: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  8 06:30:05 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:30:05 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:30:05 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:30:05.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:30:05 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:30:05 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:30:05 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:30:05.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:30:05 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:30:05] "GET /metrics HTTP/1.1" 200 48452 "" "Prometheus/2.51.0"
Oct  8 06:30:05 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:30:05] "GET /metrics HTTP/1.1" 200 48452 "" "Prometheus/2.51.0"
Oct  8 06:30:06 np0005475493 nova_compute[262220]: 2025-10-08 10:30:06.588 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:30:07 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1370: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  8 06:30:07 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:30:07 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:30:07 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:30:07.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:30:07 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:30:07.271Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:30:07 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  8 06:30:07 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  8 06:30:07 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct  8 06:30:07 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  8 06:30:07 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1371: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct  8 06:30:07 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct  8 06:30:07 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:30:07 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct  8 06:30:07 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:30:07 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct  8 06:30:07 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  8 06:30:07 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct  8 06:30:07 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  8 06:30:07 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  8 06:30:07 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  8 06:30:07 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:30:07 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:30:07 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:30:07.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:30:07 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  8 06:30:07 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:30:07 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:30:07 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  8 06:30:07 np0005475493 podman[300128]: 2025-10-08 10:30:07.917055693 +0000 UTC m=+0.046527481 container create d098acd2f27010e81bb06ced4f2d87fffa714b501601f81bd44318b5f5d8898f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_nobel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Oct  8 06:30:07 np0005475493 systemd[1]: Started libpod-conmon-d098acd2f27010e81bb06ced4f2d87fffa714b501601f81bd44318b5f5d8898f.scope.
Oct  8 06:30:07 np0005475493 podman[300128]: 2025-10-08 10:30:07.89724855 +0000 UTC m=+0.026720348 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 06:30:08 np0005475493 systemd[1]: Started libcrun container.
Oct  8 06:30:08 np0005475493 podman[300128]: 2025-10-08 10:30:08.024248831 +0000 UTC m=+0.153720629 container init d098acd2f27010e81bb06ced4f2d87fffa714b501601f81bd44318b5f5d8898f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_nobel, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  8 06:30:08 np0005475493 podman[300128]: 2025-10-08 10:30:08.034998551 +0000 UTC m=+0.164470369 container start d098acd2f27010e81bb06ced4f2d87fffa714b501601f81bd44318b5f5d8898f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_nobel, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct  8 06:30:08 np0005475493 podman[300128]: 2025-10-08 10:30:08.03901522 +0000 UTC m=+0.168487028 container attach d098acd2f27010e81bb06ced4f2d87fffa714b501601f81bd44318b5f5d8898f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_nobel, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  8 06:30:08 np0005475493 inspiring_nobel[300144]: 167 167
Oct  8 06:30:08 np0005475493 systemd[1]: libpod-d098acd2f27010e81bb06ced4f2d87fffa714b501601f81bd44318b5f5d8898f.scope: Deactivated successfully.
Oct  8 06:30:08 np0005475493 podman[300128]: 2025-10-08 10:30:08.043799755 +0000 UTC m=+0.173271543 container died d098acd2f27010e81bb06ced4f2d87fffa714b501601f81bd44318b5f5d8898f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_nobel, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  8 06:30:08 np0005475493 systemd[1]: var-lib-containers-storage-overlay-3295e923f97dc2889ae0f1b34cbccba0f83c4cf7b8573df253f1bd3e4a3c113d-merged.mount: Deactivated successfully.
Oct  8 06:30:08 np0005475493 podman[300128]: 2025-10-08 10:30:08.087954689 +0000 UTC m=+0.217426477 container remove d098acd2f27010e81bb06ced4f2d87fffa714b501601f81bd44318b5f5d8898f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_nobel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  8 06:30:08 np0005475493 systemd[1]: libpod-conmon-d098acd2f27010e81bb06ced4f2d87fffa714b501601f81bd44318b5f5d8898f.scope: Deactivated successfully.
Oct  8 06:30:08 np0005475493 podman[300170]: 2025-10-08 10:30:08.292684312 +0000 UTC m=+0.046180840 container create a449e64c6340a16fcc0c888322e9e369dee01def0617e07a31d2d9ff90060524 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_ptolemy, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  8 06:30:08 np0005475493 systemd[1]: Started libpod-conmon-a449e64c6340a16fcc0c888322e9e369dee01def0617e07a31d2d9ff90060524.scope.
Oct  8 06:30:08 np0005475493 podman[300170]: 2025-10-08 10:30:08.274068458 +0000 UTC m=+0.027565036 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 06:30:08 np0005475493 systemd[1]: Started libcrun container.
Oct  8 06:30:08 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a6e2a7dbdbc514f5696146d6b6450d357f951968b71509a420822c57456080c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  8 06:30:08 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a6e2a7dbdbc514f5696146d6b6450d357f951968b71509a420822c57456080c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 06:30:08 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a6e2a7dbdbc514f5696146d6b6450d357f951968b71509a420822c57456080c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 06:30:08 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a6e2a7dbdbc514f5696146d6b6450d357f951968b71509a420822c57456080c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  8 06:30:08 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a6e2a7dbdbc514f5696146d6b6450d357f951968b71509a420822c57456080c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  8 06:30:08 np0005475493 podman[300170]: 2025-10-08 10:30:08.384646016 +0000 UTC m=+0.138142544 container init a449e64c6340a16fcc0c888322e9e369dee01def0617e07a31d2d9ff90060524 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_ptolemy, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  8 06:30:08 np0005475493 podman[300170]: 2025-10-08 10:30:08.394977251 +0000 UTC m=+0.148473779 container start a449e64c6340a16fcc0c888322e9e369dee01def0617e07a31d2d9ff90060524 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_ptolemy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  8 06:30:08 np0005475493 podman[300170]: 2025-10-08 10:30:08.398252648 +0000 UTC m=+0.151749206 container attach a449e64c6340a16fcc0c888322e9e369dee01def0617e07a31d2d9ff90060524 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_ptolemy, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default)
Oct  8 06:30:08 np0005475493 zen_ptolemy[300186]: --> passed data devices: 0 physical, 1 LVM
Oct  8 06:30:08 np0005475493 zen_ptolemy[300186]: --> All data devices are unavailable
Oct  8 06:30:08 np0005475493 systemd[1]: libpod-a449e64c6340a16fcc0c888322e9e369dee01def0617e07a31d2d9ff90060524.scope: Deactivated successfully.
Oct  8 06:30:08 np0005475493 podman[300170]: 2025-10-08 10:30:08.730814649 +0000 UTC m=+0.484311217 container died a449e64c6340a16fcc0c888322e9e369dee01def0617e07a31d2d9ff90060524 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_ptolemy, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Oct  8 06:30:08 np0005475493 systemd[1]: var-lib-containers-storage-overlay-1a6e2a7dbdbc514f5696146d6b6450d357f951968b71509a420822c57456080c-merged.mount: Deactivated successfully.
Oct  8 06:30:08 np0005475493 podman[300170]: 2025-10-08 10:30:08.792211131 +0000 UTC m=+0.545707669 container remove a449e64c6340a16fcc0c888322e9e369dee01def0617e07a31d2d9ff90060524 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_ptolemy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  8 06:30:08 np0005475493 systemd[1]: libpod-conmon-a449e64c6340a16fcc0c888322e9e369dee01def0617e07a31d2d9ff90060524.scope: Deactivated successfully.
Oct  8 06:30:08 np0005475493 podman[300205]: 2025-10-08 10:30:08.837706138 +0000 UTC m=+0.064533005 container health_status 96c17e36c3e57b043a764fc4d64e20610b8683e4881cc1857643eca7aeb37784 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible)
Oct  8 06:30:08 np0005475493 podman[300202]: 2025-10-08 10:30:08.84363487 +0000 UTC m=+0.083462039 container health_status 1ece418ccdf19a8019e8be8e665c938dc3c333817a211973536e2416f095c311 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct  8 06:30:08 np0005475493 nova_compute[262220]: 2025-10-08 10:30:08.886 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:30:08 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:30:08.909Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:30:09 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:30:08 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  8 06:30:09 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:30:08 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  8 06:30:09 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:30:08 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 06:30:09 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:30:09 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  8 06:30:09 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:30:09 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:30:09 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:30:09.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:30:09 np0005475493 nova_compute[262220]: 2025-10-08 10:30:09.205 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:30:09 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1372: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct  8 06:30:09 np0005475493 podman[300339]: 2025-10-08 10:30:09.391349583 +0000 UTC m=+0.041698154 container create e6bb578b1d6167189d10c1039494f67adaab727e642ea3860d2b92efecd0468b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_lamarr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  8 06:30:09 np0005475493 systemd[1]: Started libpod-conmon-e6bb578b1d6167189d10c1039494f67adaab727e642ea3860d2b92efecd0468b.scope.
Oct  8 06:30:09 np0005475493 systemd[1]: Started libcrun container.
Oct  8 06:30:09 np0005475493 podman[300339]: 2025-10-08 10:30:09.45411378 +0000 UTC m=+0.104462351 container init e6bb578b1d6167189d10c1039494f67adaab727e642ea3860d2b92efecd0468b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_lamarr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  8 06:30:09 np0005475493 podman[300339]: 2025-10-08 10:30:09.459870097 +0000 UTC m=+0.110218668 container start e6bb578b1d6167189d10c1039494f67adaab727e642ea3860d2b92efecd0468b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_lamarr, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  8 06:30:09 np0005475493 podman[300339]: 2025-10-08 10:30:09.464625851 +0000 UTC m=+0.114974452 container attach e6bb578b1d6167189d10c1039494f67adaab727e642ea3860d2b92efecd0468b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_lamarr, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True)
Oct  8 06:30:09 np0005475493 naughty_lamarr[300356]: 167 167
Oct  8 06:30:09 np0005475493 systemd[1]: libpod-e6bb578b1d6167189d10c1039494f67adaab727e642ea3860d2b92efecd0468b.scope: Deactivated successfully.
Oct  8 06:30:09 np0005475493 podman[300339]: 2025-10-08 10:30:09.466627435 +0000 UTC m=+0.116976006 container died e6bb578b1d6167189d10c1039494f67adaab727e642ea3860d2b92efecd0468b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_lamarr, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  8 06:30:09 np0005475493 podman[300339]: 2025-10-08 10:30:09.374719863 +0000 UTC m=+0.025068454 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 06:30:09 np0005475493 systemd[1]: var-lib-containers-storage-overlay-685c1d224828c200b2ec478fed76961c9fd557d5bf14932b09fb554f15f25a03-merged.mount: Deactivated successfully.
Oct  8 06:30:09 np0005475493 podman[300339]: 2025-10-08 10:30:09.504123173 +0000 UTC m=+0.154471744 container remove e6bb578b1d6167189d10c1039494f67adaab727e642ea3860d2b92efecd0468b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_lamarr, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Oct  8 06:30:09 np0005475493 systemd[1]: libpod-conmon-e6bb578b1d6167189d10c1039494f67adaab727e642ea3860d2b92efecd0468b.scope: Deactivated successfully.
Oct  8 06:30:09 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:30:09 np0005475493 podman[300382]: 2025-10-08 10:30:09.689797637 +0000 UTC m=+0.048538605 container create 0e09e8d59d15a8b78092a6d5772dc618c02102dbfdc2008abac593a1aecbf79b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_pare, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Oct  8 06:30:09 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:30:09 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:30:09 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:30:09.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:30:09 np0005475493 systemd[1]: Started libpod-conmon-0e09e8d59d15a8b78092a6d5772dc618c02102dbfdc2008abac593a1aecbf79b.scope.
Oct  8 06:30:09 np0005475493 podman[300382]: 2025-10-08 10:30:09.667294067 +0000 UTC m=+0.026035095 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 06:30:09 np0005475493 systemd[1]: Started libcrun container.
Oct  8 06:30:09 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2df8e227a896d23638c0c760fce7667ae2276e216f53a6c6a846e74caf3bd965/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  8 06:30:09 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2df8e227a896d23638c0c760fce7667ae2276e216f53a6c6a846e74caf3bd965/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 06:30:09 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2df8e227a896d23638c0c760fce7667ae2276e216f53a6c6a846e74caf3bd965/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 06:30:09 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2df8e227a896d23638c0c760fce7667ae2276e216f53a6c6a846e74caf3bd965/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  8 06:30:09 np0005475493 podman[300382]: 2025-10-08 10:30:09.782181485 +0000 UTC m=+0.140922483 container init 0e09e8d59d15a8b78092a6d5772dc618c02102dbfdc2008abac593a1aecbf79b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_pare, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Oct  8 06:30:09 np0005475493 podman[300382]: 2025-10-08 10:30:09.788617584 +0000 UTC m=+0.147358562 container start 0e09e8d59d15a8b78092a6d5772dc618c02102dbfdc2008abac593a1aecbf79b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_pare, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  8 06:30:09 np0005475493 podman[300382]: 2025-10-08 10:30:09.793481242 +0000 UTC m=+0.152222230 container attach 0e09e8d59d15a8b78092a6d5772dc618c02102dbfdc2008abac593a1aecbf79b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_pare, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  8 06:30:09 np0005475493 nova_compute[262220]: 2025-10-08 10:30:09.886 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:30:10 np0005475493 silly_pare[300398]: {
Oct  8 06:30:10 np0005475493 silly_pare[300398]:    "1": [
Oct  8 06:30:10 np0005475493 silly_pare[300398]:        {
Oct  8 06:30:10 np0005475493 silly_pare[300398]:            "devices": [
Oct  8 06:30:10 np0005475493 silly_pare[300398]:                "/dev/loop3"
Oct  8 06:30:10 np0005475493 silly_pare[300398]:            ],
Oct  8 06:30:10 np0005475493 silly_pare[300398]:            "lv_name": "ceph_lv0",
Oct  8 06:30:10 np0005475493 silly_pare[300398]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  8 06:30:10 np0005475493 silly_pare[300398]:            "lv_size": "21470642176",
Oct  8 06:30:10 np0005475493 silly_pare[300398]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=787292cc-8154-50c4-9e00-e9be3e817149,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=85fe3e7b-5e0f-4a19-934c-310215b2e933,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct  8 06:30:10 np0005475493 silly_pare[300398]:            "lv_uuid": "znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj",
Oct  8 06:30:10 np0005475493 silly_pare[300398]:            "name": "ceph_lv0",
Oct  8 06:30:10 np0005475493 silly_pare[300398]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  8 06:30:10 np0005475493 silly_pare[300398]:            "tags": {
Oct  8 06:30:10 np0005475493 silly_pare[300398]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  8 06:30:10 np0005475493 silly_pare[300398]:                "ceph.block_uuid": "znmSQo-tekK-7afD-UXlG-e1DO-4ymI-GA63qj",
Oct  8 06:30:10 np0005475493 silly_pare[300398]:                "ceph.cephx_lockbox_secret": "",
Oct  8 06:30:10 np0005475493 silly_pare[300398]:                "ceph.cluster_fsid": "787292cc-8154-50c4-9e00-e9be3e817149",
Oct  8 06:30:10 np0005475493 silly_pare[300398]:                "ceph.cluster_name": "ceph",
Oct  8 06:30:10 np0005475493 silly_pare[300398]:                "ceph.crush_device_class": "",
Oct  8 06:30:10 np0005475493 silly_pare[300398]:                "ceph.encrypted": "0",
Oct  8 06:30:10 np0005475493 silly_pare[300398]:                "ceph.osd_fsid": "85fe3e7b-5e0f-4a19-934c-310215b2e933",
Oct  8 06:30:10 np0005475493 silly_pare[300398]:                "ceph.osd_id": "1",
Oct  8 06:30:10 np0005475493 silly_pare[300398]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  8 06:30:10 np0005475493 silly_pare[300398]:                "ceph.type": "block",
Oct  8 06:30:10 np0005475493 silly_pare[300398]:                "ceph.vdo": "0",
Oct  8 06:30:10 np0005475493 silly_pare[300398]:                "ceph.with_tpm": "0"
Oct  8 06:30:10 np0005475493 silly_pare[300398]:            },
Oct  8 06:30:10 np0005475493 silly_pare[300398]:            "type": "block",
Oct  8 06:30:10 np0005475493 silly_pare[300398]:            "vg_name": "ceph_vg0"
Oct  8 06:30:10 np0005475493 silly_pare[300398]:        }
Oct  8 06:30:10 np0005475493 silly_pare[300398]:    ]
Oct  8 06:30:10 np0005475493 silly_pare[300398]: }
Oct  8 06:30:10 np0005475493 systemd[1]: libpod-0e09e8d59d15a8b78092a6d5772dc618c02102dbfdc2008abac593a1aecbf79b.scope: Deactivated successfully.
Oct  8 06:30:10 np0005475493 podman[300382]: 2025-10-08 10:30:10.115243603 +0000 UTC m=+0.473984641 container died 0e09e8d59d15a8b78092a6d5772dc618c02102dbfdc2008abac593a1aecbf79b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_pare, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid)
Oct  8 06:30:10 np0005475493 systemd[1]: var-lib-containers-storage-overlay-2df8e227a896d23638c0c760fce7667ae2276e216f53a6c6a846e74caf3bd965-merged.mount: Deactivated successfully.
Oct  8 06:30:10 np0005475493 podman[300382]: 2025-10-08 10:30:10.16814241 +0000 UTC m=+0.526883388 container remove 0e09e8d59d15a8b78092a6d5772dc618c02102dbfdc2008abac593a1aecbf79b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_pare, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  8 06:30:10 np0005475493 systemd[1]: libpod-conmon-0e09e8d59d15a8b78092a6d5772dc618c02102dbfdc2008abac593a1aecbf79b.scope: Deactivated successfully.
Oct  8 06:30:10 np0005475493 podman[300511]: 2025-10-08 10:30:10.73242903 +0000 UTC m=+0.037121405 container create 738c2e4f6966399859fe9bf4901e85de92989a7e589f2ed77efbb34f73680e7f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_lumiere, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True)
Oct  8 06:30:10 np0005475493 systemd[1]: Started libpod-conmon-738c2e4f6966399859fe9bf4901e85de92989a7e589f2ed77efbb34f73680e7f.scope.
Oct  8 06:30:10 np0005475493 systemd[1]: Started libcrun container.
Oct  8 06:30:10 np0005475493 podman[300511]: 2025-10-08 10:30:10.716845355 +0000 UTC m=+0.021537750 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 06:30:10 np0005475493 podman[300511]: 2025-10-08 10:30:10.824373964 +0000 UTC m=+0.129066529 container init 738c2e4f6966399859fe9bf4901e85de92989a7e589f2ed77efbb34f73680e7f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_lumiere, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  8 06:30:10 np0005475493 podman[300511]: 2025-10-08 10:30:10.833642965 +0000 UTC m=+0.138335330 container start 738c2e4f6966399859fe9bf4901e85de92989a7e589f2ed77efbb34f73680e7f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_lumiere, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Oct  8 06:30:10 np0005475493 podman[300511]: 2025-10-08 10:30:10.837104608 +0000 UTC m=+0.141796983 container attach 738c2e4f6966399859fe9bf4901e85de92989a7e589f2ed77efbb34f73680e7f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_lumiere, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  8 06:30:10 np0005475493 zealous_lumiere[300525]: 167 167
Oct  8 06:30:10 np0005475493 systemd[1]: libpod-738c2e4f6966399859fe9bf4901e85de92989a7e589f2ed77efbb34f73680e7f.scope: Deactivated successfully.
Oct  8 06:30:10 np0005475493 podman[300511]: 2025-10-08 10:30:10.841176129 +0000 UTC m=+0.145868504 container died 738c2e4f6966399859fe9bf4901e85de92989a7e589f2ed77efbb34f73680e7f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_lumiere, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Oct  8 06:30:10 np0005475493 systemd[1]: var-lib-containers-storage-overlay-b65c6f77ac492eaf6911505614378ec441239cb6ccc15719a57b43fb190b7088-merged.mount: Deactivated successfully.
Oct  8 06:30:10 np0005475493 podman[300511]: 2025-10-08 10:30:10.889081424 +0000 UTC m=+0.193773799 container remove 738c2e4f6966399859fe9bf4901e85de92989a7e589f2ed77efbb34f73680e7f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_lumiere, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  8 06:30:10 np0005475493 systemd[1]: libpod-conmon-738c2e4f6966399859fe9bf4901e85de92989a7e589f2ed77efbb34f73680e7f.scope: Deactivated successfully.
Oct  8 06:30:10 np0005475493 nova_compute[262220]: 2025-10-08 10:30:10.902 2 DEBUG oslo_service.periodic_task [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  8 06:30:10 np0005475493 nova_compute[262220]: 2025-10-08 10:30:10.904 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Oct  8 06:30:11 np0005475493 nova_compute[262220]: 2025-10-08 10:30:11.015 2 DEBUG nova.compute.manager [None req-61a40f93-9e8a-4565-859a-a75490209035 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Oct  8 06:30:11 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:30:11 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:30:11 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:30:11.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:30:11 np0005475493 podman[300553]: 2025-10-08 10:30:11.099715369 +0000 UTC m=+0.050970795 container create 5aeed10626f4b32e07c97221b0d5dd9e7d9d0adcb83bf07ca2929f424d8fe113 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_bohr, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct  8 06:30:11 np0005475493 systemd[1]: Started libpod-conmon-5aeed10626f4b32e07c97221b0d5dd9e7d9d0adcb83bf07ca2929f424d8fe113.scope.
Oct  8 06:30:11 np0005475493 systemd[1]: Started libcrun container.
Oct  8 06:30:11 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49ab9eb17c89b3c64ae678196cee872fc5188e32f7b635cda2bff7af1fbd9f64/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  8 06:30:11 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49ab9eb17c89b3c64ae678196cee872fc5188e32f7b635cda2bff7af1fbd9f64/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  8 06:30:11 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49ab9eb17c89b3c64ae678196cee872fc5188e32f7b635cda2bff7af1fbd9f64/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  8 06:30:11 np0005475493 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49ab9eb17c89b3c64ae678196cee872fc5188e32f7b635cda2bff7af1fbd9f64/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  8 06:30:11 np0005475493 podman[300553]: 2025-10-08 10:30:11.08464681 +0000 UTC m=+0.035902276 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  8 06:30:11 np0005475493 podman[300553]: 2025-10-08 10:30:11.17928588 +0000 UTC m=+0.130541386 container init 5aeed10626f4b32e07c97221b0d5dd9e7d9d0adcb83bf07ca2929f424d8fe113 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_bohr, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Oct  8 06:30:11 np0005475493 podman[300553]: 2025-10-08 10:30:11.190520585 +0000 UTC m=+0.141776011 container start 5aeed10626f4b32e07c97221b0d5dd9e7d9d0adcb83bf07ca2929f424d8fe113 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_bohr, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct  8 06:30:11 np0005475493 podman[300553]: 2025-10-08 10:30:11.195954741 +0000 UTC m=+0.147210247 container attach 5aeed10626f4b32e07c97221b0d5dd9e7d9d0adcb83bf07ca2929f424d8fe113 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_bohr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  8 06:30:11 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1373: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct  8 06:30:11 np0005475493 nova_compute[262220]: 2025-10-08 10:30:11.591 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:30:11 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:30:11 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:30:11 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:30:11.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:30:11 np0005475493 lvm[300646]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct  8 06:30:11 np0005475493 lvm[300646]: VG ceph_vg0 finished
Oct  8 06:30:11 np0005475493 gracious_bohr[300571]: {}
Oct  8 06:30:11 np0005475493 systemd[1]: libpod-5aeed10626f4b32e07c97221b0d5dd9e7d9d0adcb83bf07ca2929f424d8fe113.scope: Deactivated successfully.
Oct  8 06:30:11 np0005475493 systemd[1]: libpod-5aeed10626f4b32e07c97221b0d5dd9e7d9d0adcb83bf07ca2929f424d8fe113.scope: Consumed 1.213s CPU time.
Oct  8 06:30:11 np0005475493 podman[300553]: 2025-10-08 10:30:11.931347175 +0000 UTC m=+0.882602681 container died 5aeed10626f4b32e07c97221b0d5dd9e7d9d0adcb83bf07ca2929f424d8fe113 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_bohr, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Oct  8 06:30:11 np0005475493 systemd[1]: var-lib-containers-storage-overlay-49ab9eb17c89b3c64ae678196cee872fc5188e32f7b635cda2bff7af1fbd9f64-merged.mount: Deactivated successfully.
Oct  8 06:30:11 np0005475493 podman[300553]: 2025-10-08 10:30:11.990153663 +0000 UTC m=+0.941409079 container remove 5aeed10626f4b32e07c97221b0d5dd9e7d9d0adcb83bf07ca2929f424d8fe113 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_bohr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  8 06:30:12 np0005475493 systemd[1]: libpod-conmon-5aeed10626f4b32e07c97221b0d5dd9e7d9d0adcb83bf07ca2929f424d8fe113.scope: Deactivated successfully.
Oct  8 06:30:12 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  8 06:30:12 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:30:12 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  8 06:30:12 np0005475493 ceph-mon[73572]: log_channel(audit) log [INF] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:30:12 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:30:12 np0005475493 ceph-mon[73572]: from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' 
Oct  8 06:30:13 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:30:13 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:30:13 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:30:13.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:30:13 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1374: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct  8 06:30:13 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:30:13 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:30:13 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:30:13.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:30:14 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:30:13 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  8 06:30:14 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:30:14 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  8 06:30:14 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:30:14 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 06:30:14 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:30:14 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  8 06:30:14 np0005475493 nova_compute[262220]: 2025-10-08 10:30:14.207 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:30:14 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:30:15 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:30:15 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:30:15 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:30:15.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:30:15 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1375: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct  8 06:30:15 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:30:15 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:30:15 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:30:15.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:30:15 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:30:15] "GET /metrics HTTP/1.1" 200 48452 "" "Prometheus/2.51.0"
Oct  8 06:30:15 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:30:15] "GET /metrics HTTP/1.1" 200 48452 "" "Prometheus/2.51.0"
Oct  8 06:30:16 np0005475493 nova_compute[262220]: 2025-10-08 10:30:16.595 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:30:17 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:30:17 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:30:17 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:30:17.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:30:17 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:30:17.272Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:30:17 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1376: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct  8 06:30:17 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:30:17 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:30:17 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:30:17.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:30:17 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  8 06:30:17 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  8 06:30:17 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 06:30:17 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 06:30:18 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 06:30:18 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 06:30:18 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 06:30:18 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 06:30:18 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:30:18.911Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:30:19 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:30:18 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  8 06:30:19 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:30:18 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  8 06:30:19 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:30:18 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 06:30:19 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:30:19 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  8 06:30:19 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:30:19 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:30:19 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:30:19.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:30:19 np0005475493 nova_compute[262220]: 2025-10-08 10:30:19.209 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:30:19 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1377: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  8 06:30:19 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:30:19 np0005475493 ceph-mon[73572]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #81. Immutable memtables: 0.
Oct  8 06:30:19 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:30:19.577369) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  8 06:30:19 np0005475493 ceph-mon[73572]: rocksdb: [db/flush_job.cc:856] [default] [JOB 45] Flushing memtable with next log file: 81
Oct  8 06:30:19 np0005475493 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759919419577436, "job": 45, "event": "flush_started", "num_memtables": 1, "num_entries": 1314, "num_deletes": 255, "total_data_size": 2407130, "memory_usage": 2446752, "flush_reason": "Manual Compaction"}
Oct  8 06:30:19 np0005475493 ceph-mon[73572]: rocksdb: [db/flush_job.cc:885] [default] [JOB 45] Level-0 flush table #82: started
Oct  8 06:30:19 np0005475493 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759919419590807, "cf_name": "default", "job": 45, "event": "table_file_creation", "file_number": 82, "file_size": 2354481, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 36992, "largest_seqno": 38305, "table_properties": {"data_size": 2348246, "index_size": 3434, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1733, "raw_key_size": 13278, "raw_average_key_size": 19, "raw_value_size": 2335655, "raw_average_value_size": 3496, "num_data_blocks": 149, "num_entries": 668, "num_filter_entries": 668, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759919300, "oldest_key_time": 1759919300, "file_creation_time": 1759919419, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fe81d9b-468a-4413-adf1-4e4bd83383d4", "db_session_id": "KN4HYS7VUCE6V85JIQOU", "orig_file_number": 82, "seqno_to_time_mapping": "N/A"}}
Oct  8 06:30:19 np0005475493 ceph-mon[73572]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 45] Flush lasted 13451 microseconds, and 5398 cpu microseconds.
Oct  8 06:30:19 np0005475493 ceph-mon[73572]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  8 06:30:19 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:30:19.590847) [db/flush_job.cc:967] [default] [JOB 45] Level-0 flush table #82: 2354481 bytes OK
Oct  8 06:30:19 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:30:19.590870) [db/memtable_list.cc:519] [default] Level-0 commit table #82 started
Oct  8 06:30:19 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:30:19.592611) [db/memtable_list.cc:722] [default] Level-0 commit table #82: memtable #1 done
Oct  8 06:30:19 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:30:19.592624) EVENT_LOG_v1 {"time_micros": 1759919419592620, "job": 45, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  8 06:30:19 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:30:19.592646) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  8 06:30:19 np0005475493 ceph-mon[73572]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 45] Try to delete WAL files size 2401349, prev total WAL file size 2401349, number of live WAL files 2.
Oct  8 06:30:19 np0005475493 ceph-mon[73572]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000078.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  8 06:30:19 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:30:19.593999) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031303037' seq:72057594037927935, type:22 .. '6C6F676D0031323538' seq:0, type:0; will stop at (end)
Oct  8 06:30:19 np0005475493 ceph-mon[73572]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 46] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  8 06:30:19 np0005475493 ceph-mon[73572]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 45 Base level 0, inputs: [82(2299KB)], [80(11MB)]
Oct  8 06:30:19 np0005475493 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759919419594190, "job": 46, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [82], "files_L6": [80], "score": -1, "input_data_size": 14882499, "oldest_snapshot_seqno": -1}
Oct  8 06:30:19 np0005475493 ceph-mon[73572]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 46] Generated table #83: 6824 keys, 14719138 bytes, temperature: kUnknown
Oct  8 06:30:19 np0005475493 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759919419710154, "cf_name": "default", "job": 46, "event": "table_file_creation", "file_number": 83, "file_size": 14719138, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14674184, "index_size": 26794, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17093, "raw_key_size": 179545, "raw_average_key_size": 26, "raw_value_size": 14551831, "raw_average_value_size": 2132, "num_data_blocks": 1056, "num_entries": 6824, "num_filter_entries": 6824, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759916581, "oldest_key_time": 0, "file_creation_time": 1759919419, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fe81d9b-468a-4413-adf1-4e4bd83383d4", "db_session_id": "KN4HYS7VUCE6V85JIQOU", "orig_file_number": 83, "seqno_to_time_mapping": "N/A"}}
Oct  8 06:30:19 np0005475493 ceph-mon[73572]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  8 06:30:19 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:30:19.710484) [db/compaction/compaction_job.cc:1663] [default] [JOB 46] Compacted 1@0 + 1@6 files to L6 => 14719138 bytes
Oct  8 06:30:19 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:30:19.711977) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 128.2 rd, 126.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.2, 11.9 +0.0 blob) out(14.0 +0.0 blob), read-write-amplify(12.6) write-amplify(6.3) OK, records in: 7352, records dropped: 528 output_compression: NoCompression
Oct  8 06:30:19 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:30:19.712010) EVENT_LOG_v1 {"time_micros": 1759919419711995, "job": 46, "event": "compaction_finished", "compaction_time_micros": 116045, "compaction_time_cpu_micros": 58132, "output_level": 6, "num_output_files": 1, "total_output_size": 14719138, "num_input_records": 7352, "num_output_records": 6824, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  8 06:30:19 np0005475493 ceph-mon[73572]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000082.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  8 06:30:19 np0005475493 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759919419712925, "job": 46, "event": "table_file_deletion", "file_number": 82}
Oct  8 06:30:19 np0005475493 ceph-mon[73572]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000080.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  8 06:30:19 np0005475493 ceph-mon[73572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759919419716536, "job": 46, "event": "table_file_deletion", "file_number": 80}
Oct  8 06:30:19 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:30:19.593196) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  8 06:30:19 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:30:19.716628) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  8 06:30:19 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:30:19.716633) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  8 06:30:19 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:30:19.716635) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  8 06:30:19 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:30:19.716637) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  8 06:30:19 np0005475493 ceph-mon[73572]: rocksdb: (Original Log Time 2025/10/08-10:30:19.716639) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  8 06:30:19 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:30:19 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:30:19 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:30:19.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:30:21 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:30:21 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:30:21 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:30:21.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:30:21 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1378: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  8 06:30:21 np0005475493 nova_compute[262220]: 2025-10-08 10:30:21.599 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:30:21 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:30:21 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:30:21 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:30:21.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:30:23 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:30:23 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:30:23 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:30:23.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:30:23 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1379: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  8 06:30:23 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:30:23 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:30:23 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:30:23.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:30:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:30:23 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  8 06:30:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:30:23 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  8 06:30:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:30:23 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 06:30:24 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:30:24 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  8 06:30:24 np0005475493 nova_compute[262220]: 2025-10-08 10:30:24.255 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:30:24 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:30:25 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:30:25 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:30:25 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:30:25.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:30:25 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1380: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  8 06:30:25 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:30:25 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:30:25 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:30:25.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:30:25 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:30:25] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Oct  8 06:30:25 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:30:25] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Oct  8 06:30:26 np0005475493 nova_compute[262220]: 2025-10-08 10:30:26.602 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:30:26 np0005475493 podman[300732]: 2025-10-08 10:30:26.91733188 +0000 UTC m=+0.075189810 container health_status 2ac58bf9d2c7421f24478941b0f23af12cc30298ac84467db16bf52ecb1157c3 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_id=iscsid, container_name=iscsid, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct  8 06:30:27 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:30:27 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:30:27 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:30:27.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:30:27 np0005475493 systemd-logind[798]: New session 61 of user zuul.
Oct  8 06:30:27 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:30:27.273Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:30:27 np0005475493 systemd[1]: Started Session 61 of User zuul.
Oct  8 06:30:27 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1381: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  8 06:30:27 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:30:27 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:30:27 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:30:27.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:30:28 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:30:28.912Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct  8 06:30:28 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:30:28.913Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct  8 06:30:28 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:30:28.914Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:30:29 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:30:28 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  8 06:30:29 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:30:28 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  8 06:30:29 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:30:28 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 06:30:29 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:30:29 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  8 06:30:29 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:30:29 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:30:29 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:30:29.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:30:29 np0005475493 nova_compute[262220]: 2025-10-08 10:30:29.257 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:30:29 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1382: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  8 06:30:29 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:30:29 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:30:29 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:30:29 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:30:29.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:30:29 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.27563 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Oct  8 06:30:29 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.27268 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Oct  8 06:30:30 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.17466 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Oct  8 06:30:30 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.27575 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct  8 06:30:30 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.27274 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct  8 06:30:30 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.17475 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct  8 06:30:31 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status"} v 0)
Oct  8 06:30:31 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1971704690' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct  8 06:30:31 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:30:31 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:30:31 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:30:31.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:30:31 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1383: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  8 06:30:31 np0005475493 nova_compute[262220]: 2025-10-08 10:30:31.606 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:30:31 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:30:31 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:30:31 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:30:31.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:30:32 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  8 06:30:32 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  8 06:30:33 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:30:33 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:30:33 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:30:33.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:30:33 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1384: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  8 06:30:33 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:30:33 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:30:33 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:30:33.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:30:34 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:30:33 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  8 06:30:34 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:30:33 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  8 06:30:34 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:30:33 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 06:30:34 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:30:34 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  8 06:30:34 np0005475493 nova_compute[262220]: 2025-10-08 10:30:34.260 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:30:34 np0005475493 ovs-vsctl[301114]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Oct  8 06:30:34 np0005475493 podman[301124]: 2025-10-08 10:30:34.463771858 +0000 UTC m=+0.094881200 container health_status 750e81e9592502f2fa35d047c8ae9bd75eb38ed415148db558454f5281397e7f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  8 06:30:34 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:30:35 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:30:35 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:30:35 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:30:35.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:30:35 np0005475493 virtqemud[261885]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Oct  8 06:30:35 np0005475493 virtqemud[261885]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Oct  8 06:30:35 np0005475493 virtqemud[261885]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Oct  8 06:30:35 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1385: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  8 06:30:35 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.27596 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Oct  8 06:30:35 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:30:35] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Oct  8 06:30:35 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:30:35] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Oct  8 06:30:35 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:30:35 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:30:35 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:30:35.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:30:35 np0005475493 ceph-mds[95385]: mds.cephfs.compute-0.lphril asok_command: cache status {prefix=cache status} (starting...)
Oct  8 06:30:35 np0005475493 ceph-mds[95385]: mds.cephfs.compute-0.lphril Can't run that command on an inactive MDS!
Oct  8 06:30:35 np0005475493 lvm[301450]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct  8 06:30:35 np0005475493 lvm[301450]: VG ceph_vg0 finished
Oct  8 06:30:35 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "report"} v 0)
Oct  8 06:30:35 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Oct  8 06:30:36 np0005475493 ceph-mds[95385]: mds.cephfs.compute-0.lphril asok_command: client ls {prefix=client ls} (starting...)
Oct  8 06:30:36 np0005475493 ceph-mds[95385]: mds.cephfs.compute-0.lphril Can't run that command on an inactive MDS!
Oct  8 06:30:36 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.27608 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Oct  8 06:30:36 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.27620 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Oct  8 06:30:36 np0005475493 nova_compute[262220]: 2025-10-08 10:30:36.609 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:30:36 np0005475493 ceph-mds[95385]: mds.cephfs.compute-0.lphril asok_command: damage ls {prefix=damage ls} (starting...)
Oct  8 06:30:36 np0005475493 ceph-mds[95385]: mds.cephfs.compute-0.lphril Can't run that command on an inactive MDS!
Oct  8 06:30:36 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.27301 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Oct  8 06:30:36 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.17499 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Oct  8 06:30:36 np0005475493 ceph-mds[95385]: mds.cephfs.compute-0.lphril asok_command: dump loads {prefix=dump loads} (starting...)
Oct  8 06:30:36 np0005475493 ceph-mds[95385]: mds.cephfs.compute-0.lphril Can't run that command on an inactive MDS!
Oct  8 06:30:36 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "report"} v 0)
Oct  8 06:30:36 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2044351178' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Oct  8 06:30:36 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.27632 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Oct  8 06:30:36 np0005475493 ceph-mds[95385]: mds.cephfs.compute-0.lphril asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Oct  8 06:30:36 np0005475493 ceph-mds[95385]: mds.cephfs.compute-0.lphril Can't run that command on an inactive MDS!
Oct  8 06:30:37 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "report"} v 0)
Oct  8 06:30:37 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Oct  8 06:30:37 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:30:37 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  8 06:30:37 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:30:37.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  8 06:30:37 np0005475493 ceph-mds[95385]: mds.cephfs.compute-0.lphril asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Oct  8 06:30:37 np0005475493 ceph-mds[95385]: mds.cephfs.compute-0.lphril Can't run that command on an inactive MDS!
Oct  8 06:30:37 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.17517 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Oct  8 06:30:37 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.27325 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Oct  8 06:30:37 np0005475493 ceph-mds[95385]: mds.cephfs.compute-0.lphril asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Oct  8 06:30:37 np0005475493 ceph-mds[95385]: mds.cephfs.compute-0.lphril Can't run that command on an inactive MDS!
Oct  8 06:30:37 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:30:37.274Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:30:37 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  8 06:30:37 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/366398183' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  8 06:30:37 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1386: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  8 06:30:37 np0005475493 ceph-mds[95385]: mds.cephfs.compute-0.lphril asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Oct  8 06:30:37 np0005475493 ceph-mds[95385]: mds.cephfs.compute-0.lphril Can't run that command on an inactive MDS!
Oct  8 06:30:37 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.17532 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Oct  8 06:30:37 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.27343 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Oct  8 06:30:37 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.27665 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct  8 06:30:37 np0005475493 ceph-mds[95385]: mds.cephfs.compute-0.lphril asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Oct  8 06:30:37 np0005475493 ceph-mds[95385]: mds.cephfs.compute-0.lphril Can't run that command on an inactive MDS!
Oct  8 06:30:37 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config log"} v 0)
Oct  8 06:30:37 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4000713736' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Oct  8 06:30:37 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:30:37 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:30:37 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:30:37.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:30:37 np0005475493 ceph-mds[95385]: mds.cephfs.compute-0.lphril asok_command: get subtrees {prefix=get subtrees} (starting...)
Oct  8 06:30:37 np0005475493 ceph-mds[95385]: mds.cephfs.compute-0.lphril Can't run that command on an inactive MDS!
Oct  8 06:30:38 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.17544 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Oct  8 06:30:38 np0005475493 ceph-mds[95385]: mds.cephfs.compute-0.lphril asok_command: ops {prefix=ops} (starting...)
Oct  8 06:30:38 np0005475493 ceph-mds[95385]: mds.cephfs.compute-0.lphril Can't run that command on an inactive MDS!
Oct  8 06:30:38 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.27683 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Oct  8 06:30:38 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config-key dump"} v 0)
Oct  8 06:30:38 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/405516947' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Oct  8 06:30:38 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.27361 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Oct  8 06:30:38 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0)
Oct  8 06:30:38 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/803082500' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Oct  8 06:30:38 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.17565 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct  8 06:30:38 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "features"} v 0)
Oct  8 06:30:38 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Oct  8 06:30:38 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.27385 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct  8 06:30:38 np0005475493 ceph-mds[95385]: mds.cephfs.compute-0.lphril asok_command: session ls {prefix=session ls} (starting...)
Oct  8 06:30:38 np0005475493 ceph-mds[95385]: mds.cephfs.compute-0.lphril Can't run that command on an inactive MDS!
Oct  8 06:30:38 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:30:38.914Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct  8 06:30:38 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:30:38.915Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:30:38 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.17589 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Oct  8 06:30:38 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr dump"} v 0)
Oct  8 06:30:38 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2080598880' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct  8 06:30:39 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:30:38 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  8 06:30:39 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:30:38 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  8 06:30:39 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:30:38 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 06:30:39 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:30:39 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  8 06:30:39 np0005475493 ceph-mds[95385]: mds.cephfs.compute-0.lphril asok_command: status {prefix=status} (starting...)
Oct  8 06:30:39 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:30:39 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:30:39 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:30:39.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:30:39 np0005475493 nova_compute[262220]: 2025-10-08 10:30:39.261 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:30:39 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.27409 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Oct  8 06:30:39 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1387: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  8 06:30:39 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Oct  8 06:30:39 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2740380572' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct  8 06:30:39 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "features"} v 0)
Oct  8 06:30:39 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3866329630' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Oct  8 06:30:39 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:30:39 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.27737 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Oct  8 06:30:39 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T10:30:39.640+0000 7fa108681640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Oct  8 06:30:39 np0005475493 ceph-mgr[73869]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Oct  8 06:30:39 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:30:39 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:30:39 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:30:39.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:30:39 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "features"} v 0)
Oct  8 06:30:39 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Oct  8 06:30:39 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Oct  8 06:30:39 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3456496455' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct  8 06:30:39 np0005475493 podman[302023]: 2025-10-08 10:30:39.932240768 +0000 UTC m=+0.081865478 container health_status 1ece418ccdf19a8019e8be8e665c938dc3c333817a211973536e2416f095c311 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct  8 06:30:39 np0005475493 podman[302024]: 2025-10-08 10:30:39.934291864 +0000 UTC m=+0.083891413 container health_status 96c17e36c3e57b043a764fc4d64e20610b8683e4881cc1857643eca7aeb37784 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Oct  8 06:30:39 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0)
Oct  8 06:30:39 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3477387204' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Oct  8 06:30:40 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.17637 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Oct  8 06:30:40 np0005475493 ceph-mgr[73869]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Oct  8 06:30:40 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T10:30:40.359+0000 7fa108681640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Oct  8 06:30:40 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0)
Oct  8 06:30:40 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1520230029' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct  8 06:30:40 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.27788 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Oct  8 06:30:40 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.27466 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Oct  8 06:30:40 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: 2025-10-08T10:30:40.754+0000 7fa108681640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Oct  8 06:30:40 np0005475493 ceph-mgr[73869]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Oct  8 06:30:40 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0)
Oct  8 06:30:40 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3118741586' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Oct  8 06:30:40 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr stat"} v 0)
Oct  8 06:30:40 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1905105844' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Oct  8 06:30:41 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:30:41 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:30:41 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:30:41.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:30:41 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.27803 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct  8 06:30:41 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0)
Oct  8 06:30:41 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2869719441' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Oct  8 06:30:41 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr versions"} v 0)
Oct  8 06:30:41 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4055743888' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct  8 06:30:41 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1388: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  8 06:30:41 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.17685 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Oct  8 06:30:41 np0005475493 nova_compute[262220]: 2025-10-08 10:30:41.612 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:30:41 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:30:41 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:30:41 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:30:41.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:30:41 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr dump"} v 0)
Oct  8 06:30:41 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3778309663' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct  8 06:30:41 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.17697 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Oct  8 06:30:41 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.27836 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Oct  8 06:30:41 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.27511 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Oct  8 06:30:42 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.17706 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct  8 06:30:42 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Oct  8 06:30:42 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1313256790' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct  8 06:30:42 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.27535 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct  8 06:30:42 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.27854 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Oct  8 06:30:42 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.17724 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85237760 unmapped: 2801664 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85237760 unmapped: 2801664 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997669 data_alloc: 218103808 data_used: 167936
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85237760 unmapped: 2801664 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85237760 unmapped: 2801664 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85237760 unmapped: 2801664 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 2793472 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 2793472 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997669 data_alloc: 218103808 data_used: 167936
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 2793472 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 ms_handle_reset con 0x559f2d0e8400 session 0x559f2d953680
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 ms_handle_reset con 0x559f2d0e8800 session 0x559f2dbe94a0
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 2793472 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 2793472 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 2793472 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 2793472 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997669 data_alloc: 218103808 data_used: 167936
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 2793472 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 2793472 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 2793472 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 2793472 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 2793472 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997669 data_alloc: 218103808 data_used: 167936
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 2793472 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 2793472 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 38.191692352s of 38.196037292s, submitted: 1
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 2793472 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 2793472 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 2785280 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999313 data_alloc: 218103808 data_used: 167936
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 2785280 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 2785280 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 2785280 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 2785280 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 2785280 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1000825 data_alloc: 218103808 data_used: 167936
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 2785280 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 2777088 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 2777088 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 2777088 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 2777088 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1000825 data_alloc: 218103808 data_used: 167936
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 2777088 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 2777088 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.237012863s of 15.247964859s, submitted: 3
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 2777088 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 2777088 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 2777088 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1000693 data_alloc: 218103808 data_used: 167936
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 2777088 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 2777088 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 2777088 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 2777088 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 2777088 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1000693 data_alloc: 218103808 data_used: 167936
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 2768896 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 2768896 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 2768896 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 2768896 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 2768896 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1000693 data_alloc: 218103808 data_used: 167936
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 2768896 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 2768896 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 2768896 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 2768896 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 2768896 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1000693 data_alloc: 218103808 data_used: 167936
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 2768896 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 2768896 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 2768896 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 2760704 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 2760704 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1000693 data_alloc: 218103808 data_used: 167936
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 2760704 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 2760704 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 2768896 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 2768896 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 2760704 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 ms_handle_reset con 0x559f2d0e8c00 session 0x559f2d82f2c0
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1000693 data_alloc: 218103808 data_used: 167936
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 2760704 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 2760704 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 2760704 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 2752512 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 2752512 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1000693 data_alloc: 218103808 data_used: 167936
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 2752512 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 2752512 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 2752512 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 2752512 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 2752512 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1000693 data_alloc: 218103808 data_used: 167936
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 2752512 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 39.051769257s of 39.055622101s, submitted: 1
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 2752512 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 2752512 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 ms_handle_reset con 0x559f2d3c4800 session 0x559f2d961680
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 ms_handle_reset con 0x559f2d680c00 session 0x559f2a95b680
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 2752512 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 2752512 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1000825 data_alloc: 218103808 data_used: 167936
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 2752512 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 2760704 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 2760704 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 2760704 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 2760704 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1000825 data_alloc: 218103808 data_used: 167936
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 2760704 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 2760704 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 2760704 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 2752512 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.737722397s of 12.740792274s, submitted: 1
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 2752512 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1000957 data_alloc: 218103808 data_used: 167936
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 2752512 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86335488 unmapped: 1703936 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85295104 unmapped: 2744320 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85295104 unmapped: 2744320 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85295104 unmapped: 2744320 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1000234 data_alloc: 218103808 data_used: 167936
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85295104 unmapped: 2744320 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85295104 unmapped: 2744320 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85295104 unmapped: 2744320 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85295104 unmapped: 2744320 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85295104 unmapped: 2744320 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1000234 data_alloc: 218103808 data_used: 167936
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85295104 unmapped: 2744320 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.218849182s of 12.235140800s, submitted: 3
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85295104 unmapped: 2744320 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85295104 unmapped: 2744320 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85295104 unmapped: 2744320 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85295104 unmapped: 2744320 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1000234 data_alloc: 218103808 data_used: 167936
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85295104 unmapped: 2744320 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 2736128 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 2736128 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 2736128 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 2736128 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1000102 data_alloc: 218103808 data_used: 167936
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 2736128 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 2736128 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 2736128 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 2736128 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 2736128 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1000102 data_alloc: 218103808 data_used: 167936
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 2736128 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85319680 unmapped: 2719744 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85319680 unmapped: 2719744 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85319680 unmapped: 2719744 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85319680 unmapped: 2719744 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1000102 data_alloc: 218103808 data_used: 167936
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85319680 unmapped: 2719744 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85319680 unmapped: 2719744 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85319680 unmapped: 2719744 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85319680 unmapped: 2719744 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 ms_handle_reset con 0x559f2d0e9800 session 0x559f2c4243c0
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 ms_handle_reset con 0x559f2d0e9000 session 0x559f2d82e3c0
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85319680 unmapped: 2719744 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1000102 data_alloc: 218103808 data_used: 167936
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85319680 unmapped: 2719744 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85319680 unmapped: 2719744 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85319680 unmapped: 2719744 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85319680 unmapped: 2719744 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85319680 unmapped: 2719744 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1000102 data_alloc: 218103808 data_used: 167936
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85319680 unmapped: 2719744 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread fragmentation_score=0.000031 took=0.000080s
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85327872 unmapped: 2711552 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85327872 unmapped: 2711552 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85327872 unmapped: 2711552 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85327872 unmapped: 2711552 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1000102 data_alloc: 218103808 data_used: 167936
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 34.856376648s of 34.864582062s, submitted: 2
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85327872 unmapped: 2711552 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85327872 unmapped: 2711552 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 ms_handle_reset con 0x559f2b37e000 session 0x559f2a9a3a40
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85336064 unmapped: 2703360 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85336064 unmapped: 2703360 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85336064 unmapped: 2703360 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001746 data_alloc: 218103808 data_used: 167936
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 2695168 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 2695168 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 2695168 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 2695168 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 2695168 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001746 data_alloc: 218103808 data_used: 167936
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 2695168 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 2695168 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 ms_handle_reset con 0x559f2d0e9000 session 0x559f2d9534a0
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 ms_handle_reset con 0x559f2b37e800 session 0x559f2d9612c0
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 2695168 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.321186066s of 12.326921463s, submitted: 2
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 2695168 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 2695168 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001614 data_alloc: 218103808 data_used: 167936
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 2695168 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 2695168 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 2695168 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 2695168 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 2695168 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001614 data_alloc: 218103808 data_used: 167936
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 2695168 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 2695168 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 2695168 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 2695168 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 2695168 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001746 data_alloc: 218103808 data_used: 167936
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85352448 unmapped: 2686976 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85352448 unmapped: 2686976 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85352448 unmapped: 2686976 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.701647758s of 15.710140228s, submitted: 2
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85360640 unmapped: 2678784 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85360640 unmapped: 2678784 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1003258 data_alloc: 218103808 data_used: 167936
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85360640 unmapped: 2678784 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85360640 unmapped: 2678784 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85360640 unmapped: 2678784 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85360640 unmapped: 2678784 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85360640 unmapped: 2678784 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002667 data_alloc: 218103808 data_used: 167936
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85360640 unmapped: 2678784 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85360640 unmapped: 2678784 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85360640 unmapped: 2678784 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85360640 unmapped: 2678784 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 ms_handle_reset con 0x559f2d0e8800 session 0x559f2a9703c0
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85360640 unmapped: 2678784 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002535 data_alloc: 218103808 data_used: 167936
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85360640 unmapped: 2678784 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85360640 unmapped: 2678784 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85360640 unmapped: 2678784 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85360640 unmapped: 2678784 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85360640 unmapped: 2678784 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002535 data_alloc: 218103808 data_used: 167936
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85360640 unmapped: 2678784 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85360640 unmapped: 2678784 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85360640 unmapped: 2678784 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85360640 unmapped: 2678784 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85368832 unmapped: 2670592 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 21.336950302s of 21.398941040s, submitted: 3
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002667 data_alloc: 218103808 data_used: 167936
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85377024 unmapped: 2662400 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85377024 unmapped: 2662400 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85377024 unmapped: 2662400 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85377024 unmapped: 2662400 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85377024 unmapped: 2662400 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004179 data_alloc: 218103808 data_used: 167936
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85377024 unmapped: 2662400 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 2646016 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 2646016 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 2646016 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 2646016 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1005100 data_alloc: 218103808 data_used: 167936
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 2646016 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 2646016 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 2646016 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 2646016 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 2646016 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.527006149s of 15.556138039s, submitted: 4
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004968 data_alloc: 218103808 data_used: 167936
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 2646016 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 2646016 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 2646016 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85401600 unmapped: 2637824 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85401600 unmapped: 2637824 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004968 data_alloc: 218103808 data_used: 167936
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85401600 unmapped: 2637824 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85401600 unmapped: 2637824 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85401600 unmapped: 2637824 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85401600 unmapped: 2637824 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85401600 unmapped: 2637824 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004968 data_alloc: 218103808 data_used: 167936
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85401600 unmapped: 2637824 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 ms_handle_reset con 0x559f2d5b2000 session 0x559f2cadd2c0
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 ms_handle_reset con 0x559f2d680c00 session 0x559f2d961c20
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85401600 unmapped: 2637824 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85401600 unmapped: 2637824 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85409792 unmapped: 2629632 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85409792 unmapped: 2629632 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004968 data_alloc: 218103808 data_used: 167936
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85409792 unmapped: 2629632 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85409792 unmapped: 2629632 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85409792 unmapped: 2629632 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85409792 unmapped: 2629632 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85409792 unmapped: 2629632 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004968 data_alloc: 218103808 data_used: 167936
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85409792 unmapped: 2629632 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85409792 unmapped: 2629632 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 22.097640991s of 22.100765228s, submitted: 1
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85409792 unmapped: 2629632 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85409792 unmapped: 2629632 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85409792 unmapped: 2629632 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1005100 data_alloc: 218103808 data_used: 167936
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85417984 unmapped: 2621440 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85417984 unmapped: 2621440 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85417984 unmapped: 2621440 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85417984 unmapped: 2621440 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85417984 unmapped: 2621440 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1005100 data_alloc: 218103808 data_used: 167936
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85417984 unmapped: 2621440 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85417984 unmapped: 2621440 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85417984 unmapped: 2621440 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85417984 unmapped: 2621440 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.092028618s of 12.143527031s, submitted: 2
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85417984 unmapped: 2621440 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1003918 data_alloc: 218103808 data_used: 167936
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85417984 unmapped: 2621440 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85417984 unmapped: 2621440 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85417984 unmapped: 2621440 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85417984 unmapped: 2621440 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85417984 unmapped: 2621440 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1003786 data_alloc: 218103808 data_used: 167936
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85417984 unmapped: 2621440 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85417984 unmapped: 2621440 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85426176 unmapped: 2613248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85426176 unmapped: 2613248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85426176 unmapped: 2613248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1003786 data_alloc: 218103808 data_used: 167936
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85426176 unmapped: 2613248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85426176 unmapped: 2613248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85426176 unmapped: 2613248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85426176 unmapped: 2613248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85426176 unmapped: 2613248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1003786 data_alloc: 218103808 data_used: 167936
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85426176 unmapped: 2613248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85434368 unmapped: 2605056 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85434368 unmapped: 2605056 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 ms_handle_reset con 0x559f2d0e8800 session 0x559f2a9550e0
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 ms_handle_reset con 0x559f2b37e800 session 0x559f2d82fe00
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85434368 unmapped: 2605056 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85434368 unmapped: 2605056 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1003786 data_alloc: 218103808 data_used: 167936
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85434368 unmapped: 2605056 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85434368 unmapped: 2605056 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85434368 unmapped: 2605056 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: mgrc ms_handle_reset ms_handle_reset con 0x559f2abaa000
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/3802415056
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/3802415056,v1:192.168.122.100:6801/3802415056]
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: mgrc handle_mgr_configure stats_period=5
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85442560 unmapped: 2596864 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85442560 unmapped: 2596864 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1003786 data_alloc: 218103808 data_used: 167936
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85442560 unmapped: 2596864 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85442560 unmapped: 2596864 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85442560 unmapped: 2596864 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85442560 unmapped: 2596864 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 29.943304062s of 30.003890991s, submitted: 2
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85442560 unmapped: 2596864 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1003918 data_alloc: 218103808 data_used: 167936
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85442560 unmapped: 2596864 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85442560 unmapped: 2596864 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85442560 unmapped: 2596864 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85442560 unmapped: 2596864 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85442560 unmapped: 2596864 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1005430 data_alloc: 218103808 data_used: 167936
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85442560 unmapped: 2596864 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85442560 unmapped: 2596864 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85442560 unmapped: 2596864 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85442560 unmapped: 2596864 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.27550 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85442560 unmapped: 2596864 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1005430 data_alloc: 218103808 data_used: 167936
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85442560 unmapped: 2596864 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85442560 unmapped: 2596864 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85442560 unmapped: 2596864 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85442560 unmapped: 2596864 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85442560 unmapped: 2596864 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1005430 data_alloc: 218103808 data_used: 167936
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85442560 unmapped: 2596864 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85442560 unmapped: 2596864 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 17.837564468s of 17.844263077s, submitted: 2
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 ms_handle_reset con 0x559f2d3c4800 session 0x559f2d82e1e0
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 ms_handle_reset con 0x559f2d0e9800 session 0x559f2d82ef00
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85450752 unmapped: 2588672 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85450752 unmapped: 2588672 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85450752 unmapped: 2588672 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1005298 data_alloc: 218103808 data_used: 167936
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85450752 unmapped: 2588672 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85450752 unmapped: 2588672 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85450752 unmapped: 2588672 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85450752 unmapped: 2588672 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85450752 unmapped: 2588672 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1005298 data_alloc: 218103808 data_used: 167936
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85450752 unmapped: 2588672 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85450752 unmapped: 2588672 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85450752 unmapped: 2588672 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.888109207s of 10.891509056s, submitted: 1
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85450752 unmapped: 2588672 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85450752 unmapped: 2588672 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1005430 data_alloc: 218103808 data_used: 167936
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 85450752 unmapped: 2588672 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86499328 unmapped: 1540096 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86507520 unmapped: 1531904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86507520 unmapped: 1531904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86507520 unmapped: 1531904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1006942 data_alloc: 218103808 data_used: 167936
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86507520 unmapped: 1531904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86507520 unmapped: 1531904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86507520 unmapped: 1531904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86507520 unmapped: 1531904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86507520 unmapped: 1531904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.085538864s of 12.127921104s, submitted: 2
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1006351 data_alloc: 218103808 data_used: 167936
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86507520 unmapped: 1531904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86507520 unmapped: 1531904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86507520 unmapped: 1531904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86507520 unmapped: 1531904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86507520 unmapped: 1531904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1006219 data_alloc: 218103808 data_used: 167936
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86507520 unmapped: 1531904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86507520 unmapped: 1531904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86507520 unmapped: 1531904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86507520 unmapped: 1531904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86507520 unmapped: 1531904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 ms_handle_reset con 0x559f2d680c00 session 0x559f2d5ef2c0
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 ms_handle_reset con 0x559f2d0e9000 session 0x559f2c5c8b40
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1006219 data_alloc: 218103808 data_used: 167936
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86507520 unmapped: 1531904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86507520 unmapped: 1531904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86507520 unmapped: 1531904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86507520 unmapped: 1531904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86507520 unmapped: 1531904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1006219 data_alloc: 218103808 data_used: 167936
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86507520 unmapped: 1531904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86507520 unmapped: 1531904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86507520 unmapped: 1531904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86507520 unmapped: 1531904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86507520 unmapped: 1531904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1006219 data_alloc: 218103808 data_used: 167936
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86507520 unmapped: 1531904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 21.218805313s of 21.227340698s, submitted: 2
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86507520 unmapped: 1531904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86507520 unmapped: 1531904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86507520 unmapped: 1531904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86507520 unmapped: 1531904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1006351 data_alloc: 218103808 data_used: 167936
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86515712 unmapped: 1523712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86515712 unmapped: 1523712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86515712 unmapped: 1523712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86515712 unmapped: 1523712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86515712 unmapped: 1523712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007863 data_alloc: 218103808 data_used: 167936
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86515712 unmapped: 1523712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86515712 unmapped: 1523712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86515712 unmapped: 1523712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.160308838s of 12.177426338s, submitted: 2
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86515712 unmapped: 1523712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86515712 unmapped: 1523712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007272 data_alloc: 218103808 data_used: 167936
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86515712 unmapped: 1523712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86515712 unmapped: 1523712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86515712 unmapped: 1523712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86515712 unmapped: 1523712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86515712 unmapped: 1523712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007140 data_alloc: 218103808 data_used: 167936
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86515712 unmapped: 1523712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86523904 unmapped: 1515520 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 ms_handle_reset con 0x559f2d5b2400 session 0x559f2da1f0e0
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 ms_handle_reset con 0x559f2d5b2000 session 0x559f2a8670e0
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86523904 unmapped: 1515520 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86523904 unmapped: 1515520 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86523904 unmapped: 1515520 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007140 data_alloc: 218103808 data_used: 167936
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86523904 unmapped: 1515520 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86523904 unmapped: 1515520 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86532096 unmapped: 1507328 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86532096 unmapped: 1507328 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86532096 unmapped: 1507328 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007140 data_alloc: 218103808 data_used: 167936
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86532096 unmapped: 1507328 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86532096 unmapped: 1507328 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86532096 unmapped: 1507328 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 20.320930481s of 20.461774826s, submitted: 2
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86515712 unmapped: 1523712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86515712 unmapped: 1523712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007272 data_alloc: 218103808 data_used: 167936
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86515712 unmapped: 1523712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 1200.1 total, 600.0 interval
Cumulative writes: 9009 writes, 35K keys, 9009 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
Cumulative WAL: 9009 writes, 1887 syncs, 4.77 writes per sync, written: 0.02 GB, 0.02 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 764 writes, 1222 keys, 764 commit groups, 1.0 writes per commit group, ingest: 0.41 MB, 0.00 MB/s
Interval WAL: 764 writes, 362 syncs, 2.11 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x559f28fb7350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-0] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x559f28fb7350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86548480 unmapped: 1490944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86548480 unmapped: 1490944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86548480 unmapped: 1490944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86548480 unmapped: 1490944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1008784 data_alloc: 218103808 data_used: 167936
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86548480 unmapped: 1490944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86548480 unmapped: 1490944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86548480 unmapped: 1490944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.007425308s of 10.107902527s, submitted: 3
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86548480 unmapped: 1490944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86548480 unmapped: 1490944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1009705 data_alloc: 218103808 data_used: 167936
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86548480 unmapped: 1490944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86548480 unmapped: 1490944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86556672 unmapped: 1482752 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86556672 unmapped: 1482752 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86556672 unmapped: 1482752 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1009573 data_alloc: 218103808 data_used: 167936
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86556672 unmapped: 1482752 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86556672 unmapped: 1482752 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86556672 unmapped: 1482752 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86556672 unmapped: 1482752 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86556672 unmapped: 1482752 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1009573 data_alloc: 218103808 data_used: 167936
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86556672 unmapped: 1482752 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86556672 unmapped: 1482752 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86556672 unmapped: 1482752 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86556672 unmapped: 1482752 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86556672 unmapped: 1482752 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1009573 data_alloc: 218103808 data_used: 167936
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86556672 unmapped: 1482752 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86564864 unmapped: 1474560 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86564864 unmapped: 1474560 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86564864 unmapped: 1474560 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86564864 unmapped: 1474560 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1009573 data_alloc: 218103808 data_used: 167936
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86564864 unmapped: 1474560 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86564864 unmapped: 1474560 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86573056 unmapped: 1466368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86573056 unmapped: 1466368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86573056 unmapped: 1466368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1009573 data_alloc: 218103808 data_used: 167936
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86573056 unmapped: 1466368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86573056 unmapped: 1466368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86573056 unmapped: 1466368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86573056 unmapped: 1466368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86573056 unmapped: 1466368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1009573 data_alloc: 218103808 data_used: 167936
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86573056 unmapped: 1466368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86573056 unmapped: 1466368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86573056 unmapped: 1466368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 ms_handle_reset con 0x559f2d0e8400 session 0x559f2d70cb40
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 ms_handle_reset con 0x559f2b37f400 session 0x559f2d5ee1e0
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86573056 unmapped: 1466368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86573056 unmapped: 1466368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1009573 data_alloc: 218103808 data_used: 167936
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86573056 unmapped: 1466368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86581248 unmapped: 1458176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86581248 unmapped: 1458176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86581248 unmapped: 1458176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86581248 unmapped: 1458176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1009573 data_alloc: 218103808 data_used: 167936
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86581248 unmapped: 1458176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86581248 unmapped: 1458176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86581248 unmapped: 1458176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86581248 unmapped: 1458176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 45.391696930s of 45.435684204s, submitted: 2
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86581248 unmapped: 1458176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1009705 data_alloc: 218103808 data_used: 167936
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86581248 unmapped: 1458176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86581248 unmapped: 1458176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86581248 unmapped: 1458176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86581248 unmapped: 1458176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86581248 unmapped: 1458176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1011217 data_alloc: 218103808 data_used: 167936
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86589440 unmapped: 1449984 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86597632 unmapped: 1441792 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86597632 unmapped: 1441792 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86597632 unmapped: 1441792 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86597632 unmapped: 1441792 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1010035 data_alloc: 218103808 data_used: 167936
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86597632 unmapped: 1441792 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86597632 unmapped: 1441792 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.340482712s of 13.399305344s, submitted: 4
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86597632 unmapped: 1441792 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86597632 unmapped: 1441792 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86597632 unmapped: 1441792 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1009903 data_alloc: 218103808 data_used: 167936
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86597632 unmapped: 1441792 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86597632 unmapped: 1441792 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86597632 unmapped: 1441792 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86597632 unmapped: 1441792 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86597632 unmapped: 1441792 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1009903 data_alloc: 218103808 data_used: 167936
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86597632 unmapped: 1441792 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86597632 unmapped: 1441792 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86597632 unmapped: 1441792 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86597632 unmapped: 1441792 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86597632 unmapped: 1441792 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1009903 data_alloc: 218103808 data_used: 167936
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86597632 unmapped: 1441792 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.399309158s of 14.402190208s, submitted: 1
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86597632 unmapped: 1441792 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86638592 unmapped: 1400832 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [0,0,0,0,0,0,4])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86581248 unmapped: 1458176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [0,0,0,0,0,0,1,2])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86614016 unmapped: 1425408 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1009975 data_alloc: 218103808 data_used: 167936
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86794240 unmapped: 2293760 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86827008 unmapped: 2260992 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86827008 unmapped: 2260992 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86827008 unmapped: 2260992 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86827008 unmapped: 2260992 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1009903 data_alloc: 218103808 data_used: 167936
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86827008 unmapped: 2260992 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86827008 unmapped: 2260992 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86827008 unmapped: 2260992 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86827008 unmapped: 2260992 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86827008 unmapped: 2260992 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1009903 data_alloc: 218103808 data_used: 167936
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86827008 unmapped: 2260992 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86827008 unmapped: 2260992 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86827008 unmapped: 2260992 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86827008 unmapped: 2260992 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86827008 unmapped: 2260992 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1009903 data_alloc: 218103808 data_used: 167936
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86827008 unmapped: 2260992 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86827008 unmapped: 2260992 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86827008 unmapped: 2260992 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86835200 unmapped: 2252800 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86835200 unmapped: 2252800 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1009903 data_alloc: 218103808 data_used: 167936
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86835200 unmapped: 2252800 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86835200 unmapped: 2252800 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86835200 unmapped: 2252800 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86835200 unmapped: 2252800 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86835200 unmapped: 2252800 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1009903 data_alloc: 218103808 data_used: 167936
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86835200 unmapped: 2252800 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86835200 unmapped: 2252800 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86835200 unmapped: 2252800 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86835200 unmapped: 2252800 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86835200 unmapped: 2252800 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1009903 data_alloc: 218103808 data_used: 167936
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86835200 unmapped: 2252800 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86835200 unmapped: 2252800 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86835200 unmapped: 2252800 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86835200 unmapped: 2252800 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86835200 unmapped: 2252800 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Oct  8 06:30:42 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4046998603' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1009903 data_alloc: 218103808 data_used: 167936
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86835200 unmapped: 2252800 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86835200 unmapped: 2252800 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86835200 unmapped: 2252800 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86835200 unmapped: 2252800 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86835200 unmapped: 2252800 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1009903 data_alloc: 218103808 data_used: 167936
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86835200 unmapped: 2252800 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86835200 unmapped: 2252800 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86835200 unmapped: 2252800 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 ms_handle_reset con 0x559f2b37e800 session 0x559f2dc09680
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 ms_handle_reset con 0x559f2d680c00 session 0x559f2d9612c0
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86835200 unmapped: 2252800 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86835200 unmapped: 2252800 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1009903 data_alloc: 218103808 data_used: 167936
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86835200 unmapped: 2252800 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86835200 unmapped: 2252800 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86835200 unmapped: 2252800 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86835200 unmapped: 2252800 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86843392 unmapped: 2244608 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1009903 data_alloc: 218103808 data_used: 167936
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86843392 unmapped: 2244608 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 ms_handle_reset con 0x559f2d0e8400 session 0x559f2c8afe00
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 ms_handle_reset con 0x559f2b37f400 session 0x559f2d953a40
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86843392 unmapped: 2244608 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86843392 unmapped: 2244608 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 53.254104614s of 57.032154083s, submitted: 332
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86843392 unmapped: 2244608 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86843392 unmapped: 2244608 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1010035 data_alloc: 218103808 data_used: 167936
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86843392 unmapped: 2244608 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86843392 unmapped: 2244608 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86843392 unmapped: 2244608 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86843392 unmapped: 2244608 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86843392 unmapped: 2244608 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1010035 data_alloc: 218103808 data_used: 167936
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86843392 unmapped: 2244608 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86843392 unmapped: 2244608 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86843392 unmapped: 2244608 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86843392 unmapped: 2244608 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86843392 unmapped: 2244608 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1010167 data_alloc: 218103808 data_used: 167936
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86843392 unmapped: 2244608 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86843392 unmapped: 2244608 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86843392 unmapped: 2244608 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86843392 unmapped: 2244608 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86843392 unmapped: 2244608 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.976808548s of 16.986804962s, submitted: 2
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1010035 data_alloc: 218103808 data_used: 167936
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86843392 unmapped: 2244608 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86843392 unmapped: 2244608 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 ms_handle_reset con 0x559f2d0e9800 session 0x559f2d70de00
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 ms_handle_reset con 0x559f2d0e8800 session 0x559f2d554960
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86843392 unmapped: 2244608 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86843392 unmapped: 2244608 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86843392 unmapped: 2244608 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1010035 data_alloc: 218103808 data_used: 167936
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86843392 unmapped: 2244608 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86843392 unmapped: 2244608 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86843392 unmapped: 2244608 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86851584 unmapped: 2236416 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86851584 unmapped: 2236416 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1009903 data_alloc: 218103808 data_used: 167936
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86851584 unmapped: 2236416 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86851584 unmapped: 2236416 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.073468208s of 12.254982948s, submitted: 3
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86851584 unmapped: 2236416 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86851584 unmapped: 2236416 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86851584 unmapped: 2236416 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1010035 data_alloc: 218103808 data_used: 167936
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86851584 unmapped: 2236416 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86851584 unmapped: 2236416 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86851584 unmapped: 2236416 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86851584 unmapped: 2236416 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86851584 unmapped: 2236416 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1011547 data_alloc: 218103808 data_used: 167936
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86851584 unmapped: 2236416 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86851584 unmapped: 2236416 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86859776 unmapped: 2228224 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86859776 unmapped: 2228224 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86859776 unmapped: 2228224 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1011547 data_alloc: 218103808 data_used: 167936
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86859776 unmapped: 2228224 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86859776 unmapped: 2228224 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86859776 unmapped: 2228224 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.275589943s of 15.385351181s, submitted: 2
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86876160 unmapped: 2211840 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86876160 unmapped: 2211840 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1011415 data_alloc: 218103808 data_used: 167936
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86876160 unmapped: 2211840 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86876160 unmapped: 2211840 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86876160 unmapped: 2211840 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86876160 unmapped: 2211840 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86876160 unmapped: 2211840 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1011415 data_alloc: 218103808 data_used: 167936
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86876160 unmapped: 2211840 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86876160 unmapped: 2211840 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86876160 unmapped: 2211840 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86876160 unmapped: 2211840 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86876160 unmapped: 2211840 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1011415 data_alloc: 218103808 data_used: 167936
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86876160 unmapped: 2211840 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86876160 unmapped: 2211840 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86876160 unmapped: 2211840 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86876160 unmapped: 2211840 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86876160 unmapped: 2211840 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1011415 data_alloc: 218103808 data_used: 167936
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86876160 unmapped: 2211840 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86876160 unmapped: 2211840 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86876160 unmapped: 2211840 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86876160 unmapped: 2211840 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86876160 unmapped: 2211840 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1011415 data_alloc: 218103808 data_used: 167936
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86876160 unmapped: 2211840 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86876160 unmapped: 2211840 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86876160 unmapped: 2211840 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86876160 unmapped: 2211840 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86876160 unmapped: 2211840 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1011415 data_alloc: 218103808 data_used: 167936
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86876160 unmapped: 2211840 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86876160 unmapped: 2211840 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 ms_handle_reset con 0x559f2d5b2400 session 0x559f2d82e960
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 ms_handle_reset con 0x559f2d5b2000 session 0x559f2d82ef00
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86876160 unmapped: 2211840 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86876160 unmapped: 2211840 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86876160 unmapped: 2211840 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1011415 data_alloc: 218103808 data_used: 167936
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86876160 unmapped: 2211840 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86876160 unmapped: 2211840 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86876160 unmapped: 2211840 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86876160 unmapped: 2211840 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86876160 unmapped: 2211840 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1011415 data_alloc: 218103808 data_used: 167936
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86876160 unmapped: 2211840 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86876160 unmapped: 2211840 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86876160 unmapped: 2211840 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 40.478878021s of 40.551963806s, submitted: 1
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86884352 unmapped: 2203648 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86884352 unmapped: 2203648 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1011547 data_alloc: 218103808 data_used: 167936
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86884352 unmapped: 2203648 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86884352 unmapped: 2203648 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86884352 unmapped: 2203648 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86892544 unmapped: 2195456 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86900736 unmapped: 2187264 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013059 data_alloc: 218103808 data_used: 167936
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86900736 unmapped: 2187264 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86900736 unmapped: 2187264 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86900736 unmapped: 2187264 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86900736 unmapped: 2187264 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86900736 unmapped: 2187264 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.107625008s of 12.130958557s, submitted: 2
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012468 data_alloc: 218103808 data_used: 167936
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86900736 unmapped: 2187264 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86900736 unmapped: 2187264 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86900736 unmapped: 2187264 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86900736 unmapped: 2187264 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86900736 unmapped: 2187264 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012336 data_alloc: 218103808 data_used: 167936
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86900736 unmapped: 2187264 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86900736 unmapped: 2187264 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86900736 unmapped: 2187264 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86900736 unmapped: 2187264 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86900736 unmapped: 2187264 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012336 data_alloc: 218103808 data_used: 167936
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86900736 unmapped: 2187264 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86900736 unmapped: 2187264 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86900736 unmapped: 2187264 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86900736 unmapped: 2187264 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86900736 unmapped: 2187264 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012336 data_alloc: 218103808 data_used: 167936
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86900736 unmapped: 2187264 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86900736 unmapped: 2187264 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86900736 unmapped: 2187264 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86900736 unmapped: 2187264 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86900736 unmapped: 2187264 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012336 data_alloc: 218103808 data_used: 167936
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86900736 unmapped: 2187264 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc64d000/0x0/0x4ffc00000, data 0x103bff/0x1bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86900736 unmapped: 2187264 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86900736 unmapped: 2187264 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 148 handle_osd_map epochs [149,149], i have 148, src has [1,149]
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 23.331020355s of 23.338811874s, submitted: 2
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86908928 unmapped: 2179072 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 149 handle_osd_map epochs [150,150], i have 149, src has [1,150]
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86933504 unmapped: 2154496 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1021781 data_alloc: 218103808 data_used: 167936
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 150 handle_osd_map epochs [150,151], i have 150, src has [1,151]
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 151 handle_osd_map epochs [151,151], i have 151, src has [1,151]
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x107e4e/0x1c6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [0,0,0,0,1])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86941696 unmapped: 2146304 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 151 ms_handle_reset con 0x559f2d0e9800 session 0x559f2d952960
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86941696 unmapped: 2146304 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 151 ms_handle_reset con 0x559f2d0e8400 session 0x559f2d5ee1e0
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 151 ms_handle_reset con 0x559f2b37f400 session 0x559f2d5ef2c0
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86974464 unmapped: 18898944 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 151 handle_osd_map epochs [152,152], i have 151, src has [1,152]
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86974464 unmapped: 18898944 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 152 ms_handle_reset con 0x559f2d680c00 session 0x559f2d555680
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 152 heartbeat osd_stat(store_statfs(0x4fbe3e000/0x0/0x4ffc00000, data 0x90c0a4/0x9ce000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86974464 unmapped: 18898944 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1083662 data_alloc: 218103808 data_used: 176128
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86974464 unmapped: 18898944 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86974464 unmapped: 18898944 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 152 handle_osd_map epochs [152,153], i have 152, src has [1,153]
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86982656 unmapped: 18890752 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86982656 unmapped: 18890752 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 153 heartbeat osd_stat(store_statfs(0x4fbe3a000/0x0/0x4ffc00000, data 0x90e076/0x9d1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86982656 unmapped: 18890752 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1087260 data_alloc: 218103808 data_used: 176128
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86982656 unmapped: 18890752 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86982656 unmapped: 18890752 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86982656 unmapped: 18890752 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 153 heartbeat osd_stat(store_statfs(0x4fbe3a000/0x0/0x4ffc00000, data 0x90e076/0x9d1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.067012787s of 14.482573509s, submitted: 64
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86982656 unmapped: 18890752 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86982656 unmapped: 18890752 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1087392 data_alloc: 218103808 data_used: 176128
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86982656 unmapped: 18890752 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86982656 unmapped: 18890752 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86982656 unmapped: 18890752 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 153 heartbeat osd_stat(store_statfs(0x4fbe3b000/0x0/0x4ffc00000, data 0x90e076/0x9d1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86982656 unmapped: 18890752 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 18882560 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1089576 data_alloc: 218103808 data_used: 176128
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 18882560 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 18882560 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 153 heartbeat osd_stat(store_statfs(0x4fbe3b000/0x0/0x4ffc00000, data 0x90e076/0x9d1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 18882560 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 18882560 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 18882560 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.071710587s of 12.114167213s, submitted: 3
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1088985 data_alloc: 218103808 data_used: 176128
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 88039424 unmapped: 17833984 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 88039424 unmapped: 17833984 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 153 heartbeat osd_stat(store_statfs(0x4fbe3b000/0x0/0x4ffc00000, data 0x90e076/0x9d1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 88039424 unmapped: 17833984 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 87007232 unmapped: 18866176 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 87007232 unmapped: 18866176 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1088853 data_alloc: 218103808 data_used: 176128
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 87007232 unmapped: 18866176 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 87007232 unmapped: 18866176 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 87007232 unmapped: 18866176 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 153 heartbeat osd_stat(store_statfs(0x4fbe3b000/0x0/0x4ffc00000, data 0x90e076/0x9d1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 87007232 unmapped: 18866176 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 87007232 unmapped: 18866176 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1088853 data_alloc: 218103808 data_used: 176128
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 87007232 unmapped: 18866176 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 87007232 unmapped: 18866176 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 87015424 unmapped: 18857984 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 153 heartbeat osd_stat(store_statfs(0x4fbe3b000/0x0/0x4ffc00000, data 0x90e076/0x9d1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 87015424 unmapped: 18857984 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 87015424 unmapped: 18857984 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1088853 data_alloc: 218103808 data_used: 176128
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 87015424 unmapped: 18857984 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 87015424 unmapped: 18857984 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 153 ms_handle_reset con 0x559f2d5b2000 session 0x559f2d6370e0
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 153 ms_handle_reset con 0x559f2d5b2400 session 0x559f2dbe8b40
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 153 ms_handle_reset con 0x559f2d3c4800 session 0x559f2c5dfc20
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 87023616 unmapped: 18849792 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 153 ms_handle_reset con 0x559f2d5b2800 session 0x559f2a866000
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 153 heartbeat osd_stat(store_statfs(0x4fbe3b000/0x0/0x4ffc00000, data 0x90e076/0x9d1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 153 ms_handle_reset con 0x559f2d3c4800 session 0x559f2a975680
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 87023616 unmapped: 18849792 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 153 ms_handle_reset con 0x559f2d5b2000 session 0x559f2a95bc20
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 87023616 unmapped: 18849792 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 153 handle_osd_map epochs [153,154], i have 153, src has [1,154]
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 20.177728653s of 20.183889389s, submitted: 2
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1092771 data_alloc: 218103808 data_used: 180224
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 87023616 unmapped: 18849792 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 154 handle_osd_map epochs [154,155], i have 154, src has [1,155]
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 155 ms_handle_reset con 0x559f2d5b2400 session 0x559f2b6512c0
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 155 ms_handle_reset con 0x559f2d680c00 session 0x559f2a9a3e00
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 155 ms_handle_reset con 0x559f2d5b2c00 session 0x559f2d960780
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 155 ms_handle_reset con 0x559f2d3c4800 session 0x559f2d82fa40
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 155 ms_handle_reset con 0x559f2d5b2000 session 0x559f2d554780
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 87269376 unmapped: 18604032 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 87269376 unmapped: 18604032 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 155 heartbeat osd_stat(store_statfs(0x4fb528000/0x0/0x4ffc00000, data 0x121c314/0x12e3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 87277568 unmapped: 18595840 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 87277568 unmapped: 18595840 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 155 ms_handle_reset con 0x559f2d5b2400 session 0x559f2dbe81e0
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1165866 data_alloc: 218103808 data_used: 180224
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 87302144 unmapped: 18571264 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 87318528 unmapped: 18554880 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 155 handle_osd_map epochs [156,156], i have 155, src has [1,156]
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fb528000/0x0/0x4ffc00000, data 0x121c337/0x12e4000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 95657984 unmapped: 10215424 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 97189888 unmapped: 8683520 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 97189888 unmapped: 8683520 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1233556 data_alloc: 234881024 data_used: 9666560
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 97206272 unmapped: 8667136 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fb524000/0x0/0x4ffc00000, data 0x121e309/0x12e7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 97239040 unmapped: 8634368 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 97239040 unmapped: 8634368 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 97239040 unmapped: 8634368 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 97239040 unmapped: 8634368 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1233556 data_alloc: 234881024 data_used: 9666560
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 97239040 unmapped: 8634368 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fb524000/0x0/0x4ffc00000, data 0x121e309/0x12e7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 97239040 unmapped: 8634368 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 17.378948212s of 17.598480225s, submitted: 58
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 97239040 unmapped: 8634368 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 103514112 unmapped: 8765440 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 102342656 unmapped: 9936896 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1346036 data_alloc: 234881024 data_used: 10461184
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 102342656 unmapped: 9936896 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 102498304 unmapped: 9781248 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f95a7000/0x0/0x4ffc00000, data 0x1ffc309/0x20c5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f95a7000/0x0/0x4ffc00000, data 0x1ffc309/0x20c5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 102506496 unmapped: 9773056 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d0e8400 session 0x559f2da1f0e0
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d0e9000 session 0x559f2d555e00
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 102506496 unmapped: 9773056 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 102506496 unmapped: 9773056 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1346036 data_alloc: 234881024 data_used: 10461184
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 102506496 unmapped: 9773056 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 102506496 unmapped: 9773056 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 102506496 unmapped: 9773056 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f95a7000/0x0/0x4ffc00000, data 0x1ffc309/0x20c5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 102547456 unmapped: 9732096 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 102547456 unmapped: 9732096 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1346948 data_alloc: 234881024 data_used: 10530816
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 102547456 unmapped: 9732096 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f95a7000/0x0/0x4ffc00000, data 0x1ffc309/0x20c5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 102547456 unmapped: 9732096 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 102555648 unmapped: 9723904 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 102555648 unmapped: 9723904 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.637916565s of 16.192432404s, submitted: 74
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 102555648 unmapped: 9723904 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1347080 data_alloc: 234881024 data_used: 10530816
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 102555648 unmapped: 9723904 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d0e9000 session 0x559f2a9543c0
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d3c4800 session 0x559f2d953a40
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d5b2000 session 0x559f2d952960
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d5b2400 session 0x559f2d82e960
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d0e9800 session 0x559f2d554b40
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2b37f400 session 0x559f2a954b40
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 102547456 unmapped: 9732096 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f95a7000/0x0/0x4ffc00000, data 0x1ffc309/0x20c5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d5b3000 session 0x559f2d82fe00
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f95a7000/0x0/0x4ffc00000, data 0x1ffc309/0x20c5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2b37f400 session 0x559f2d82ef00
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 102604800 unmapped: 9674752 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d0e9000 session 0x559f2d554960
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d0e9800 session 0x559f2dbe90e0
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d3c4800 session 0x559f2d6370e0
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d3c4800 session 0x559f2c5fc960
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2b37f400 session 0x559f2a9a3e00
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 103178240 unmapped: 9101312 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9409000/0x0/0x4ffc00000, data 0x2199319/0x2263000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 103178240 unmapped: 9101312 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d0e9800 session 0x559f2d82e000
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1367280 data_alloc: 234881024 data_used: 10534912
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 103178240 unmapped: 9101312 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 103178240 unmapped: 9101312 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d5b3000 session 0x559f2c424000
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 103178240 unmapped: 9101312 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d5b2000 session 0x559f2cbf7680
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2b37f400 session 0x559f2d70d2c0
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 102727680 unmapped: 9551872 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 102727680 unmapped: 9551872 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9408000/0x0/0x4ffc00000, data 0x219933c/0x2264000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1378901 data_alloc: 234881024 data_used: 11943936
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 103833600 unmapped: 8445952 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 103833600 unmapped: 8445952 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.715806007s of 13.765681267s, submitted: 16
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 103833600 unmapped: 8445952 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 103833600 unmapped: 8445952 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 103841792 unmapped: 8437760 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9408000/0x0/0x4ffc00000, data 0x219933c/0x2264000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1378853 data_alloc: 234881024 data_used: 11948032
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 103874560 unmapped: 8404992 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 103874560 unmapped: 8404992 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9408000/0x0/0x4ffc00000, data 0x219933c/0x2264000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 103907328 unmapped: 8372224 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 103907328 unmapped: 8372224 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 103907328 unmapped: 8372224 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1379774 data_alloc: 234881024 data_used: 11948032
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 103907328 unmapped: 8372224 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9408000/0x0/0x4ffc00000, data 0x219933c/0x2264000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 103907328 unmapped: 8372224 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.078499794s of 10.066446304s, submitted: 47
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108421120 unmapped: 3858432 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108511232 unmapped: 3768320 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108044288 unmapped: 4235264 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1423758 data_alloc: 234881024 data_used: 13017088
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108044288 unmapped: 4235264 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108044288 unmapped: 4235264 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f8e52000/0x0/0x4ffc00000, data 0x274f33c/0x281a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f8e52000/0x0/0x4ffc00000, data 0x274f33c/0x281a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108044288 unmapped: 4235264 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108044288 unmapped: 4235264 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108044288 unmapped: 4235264 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1422330 data_alloc: 234881024 data_used: 13017088
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108322816 unmapped: 3956736 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108322816 unmapped: 3956736 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f8e31000/0x0/0x4ffc00000, data 0x277033c/0x283b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d0e9800 session 0x559f2a974000
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.945456505s of 10.029915810s, submitted: 20
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 106733568 unmapped: 5545984 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d5b2400 session 0x559f2d8d0960
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 106741760 unmapped: 5537792 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f95a6000/0x0/0x4ffc00000, data 0x1ffc309/0x20c5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 106741760 unmapped: 5537792 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1352840 data_alloc: 234881024 data_used: 10534912
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 106741760 unmapped: 5537792 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f95a6000/0x0/0x4ffc00000, data 0x1ffc309/0x20c5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 106741760 unmapped: 5537792 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 106741760 unmapped: 5537792 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 106741760 unmapped: 5537792 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 106741760 unmapped: 5537792 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1352840 data_alloc: 234881024 data_used: 10534912
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 106741760 unmapped: 5537792 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 106741760 unmapped: 5537792 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d5b2c00 session 0x559f2d555c20
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d680c00 session 0x559f2c36be00
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d680c00 session 0x559f2d5ef4a0
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f95a7000/0x0/0x4ffc00000, data 0x1ffc309/0x20c5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 100761600 unmapped: 11517952 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4faa3c000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 100147200 unmapped: 12132352 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4faa3c000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4faa3c000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 100147200 unmapped: 12132352 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1118844 data_alloc: 218103808 data_used: 180224
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 100147200 unmapped: 12132352 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 100147200 unmapped: 12132352 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 100147200 unmapped: 12132352 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 100147200 unmapped: 12132352 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 100147200 unmapped: 12132352 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4faa3c000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1118844 data_alloc: 218103808 data_used: 180224
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 100147200 unmapped: 12132352 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 100147200 unmapped: 12132352 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 100147200 unmapped: 12132352 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 100147200 unmapped: 12132352 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4faa3c000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 100147200 unmapped: 12132352 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1118844 data_alloc: 218103808 data_used: 180224
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 100147200 unmapped: 12132352 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 100147200 unmapped: 12132352 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 100147200 unmapped: 12132352 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4faa3c000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 100147200 unmapped: 12132352 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 100147200 unmapped: 12132352 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4faa3c000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1118844 data_alloc: 218103808 data_used: 180224
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 100147200 unmapped: 12132352 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 100147200 unmapped: 12132352 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 100147200 unmapped: 12132352 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 100147200 unmapped: 12132352 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4faa3c000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 100147200 unmapped: 12132352 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1118844 data_alloc: 218103808 data_used: 180224
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4faa3c000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 100147200 unmapped: 12132352 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 100147200 unmapped: 12132352 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 100147200 unmapped: 12132352 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2b37f400 session 0x559f2c8b0b40
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d0e9800 session 0x559f2c8b03c0
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d5b2400 session 0x559f2c8b0d20
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d5b2c00 session 0x559f2b2d8b40
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 100147200 unmapped: 12132352 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2b37f400 session 0x559f2b2d8000
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4faa3c000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d0e9800 session 0x559f2a999e00
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 37.161369324s of 37.341365814s, submitted: 63
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 100147200 unmapped: 12132352 heap: 112279552 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d5b2400 session 0x559f2a9983c0
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d680c00 session 0x559f2a996b40
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d5b3400 session 0x559f2a9974a0
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d5b3400 session 0x559f2a958960
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2b37f400 session 0x559f2a9583c0
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1198144 data_alloc: 218103808 data_used: 180224
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 100499456 unmapped: 26476544 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 100499456 unmapped: 26476544 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa225000/0x0/0x4ffc00000, data 0x1380284/0x1447000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 100499456 unmapped: 26476544 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d0e9800 session 0x559f2a9703c0
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 100499456 unmapped: 26476544 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 100499456 unmapped: 26476544 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d5b2400 session 0x559f2d5ef4a0
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1198144 data_alloc: 218103808 data_used: 180224
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 100499456 unmapped: 26476544 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d680c00 session 0x559f2d5ee5a0
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d680c00 session 0x559f2d5ee1e0
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 100220928 unmapped: 26755072 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 100220928 unmapped: 26755072 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d5b2000 session 0x559f2b2d92c0
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d3c4800 session 0x559f2d82fa40
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa224000/0x0/0x4ffc00000, data 0x1380294/0x1448000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 100253696 unmapped: 26722304 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa224000/0x0/0x4ffc00000, data 0x1380294/0x1448000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 104652800 unmapped: 22323200 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1272627 data_alloc: 234881024 data_used: 10821632
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 104652800 unmapped: 22323200 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 104652800 unmapped: 22323200 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 104652800 unmapped: 22323200 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa224000/0x0/0x4ffc00000, data 0x1380294/0x1448000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 104652800 unmapped: 22323200 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 104652800 unmapped: 22323200 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1272627 data_alloc: 234881024 data_used: 10821632
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 104652800 unmapped: 22323200 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 104652800 unmapped: 22323200 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 104652800 unmapped: 22323200 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 104652800 unmapped: 22323200 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa224000/0x0/0x4ffc00000, data 0x1380294/0x1448000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 19.155124664s of 19.309776306s, submitted: 20
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107364352 unmapped: 19611648 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1299469 data_alloc: 234881024 data_used: 11239424
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 109273088 unmapped: 17702912 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 109273088 unmapped: 17702912 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9ea1000/0x0/0x4ffc00000, data 0x16ed294/0x17b5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 109305856 unmapped: 17670144 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 109305856 unmapped: 17670144 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 109305856 unmapped: 17670144 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1311047 data_alloc: 234881024 data_used: 11096064
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 109305856 unmapped: 17670144 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9e96000/0x0/0x4ffc00000, data 0x170e294/0x17d6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108470272 unmapped: 18505728 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108470272 unmapped: 18505728 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108478464 unmapped: 18497536 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9e96000/0x0/0x4ffc00000, data 0x170e294/0x17d6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108478464 unmapped: 18497536 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1303199 data_alloc: 234881024 data_used: 11096064
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108478464 unmapped: 18497536 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9e96000/0x0/0x4ffc00000, data 0x170e294/0x17d6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108478464 unmapped: 18497536 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.776124001s of 13.241639137s, submitted: 70
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108478464 unmapped: 18497536 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108527616 unmapped: 18448384 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108527616 unmapped: 18448384 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1303147 data_alloc: 234881024 data_used: 11096064
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9e90000/0x0/0x4ffc00000, data 0x1714294/0x17dc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108535808 unmapped: 18440192 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9e90000/0x0/0x4ffc00000, data 0x1714294/0x17dc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108535808 unmapped: 18440192 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108535808 unmapped: 18440192 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9e90000/0x0/0x4ffc00000, data 0x1714294/0x17dc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108544000 unmapped: 18432000 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108552192 unmapped: 18423808 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1303235 data_alloc: 234881024 data_used: 11096064
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108552192 unmapped: 18423808 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9e8d000/0x0/0x4ffc00000, data 0x1717294/0x17df000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108552192 unmapped: 18423808 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108552192 unmapped: 18423808 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9e8d000/0x0/0x4ffc00000, data 0x1717294/0x17df000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108552192 unmapped: 18423808 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9e8d000/0x0/0x4ffc00000, data 0x1717294/0x17df000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108552192 unmapped: 18423808 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.900504112s of 12.918242455s, submitted: 5
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1304083 data_alloc: 234881024 data_used: 11104256
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108666880 unmapped: 18309120 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108666880 unmapped: 18309120 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9e82000/0x0/0x4ffc00000, data 0x1722294/0x17ea000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d0e9800 session 0x559f2c5c8f00
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d5b2400 session 0x559f2cc785a0
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108666880 unmapped: 18309120 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d0e9800 session 0x559f2a996000
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 101646336 unmapped: 25329664 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 101646336 unmapped: 25329664 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1128836 data_alloc: 218103808 data_used: 180224
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 101646336 unmapped: 25329664 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fac92000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 101646336 unmapped: 25329664 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 101646336 unmapped: 25329664 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 101646336 unmapped: 25329664 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 101646336 unmapped: 25329664 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1128836 data_alloc: 218103808 data_used: 180224
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 101646336 unmapped: 25329664 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fac92000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 101646336 unmapped: 25329664 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 101646336 unmapped: 25329664 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 101646336 unmapped: 25329664 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 101646336 unmapped: 25329664 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1128836 data_alloc: 218103808 data_used: 180224
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 101646336 unmapped: 25329664 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fac92000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 101646336 unmapped: 25329664 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 101646336 unmapped: 25329664 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 101646336 unmapped: 25329664 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 101646336 unmapped: 25329664 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1128836 data_alloc: 218103808 data_used: 180224
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 101646336 unmapped: 25329664 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fac92000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 101646336 unmapped: 25329664 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 101646336 unmapped: 25329664 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 101646336 unmapped: 25329664 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 101646336 unmapped: 25329664 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1128836 data_alloc: 218103808 data_used: 180224
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 101646336 unmapped: 25329664 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 101646336 unmapped: 25329664 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fac92000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fac92000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 101646336 unmapped: 25329664 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 101646336 unmapped: 25329664 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fac92000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 101646336 unmapped: 25329664 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fac92000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1128836 data_alloc: 218103808 data_used: 180224
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 101646336 unmapped: 25329664 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 101646336 unmapped: 25329664 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 101646336 unmapped: 25329664 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d3c4800 session 0x559f2da1f860
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d5b2000 session 0x559f2d636f00
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d5b2400 session 0x559f2d5ee3c0
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d680c00 session 0x559f2cc5ed20
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 101646336 unmapped: 25329664 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 33.925148010s of 34.002922058s, submitted: 29
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d680c00 session 0x559f2a9925a0
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d0e9800 session 0x559f2d82e3c0
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d3c4800 session 0x559f2d82fe00
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2f76c000 session 0x559f2c5c9860
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2f76c400 session 0x559f2b2d8000
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 101851136 unmapped: 25124864 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1193305 data_alloc: 218103808 data_used: 180224
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa497000/0x0/0x4ffc00000, data 0x110d2e6/0x11d5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 101851136 unmapped: 25124864 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 101851136 unmapped: 25124864 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 101851136 unmapped: 25124864 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 101851136 unmapped: 25124864 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 101851136 unmapped: 25124864 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195599 data_alloc: 218103808 data_used: 180224
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2f76c400 session 0x559f2a958960
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 101908480 unmapped: 25067520 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa497000/0x0/0x4ffc00000, data 0x110d2e6/0x11d5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [0,0,0,0,0,0,1])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 101916672 unmapped: 25059328 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 103686144 unmapped: 23289856 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 103882752 unmapped: 23093248 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 103882752 unmapped: 23093248 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1244379 data_alloc: 218103808 data_used: 7331840
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 103882752 unmapped: 23093248 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 103882752 unmapped: 23093248 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa496000/0x0/0x4ffc00000, data 0x110d309/0x11d6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 103882752 unmapped: 23093248 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 103882752 unmapped: 23093248 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa496000/0x0/0x4ffc00000, data 0x110d309/0x11d6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 103882752 unmapped: 23093248 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1244379 data_alloc: 218103808 data_used: 7331840
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 103882752 unmapped: 23093248 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 103882752 unmapped: 23093248 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.476533890s of 18.995376587s, submitted: 43
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 106659840 unmapped: 20316160 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa178000/0x0/0x4ffc00000, data 0x142b309/0x14f4000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108077056 unmapped: 18898944 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa100000/0x0/0x4ffc00000, data 0x14a3309/0x156c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 109371392 unmapped: 17604608 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa100000/0x0/0x4ffc00000, data 0x14a3309/0x156c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1282047 data_alloc: 218103808 data_used: 8519680
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 109445120 unmapped: 17530880 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa0f1000/0x0/0x4ffc00000, data 0x14b1309/0x157a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 109445120 unmapped: 17530880 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa0f1000/0x0/0x4ffc00000, data 0x14b1309/0x157a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 109445120 unmapped: 17530880 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 109453312 unmapped: 17522688 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa0f1000/0x0/0x4ffc00000, data 0x14b1309/0x157a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 109453312 unmapped: 17522688 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1282047 data_alloc: 218103808 data_used: 8519680
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 109453312 unmapped: 17522688 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 109453312 unmapped: 17522688 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 109453312 unmapped: 17522688 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 109453312 unmapped: 17522688 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa0f1000/0x0/0x4ffc00000, data 0x14b1309/0x157a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 109461504 unmapped: 17514496 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1282047 data_alloc: 218103808 data_used: 8519680
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa0f1000/0x0/0x4ffc00000, data 0x14b1309/0x157a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 109461504 unmapped: 17514496 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 109461504 unmapped: 17514496 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 109461504 unmapped: 17514496 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 109461504 unmapped: 17514496 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 109461504 unmapped: 17514496 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1282047 data_alloc: 218103808 data_used: 8519680
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 109461504 unmapped: 17514496 heap: 126976000 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa0f1000/0x0/0x4ffc00000, data 0x14b1309/0x157a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d680c00 session 0x559f2dbe81e0
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2f76c000 session 0x559f2c424b40
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2f76c800 session 0x559f2c5df860
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2f76cc00 session 0x559f2c8b1e00
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.359991074s of 18.809175491s, submitted: 62
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2f76cc00 session 0x559f2c8b1c20
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d680c00 session 0x559f2a9a2960
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2f76c000 session 0x559f2d5ee000
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2f76c400 session 0x559f2d5ee780
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2f76c800 session 0x559f2d5eeb40
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108208128 unmapped: 22970368 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108208128 unmapped: 22970368 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108216320 unmapped: 22962176 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9893000/0x0/0x4ffc00000, data 0x1d0f319/0x1dd9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108216320 unmapped: 22962176 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1344055 data_alloc: 218103808 data_used: 8523776
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108216320 unmapped: 22962176 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108216320 unmapped: 22962176 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d680c00 session 0x559f2c5da1e0
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108216320 unmapped: 22962176 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108404736 unmapped: 22773760 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9893000/0x0/0x4ffc00000, data 0x1d0f319/0x1dd9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 114229248 unmapped: 16949248 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1403516 data_alloc: 234881024 data_used: 15618048
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 114229248 unmapped: 16949248 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9893000/0x0/0x4ffc00000, data 0x1d0f319/0x1dd9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 114245632 unmapped: 16932864 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9893000/0x0/0x4ffc00000, data 0x1d0f319/0x1dd9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 114245632 unmapped: 16932864 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 114245632 unmapped: 16932864 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 114245632 unmapped: 16932864 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.809376717s of 14.030103683s, submitted: 19
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1403852 data_alloc: 234881024 data_used: 15618048
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 114245632 unmapped: 16932864 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 114253824 unmapped: 16924672 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9893000/0x0/0x4ffc00000, data 0x1d0f319/0x1dd9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 114253824 unmapped: 16924672 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 114253824 unmapped: 16924672 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 115367936 unmapped: 15810560 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1431280 data_alloc: 234881024 data_used: 16175104
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117227520 unmapped: 13950976 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 118071296 unmapped: 13107200 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9519000/0x0/0x4ffc00000, data 0x2081319/0x214b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 118104064 unmapped: 13074432 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 118104064 unmapped: 13074432 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 118104064 unmapped: 13074432 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1439394 data_alloc: 234881024 data_used: 16089088
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 118104064 unmapped: 13074432 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 118104064 unmapped: 13074432 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.27869 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9519000/0x0/0x4ffc00000, data 0x2081319/0x214b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 118145024 unmapped: 13033472 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 118145024 unmapped: 13033472 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9519000/0x0/0x4ffc00000, data 0x2081319/0x214b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 118145024 unmapped: 13033472 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2f76c000 session 0x559f2a866b40
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.375069618s of 14.576653481s, submitted: 66
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2f76c400 session 0x559f2d8d0000
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1286810 data_alloc: 218103808 data_used: 6938624
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2f76d800 session 0x559f2dbe94a0
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 111828992 unmapped: 19349504 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 111828992 unmapped: 19349504 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d0e8400 session 0x559f2c8ae780
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d0e9000 session 0x559f2a997c20
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 111828992 unmapped: 19349504 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa0f2000/0x0/0x4ffc00000, data 0x14b1309/0x157a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 111828992 unmapped: 19349504 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d0e9800 session 0x559f2d9605a0
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d3c4800 session 0x559f2c8b1a40
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 111828992 unmapped: 19349504 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1150044 data_alloc: 218103808 data_used: 184320
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d680c00 session 0x559f2b6512c0
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa91a000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107184128 unmapped: 23994368 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107184128 unmapped: 23994368 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107184128 unmapped: 23994368 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107184128 unmapped: 23994368 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107184128 unmapped: 23994368 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1148764 data_alloc: 218103808 data_used: 180224
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa91a000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107184128 unmapped: 23994368 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107184128 unmapped: 23994368 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107184128 unmapped: 23994368 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.605167389s of 13.440299034s, submitted: 69
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107184128 unmapped: 23994368 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa91a000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107184128 unmapped: 23994368 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa91a000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1148896 data_alloc: 218103808 data_used: 180224
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107184128 unmapped: 23994368 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107184128 unmapped: 23994368 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107184128 unmapped: 23994368 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107184128 unmapped: 23994368 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107184128 unmapped: 23994368 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fac92000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1151336 data_alloc: 218103808 data_used: 180224
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107184128 unmapped: 23994368 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107184128 unmapped: 23994368 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107184128 unmapped: 23994368 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107184128 unmapped: 23994368 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fac92000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107184128 unmapped: 23994368 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1151336 data_alloc: 218103808 data_used: 180224
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107184128 unmapped: 23994368 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fac92000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107184128 unmapped: 23994368 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fac92000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.465369225s of 14.476176262s, submitted: 3
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107184128 unmapped: 23994368 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107184128 unmapped: 23994368 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107184128 unmapped: 23994368 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1151204 data_alloc: 218103808 data_used: 180224
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107184128 unmapped: 23994368 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107184128 unmapped: 23994368 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fac92000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107184128 unmapped: 23994368 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fac92000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107184128 unmapped: 23994368 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fac92000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107184128 unmapped: 23994368 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1151204 data_alloc: 218103808 data_used: 180224
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107184128 unmapped: 23994368 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2f76d800 session 0x559f2cc5e000
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2f76dc00 session 0x559f2d82e780
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2f76dc00 session 0x559f2d0534a0
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d0e9800 session 0x559f2c8afc20
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d3c4800 session 0x559f2d8d0d20
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d680c00 session 0x559f2cbf7680
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 24354816 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 24354816 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa77b000/0x0/0x4ffc00000, data 0xe2a2d6/0xef1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 24354816 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 24354816 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1190883 data_alloc: 218103808 data_used: 180224
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 24354816 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 24354816 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 24354816 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2f76d400 session 0x559f2d8d05a0
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1800.1 total, 600.0 interval#012Cumulative writes: 10K writes, 40K keys, 10K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.02 MB/s#012Cumulative WAL: 10K writes, 2666 syncs, 4.09 writes per sync, written: 0.03 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1892 writes, 5856 keys, 1892 commit groups, 1.0 writes per commit group, ingest: 6.53 MB, 0.01 MB/s#012Interval WAL: 1892 writes, 779 syncs, 2.43 writes per sync, written: 0.01 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa77b000/0x0/0x4ffc00000, data 0xe2a2d6/0xef1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2f76d400 session 0x559f2cadd2c0
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 24354816 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 24354816 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2c59a000 session 0x559f2cc5fc20
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 17.399578094s of 17.499835968s, submitted: 27
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2c59bc00 session 0x559f2d637680
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1192697 data_alloc: 218103808 data_used: 180224
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 106807296 unmapped: 24371200 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 106807296 unmapped: 24371200 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa77a000/0x0/0x4ffc00000, data 0xe2a2e6/0xef2000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108134400 unmapped: 23044096 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa77a000/0x0/0x4ffc00000, data 0xe2a2e6/0xef2000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108167168 unmapped: 23011328 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108167168 unmapped: 23011328 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1228417 data_alloc: 218103808 data_used: 5488640
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108167168 unmapped: 23011328 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108167168 unmapped: 23011328 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108175360 unmapped: 23003136 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa77a000/0x0/0x4ffc00000, data 0xe2a2e6/0xef2000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108208128 unmapped: 22970368 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108208128 unmapped: 22970368 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1228417 data_alloc: 218103808 data_used: 5488640
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108208128 unmapped: 22970368 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108208128 unmapped: 22970368 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.437581062s of 12.444223404s, submitted: 1
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 109305856 unmapped: 21872640 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 109371392 unmapped: 21807104 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa4e0000/0x0/0x4ffc00000, data 0x10be2e6/0x1186000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 109895680 unmapped: 21282816 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa4ca000/0x0/0x4ffc00000, data 0x10cc2e6/0x1194000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1256751 data_alloc: 218103808 data_used: 5910528
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 109895680 unmapped: 21282816 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 109895680 unmapped: 21282816 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa4ca000/0x0/0x4ffc00000, data 0x10cc2e6/0x1194000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 109895680 unmapped: 21282816 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 109895680 unmapped: 21282816 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 109895680 unmapped: 21282816 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1256751 data_alloc: 218103808 data_used: 5910528
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 109903872 unmapped: 21274624 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 109903872 unmapped: 21274624 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa4ca000/0x0/0x4ffc00000, data 0x10cc2e6/0x1194000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 109903872 unmapped: 21274624 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 109903872 unmapped: 21274624 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 109912064 unmapped: 21266432 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1256751 data_alloc: 218103808 data_used: 5910528
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa4ca000/0x0/0x4ffc00000, data 0x10cc2e6/0x1194000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 109912064 unmapped: 21266432 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 109912064 unmapped: 21266432 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 109912064 unmapped: 21266432 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa4ca000/0x0/0x4ffc00000, data 0x10cc2e6/0x1194000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 109912064 unmapped: 21266432 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 109912064 unmapped: 21266432 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa4ca000/0x0/0x4ffc00000, data 0x10cc2e6/0x1194000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1256751 data_alloc: 218103808 data_used: 5910528
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 109912064 unmapped: 21266432 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 109920256 unmapped: 21258240 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 109920256 unmapped: 21258240 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa4ca000/0x0/0x4ffc00000, data 0x10cc2e6/0x1194000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 109920256 unmapped: 21258240 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 109920256 unmapped: 21258240 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1256751 data_alloc: 218103808 data_used: 5910528
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 109920256 unmapped: 21258240 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 109920256 unmapped: 21258240 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa4ca000/0x0/0x4ffc00000, data 0x10cc2e6/0x1194000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 109920256 unmapped: 21258240 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 109920256 unmapped: 21258240 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d680c00 session 0x559f2a9703c0
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2f76d800 session 0x559f2c36ba40
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2c59a000 session 0x559f2c36a1e0
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2c59bc00 session 0x559f2cc5ed20
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 27.291732788s of 27.440547943s, submitted: 53
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 109658112 unmapped: 21520384 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d680c00 session 0x559f2dbe92c0
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1283893 data_alloc: 218103808 data_used: 5914624
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 110706688 unmapped: 20471808 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa1a1000/0x0/0x4ffc00000, data 0x14032e6/0x14cb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 110706688 unmapped: 20471808 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa1a1000/0x0/0x4ffc00000, data 0x14032e6/0x14cb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 110706688 unmapped: 20471808 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 110706688 unmapped: 20471808 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 110706688 unmapped: 20471808 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa1a1000/0x0/0x4ffc00000, data 0x14032e6/0x14cb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284061 data_alloc: 218103808 data_used: 5914624
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 110706688 unmapped: 20471808 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 110706688 unmapped: 20471808 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 111951872 unmapped: 19226624 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa1a1000/0x0/0x4ffc00000, data 0x14032e6/0x14cb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 111951872 unmapped: 19226624 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 111951872 unmapped: 19226624 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1305797 data_alloc: 218103808 data_used: 9158656
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 111951872 unmapped: 19226624 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 111951872 unmapped: 19226624 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 111951872 unmapped: 19226624 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa1a1000/0x0/0x4ffc00000, data 0x14032e6/0x14cb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 111951872 unmapped: 19226624 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 111951872 unmapped: 19226624 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa1a1000/0x0/0x4ffc00000, data 0x14032e6/0x14cb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1305797 data_alloc: 218103808 data_used: 9158656
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 111951872 unmapped: 19226624 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 111951872 unmapped: 19226624 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 17.994756699s of 18.046251297s, submitted: 9
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 112689152 unmapped: 18489344 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 114974720 unmapped: 16203776 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 114745344 unmapped: 16433152 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f973e000/0x0/0x4ffc00000, data 0x1e502e6/0x1f18000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1392039 data_alloc: 234881024 data_used: 9400320
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 114745344 unmapped: 16433152 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 114745344 unmapped: 16433152 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f973e000/0x0/0x4ffc00000, data 0x1e502e6/0x1f18000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 114753536 unmapped: 16424960 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 114753536 unmapped: 16424960 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 114753536 unmapped: 16424960 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1392055 data_alloc: 234881024 data_used: 9400320
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 114753536 unmapped: 16424960 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 114753536 unmapped: 16424960 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 114753536 unmapped: 16424960 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f973e000/0x0/0x4ffc00000, data 0x1e502e6/0x1f18000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 114778112 unmapped: 16400384 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f973e000/0x0/0x4ffc00000, data 0x1e502e6/0x1f18000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 114778112 unmapped: 16400384 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1392055 data_alloc: 234881024 data_used: 9400320
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.862829208s of 13.072974205s, submitted: 92
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113451008 unmapped: 17727488 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2b646c00 session 0x559f2c5fc5a0
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2b647c00 session 0x559f2b6505a0
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113459200 unmapped: 17719296 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9754000/0x0/0x4ffc00000, data 0x1e502e6/0x1f18000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113459200 unmapped: 17719296 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2b37e000 session 0x559f2d953860
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113459200 unmapped: 17719296 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113459200 unmapped: 17719296 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1382143 data_alloc: 234881024 data_used: 9400320
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113459200 unmapped: 17719296 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9754000/0x0/0x4ffc00000, data 0x1e502e6/0x1f18000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113459200 unmapped: 17719296 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113467392 unmapped: 17711104 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113467392 unmapped: 17711104 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9754000/0x0/0x4ffc00000, data 0x1e502e6/0x1f18000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113483776 unmapped: 17694720 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1381879 data_alloc: 234881024 data_used: 9400320
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.620989799s of 10.001231194s, submitted: 134
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113631232 unmapped: 17547264 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9344000/0x0/0x4ffc00000, data 0x1e502e6/0x1f18000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113803264 unmapped: 17375232 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113803264 unmapped: 17375232 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113803264 unmapped: 17375232 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9344000/0x0/0x4ffc00000, data 0x1e502e6/0x1f18000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113811456 unmapped: 17367040 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1382143 data_alloc: 234881024 data_used: 9400320
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113811456 unmapped: 17367040 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113811456 unmapped: 17367040 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113819648 unmapped: 17358848 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113819648 unmapped: 17358848 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113819648 unmapped: 17358848 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1382143 data_alloc: 234881024 data_used: 9400320
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9344000/0x0/0x4ffc00000, data 0x1e502e6/0x1f18000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113819648 unmapped: 17358848 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.531607628s of 10.991118431s, submitted: 201
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113819648 unmapped: 17358848 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113827840 unmapped: 17350656 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9344000/0x0/0x4ffc00000, data 0x1e502e6/0x1f18000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113827840 unmapped: 17350656 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113827840 unmapped: 17350656 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9344000/0x0/0x4ffc00000, data 0x1e502e6/0x1f18000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1382143 data_alloc: 234881024 data_used: 9400320
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113827840 unmapped: 17350656 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113836032 unmapped: 17342464 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113836032 unmapped: 17342464 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113836032 unmapped: 17342464 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113836032 unmapped: 17342464 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1382143 data_alloc: 234881024 data_used: 9400320
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113836032 unmapped: 17342464 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9344000/0x0/0x4ffc00000, data 0x1e502e6/0x1f18000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113836032 unmapped: 17342464 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9344000/0x0/0x4ffc00000, data 0x1e502e6/0x1f18000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113836032 unmapped: 17342464 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113844224 unmapped: 17334272 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9344000/0x0/0x4ffc00000, data 0x1e502e6/0x1f18000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113844224 unmapped: 17334272 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.583388329s of 13.592965126s, submitted: 2
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1384159 data_alloc: 234881024 data_used: 9388032
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113967104 unmapped: 17211392 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113967104 unmapped: 17211392 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113967104 unmapped: 17211392 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113975296 unmapped: 17203200 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9344000/0x0/0x4ffc00000, data 0x1e502e6/0x1f18000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113983488 unmapped: 17195008 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9344000/0x0/0x4ffc00000, data 0x1e502e6/0x1f18000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1384663 data_alloc: 234881024 data_used: 9388032
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113983488 unmapped: 17195008 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113983488 unmapped: 17195008 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2f76d400 session 0x559f2d637860
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 112812032 unmapped: 18366464 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2b37e000 session 0x559f2b2d8b40
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 112156672 unmapped: 19021824 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa0c8000/0x0/0x4ffc00000, data 0x10cc2e6/0x1194000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa0c8000/0x0/0x4ffc00000, data 0x10cc2e6/0x1194000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 112156672 unmapped: 19021824 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1259535 data_alloc: 218103808 data_used: 5898240
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2b37fc00 session 0x559f2d052b40
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 112156672 unmapped: 19021824 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 112156672 unmapped: 19021824 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 112156672 unmapped: 19021824 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.979496956s of 13.032286644s, submitted: 26
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 112156672 unmapped: 19021824 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 112156672 unmapped: 19021824 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1259703 data_alloc: 218103808 data_used: 5898240
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa0c8000/0x0/0x4ffc00000, data 0x10cc2e6/0x1194000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 112156672 unmapped: 19021824 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa0c8000/0x0/0x4ffc00000, data 0x10cc2e6/0x1194000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 112156672 unmapped: 19021824 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d3c4800 session 0x559f2d5321e0
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d0e9800 session 0x559f2c6481e0
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 112156672 unmapped: 19021824 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2c59bc00 session 0x559f2dbe8960
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108273664 unmapped: 22904832 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa486000/0x0/0x4ffc00000, data 0x914284/0x9db000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108273664 unmapped: 22904832 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1166102 data_alloc: 218103808 data_used: 180224
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108273664 unmapped: 22904832 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108273664 unmapped: 22904832 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa487000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108273664 unmapped: 22904832 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108273664 unmapped: 22904832 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108273664 unmapped: 22904832 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa487000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1166102 data_alloc: 218103808 data_used: 180224
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108273664 unmapped: 22904832 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa487000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108273664 unmapped: 22904832 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa487000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108273664 unmapped: 22904832 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d5b3000 session 0x559f2da1c3c0
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2b37e800 session 0x559f2dbe9680
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108273664 unmapped: 22904832 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108273664 unmapped: 22904832 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1166102 data_alloc: 218103808 data_used: 180224
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108273664 unmapped: 22904832 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108273664 unmapped: 22904832 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa487000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108273664 unmapped: 22904832 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108273664 unmapped: 22904832 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108273664 unmapped: 22904832 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1166102 data_alloc: 218103808 data_used: 180224
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108273664 unmapped: 22904832 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108273664 unmapped: 22904832 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa487000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108273664 unmapped: 22904832 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108273664 unmapped: 22904832 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 26.247751236s of 26.306289673s, submitted: 19
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108273664 unmapped: 22904832 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1166234 data_alloc: 218103808 data_used: 180224
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108273664 unmapped: 22904832 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa487000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108273664 unmapped: 22904832 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108273664 unmapped: 22904832 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108281856 unmapped: 22896640 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa487000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108281856 unmapped: 22896640 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1166234 data_alloc: 218103808 data_used: 180224
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa487000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108281856 unmapped: 22896640 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108281856 unmapped: 22896640 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108281856 unmapped: 22896640 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108281856 unmapped: 22896640 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108281856 unmapped: 22896640 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1165942 data_alloc: 218103808 data_used: 180224
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108281856 unmapped: 22896640 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108281856 unmapped: 22896640 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.318322182s of 13.376296997s, submitted: 2
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 110444544 unmapped: 20733952 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d0e9800 session 0x559f2c5df4a0
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108290048 unmapped: 22888448 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa672000/0x0/0x4ffc00000, data 0xb24274/0xbea000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108290048 unmapped: 22888448 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1180354 data_alloc: 218103808 data_used: 180224
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108290048 unmapped: 22888448 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108290048 unmapped: 22888448 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d3c4800 session 0x559f2a999e00
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108290048 unmapped: 22888448 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d5b3000 session 0x559f2c8b14a0
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108290048 unmapped: 22888448 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d680c00 session 0x559f29d55c20
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa672000/0x0/0x4ffc00000, data 0xb24274/0xbea000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2f76d400 session 0x559f2c5c9860
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108281856 unmapped: 22896640 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1180354 data_alloc: 218103808 data_used: 180224
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108281856 unmapped: 22896640 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108281856 unmapped: 22896640 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108281856 unmapped: 22896640 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108281856 unmapped: 22896640 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa672000/0x0/0x4ffc00000, data 0xb24274/0xbea000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d0e9800 session 0x559f2cadc000
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108281856 unmapped: 22896640 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.305717468s of 12.769754410s, submitted: 2
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1166826 data_alloc: 218103808 data_used: 180224
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107290624 unmapped: 23887872 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d3c4800 session 0x559f2cc5e000
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107290624 unmapped: 23887872 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107290624 unmapped: 23887872 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107290624 unmapped: 23887872 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107290624 unmapped: 23887872 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1166826 data_alloc: 218103808 data_used: 180224
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107290624 unmapped: 23887872 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107290624 unmapped: 23887872 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107290624 unmapped: 23887872 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107290624 unmapped: 23887872 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107290624 unmapped: 23887872 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1166826 data_alloc: 218103808 data_used: 180224
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107298816 unmapped: 23879680 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107298816 unmapped: 23879680 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107298816 unmapped: 23879680 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107298816 unmapped: 23879680 heap: 131178496 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.356574059s of 13.715682030s, submitted: 3
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d5b3000 session 0x559f2a971a40
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d680c00 session 0x559f2dd0ad20
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107896832 unmapped: 30638080 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1237939 data_alloc: 218103808 data_used: 180224
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107896832 unmapped: 30638080 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9ee3000/0x0/0x4ffc00000, data 0x12b22d6/0x1379000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107896832 unmapped: 30638080 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107896832 unmapped: 30638080 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107896832 unmapped: 30638080 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2f76d400 session 0x559f2cc5eb40
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108199936 unmapped: 30334976 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239823 data_alloc: 218103808 data_used: 184320
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108199936 unmapped: 30334976 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9ebf000/0x0/0x4ffc00000, data 0x12d62d6/0x139d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108363776 unmapped: 30171136 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 111017984 unmapped: 27516928 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 111017984 unmapped: 27516928 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: mgrc ms_handle_reset ms_handle_reset con 0x559f2d0e8c00
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/3802415056
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/3802415056,v1:192.168.122.100:6801/3802415056]
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: mgrc handle_mgr_configure stats_period=5
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 111017984 unmapped: 27516928 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1300015 data_alloc: 218103808 data_used: 9142272
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 111017984 unmapped: 27516928 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9ebf000/0x0/0x4ffc00000, data 0x12d62d6/0x139d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 111017984 unmapped: 27516928 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d0e9800 session 0x559f2c36a3c0
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d3c4800 session 0x559f2d82fa40
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.177964211s of 13.560062408s, submitted: 29
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 110714880 unmapped: 27820032 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107118592 unmapped: 31416320 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d5b3000 session 0x559f2c5fc5a0
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107118592 unmapped: 31416320 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1173331 data_alloc: 218103808 data_used: 180224
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa881000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107118592 unmapped: 31416320 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa881000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107118592 unmapped: 31416320 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107118592 unmapped: 31416320 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107118592 unmapped: 31416320 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107118592 unmapped: 31416320 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1173331 data_alloc: 218103808 data_used: 180224
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa881000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107118592 unmapped: 31416320 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107118592 unmapped: 31416320 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107118592 unmapped: 31416320 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107118592 unmapped: 31416320 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107118592 unmapped: 31416320 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa881000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1173331 data_alloc: 218103808 data_used: 180224
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107118592 unmapped: 31416320 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107118592 unmapped: 31416320 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107118592 unmapped: 31416320 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107118592 unmapped: 31416320 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107118592 unmapped: 31416320 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1173331 data_alloc: 218103808 data_used: 180224
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa881000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107118592 unmapped: 31416320 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107118592 unmapped: 31416320 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107118592 unmapped: 31416320 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107118592 unmapped: 31416320 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107118592 unmapped: 31416320 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1173331 data_alloc: 218103808 data_used: 180224
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107118592 unmapped: 31416320 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa881000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107118592 unmapped: 31416320 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107118592 unmapped: 31416320 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107118592 unmapped: 31416320 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107118592 unmapped: 31416320 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1173331 data_alloc: 218103808 data_used: 180224
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107118592 unmapped: 31416320 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107118592 unmapped: 31416320 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa881000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107118592 unmapped: 31416320 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107118592 unmapped: 31416320 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107118592 unmapped: 31416320 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1173331 data_alloc: 218103808 data_used: 180224
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107126784 unmapped: 31408128 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107126784 unmapped: 31408128 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa881000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107126784 unmapped: 31408128 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107126784 unmapped: 31408128 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107126784 unmapped: 31408128 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1173331 data_alloc: 218103808 data_used: 180224
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 36.629310608s of 38.173881531s, submitted: 16
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d680c00 session 0x559f2c8b03c0
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa881000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107159552 unmapped: 31375360 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107159552 unmapped: 31375360 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa4a7000/0x0/0x4ffc00000, data 0xcef274/0xdb5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107159552 unmapped: 31375360 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107159552 unmapped: 31375360 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107159552 unmapped: 31375360 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1205109 data_alloc: 218103808 data_used: 180224
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107159552 unmapped: 31375360 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa4a7000/0x0/0x4ffc00000, data 0xcef274/0xdb5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2f76dc00 session 0x559f2c8ae780
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107159552 unmapped: 31375360 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d0e9800 session 0x559f2d636960
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107159552 unmapped: 31375360 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d3c4800 session 0x559f2a866000
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d5b3000 session 0x559f2a955680
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107159552 unmapped: 31375360 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107159552 unmapped: 31375360 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1205109 data_alloc: 218103808 data_used: 180224
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 107323392 unmapped: 31211520 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa4a7000/0x0/0x4ffc00000, data 0xcef274/0xdb5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108249088 unmapped: 30285824 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108249088 unmapped: 30285824 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108249088 unmapped: 30285824 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa4a7000/0x0/0x4ffc00000, data 0xcef274/0xdb5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108249088 unmapped: 30285824 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1232013 data_alloc: 218103808 data_used: 4112384
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108249088 unmapped: 30285824 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108249088 unmapped: 30285824 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108249088 unmapped: 30285824 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa4a7000/0x0/0x4ffc00000, data 0xcef274/0xdb5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108249088 unmapped: 30285824 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108249088 unmapped: 30285824 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1232013 data_alloc: 218103808 data_used: 4112384
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 108249088 unmapped: 30285824 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 20.716075897s of 20.768712997s, submitted: 10
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 116350976 unmapped: 22183936 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa4a7000/0x0/0x4ffc00000, data 0xcef274/0xdb5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 114319360 unmapped: 24215552 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113745920 unmapped: 24788992 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9bff000/0x0/0x4ffc00000, data 0x158f274/0x1655000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113745920 unmapped: 24788992 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9bf5000/0x0/0x4ffc00000, data 0x1599274/0x165f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1304949 data_alloc: 218103808 data_used: 5197824
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113745920 unmapped: 24788992 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113745920 unmapped: 24788992 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113745920 unmapped: 24788992 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113745920 unmapped: 24788992 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9bf5000/0x0/0x4ffc00000, data 0x1599274/0x165f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9bf5000/0x0/0x4ffc00000, data 0x1599274/0x165f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113762304 unmapped: 24772608 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1305101 data_alloc: 218103808 data_used: 5201920
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113762304 unmapped: 24772608 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9bf5000/0x0/0x4ffc00000, data 0x1599274/0x165f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113762304 unmapped: 24772608 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113762304 unmapped: 24772608 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113762304 unmapped: 24772608 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 113762304 unmapped: 24772608 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d680c00 session 0x559f2a9774a0
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1305101 data_alloc: 218103808 data_used: 5201920
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.963165283s of 14.391463280s, submitted: 83
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 111132672 unmapped: 27402240 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2c8c1c00 session 0x559f2d8d10e0
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 111132672 unmapped: 27402240 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 111132672 unmapped: 27402240 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 111132672 unmapped: 27402240 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 111132672 unmapped: 27402240 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1178119 data_alloc: 218103808 data_used: 180224
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 111132672 unmapped: 27402240 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 111132672 unmapped: 27402240 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 111132672 unmapped: 27402240 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 111132672 unmapped: 27402240 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 111132672 unmapped: 27402240 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1178119 data_alloc: 218103808 data_used: 180224
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 111132672 unmapped: 27402240 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 111132672 unmapped: 27402240 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 111132672 unmapped: 27402240 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 111132672 unmapped: 27402240 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 111132672 unmapped: 27402240 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1178119 data_alloc: 218103808 data_used: 180224
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 111132672 unmapped: 27402240 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 111132672 unmapped: 27402240 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 111132672 unmapped: 27402240 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 111132672 unmapped: 27402240 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 111132672 unmapped: 27402240 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1178119 data_alloc: 218103808 data_used: 180224
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 111132672 unmapped: 27402240 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 111132672 unmapped: 27402240 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 111132672 unmapped: 27402240 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 22.524868011s of 22.775295258s, submitted: 9
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2c8c1c00 session 0x559f2d053a40
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa46b000/0x0/0x4ffc00000, data 0xd2b274/0xdf1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 111312896 unmapped: 27222016 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa46b000/0x0/0x4ffc00000, data 0xd2b274/0xdf1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 111312896 unmapped: 27222016 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1251947 data_alloc: 218103808 data_used: 180224
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9f17000/0x0/0x4ffc00000, data 0x127f274/0x1345000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 110100480 unmapped: 28434432 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9f17000/0x0/0x4ffc00000, data 0x127f274/0x1345000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9f17000/0x0/0x4ffc00000, data 0x127f274/0x1345000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 110100480 unmapped: 28434432 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 110100480 unmapped: 28434432 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 110100480 unmapped: 28434432 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 110100480 unmapped: 28434432 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1251947 data_alloc: 218103808 data_used: 180224
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 110100480 unmapped: 28434432 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d0e9800 session 0x559f2d82f0e0
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 110247936 unmapped: 28286976 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9f17000/0x0/0x4ffc00000, data 0x127f274/0x1345000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 110247936 unmapped: 28286976 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9ef2000/0x0/0x4ffc00000, data 0x12a3297/0x136a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 110247936 unmapped: 28286976 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 112893952 unmapped: 25640960 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1314620 data_alloc: 218103808 data_used: 8757248
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 112893952 unmapped: 25640960 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 112893952 unmapped: 25640960 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 112893952 unmapped: 25640960 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9ef2000/0x0/0x4ffc00000, data 0x12a3297/0x136a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 112893952 unmapped: 25640960 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 112893952 unmapped: 25640960 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1314620 data_alloc: 218103808 data_used: 8757248
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 112902144 unmapped: 25632768 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 112902144 unmapped: 25632768 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 112902144 unmapped: 25632768 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9ef2000/0x0/0x4ffc00000, data 0x12a3297/0x136a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 112934912 unmapped: 25600000 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 20.691644669s of 20.797815323s, submitted: 21
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9866000/0x0/0x4ffc00000, data 0x1927297/0x19ee000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d680c00 session 0x559f2caddc20
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 120578048 unmapped: 17956864 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1409342 data_alloc: 234881024 data_used: 10747904
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 118767616 unmapped: 19767296 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 118767616 unmapped: 19767296 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 118767616 unmapped: 19767296 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d5b0c00 session 0x559f2a954b40
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 118784000 unmapped: 19750912 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d6ca800 session 0x559f2da1f860
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f95d6000/0x0/0x4ffc00000, data 0x1bbe297/0x1c85000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2c8c1c00 session 0x559f2c8b10e0
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 118800384 unmapped: 19734528 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d0e9800 session 0x559f2d5efe00
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1410709 data_alloc: 234881024 data_used: 10760192
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 118808576 unmapped: 19726336 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 118996992 unmapped: 19537920 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 120479744 unmapped: 18055168 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 120479744 unmapped: 18055168 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f95d6000/0x0/0x4ffc00000, data 0x1bbe2a7/0x1c86000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 120479744 unmapped: 18055168 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1426209 data_alloc: 234881024 data_used: 12935168
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 120479744 unmapped: 18055168 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f95d6000/0x0/0x4ffc00000, data 0x1bbe2a7/0x1c86000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 120479744 unmapped: 18055168 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f95d6000/0x0/0x4ffc00000, data 0x1bbe2a7/0x1c86000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 120512512 unmapped: 18022400 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 120512512 unmapped: 18022400 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 120512512 unmapped: 18022400 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1426209 data_alloc: 234881024 data_used: 12935168
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 120512512 unmapped: 18022400 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 120545280 unmapped: 17989632 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.309732437s of 18.578636169s, submitted: 92
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 124076032 unmapped: 14458880 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f8e3f000/0x0/0x4ffc00000, data 0x234f2a7/0x2417000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 124592128 unmapped: 13942784 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 124731392 unmapped: 13803520 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1498987 data_alloc: 234881024 data_used: 13889536
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 124731392 unmapped: 13803520 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 124731392 unmapped: 13803520 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 124731392 unmapped: 13803520 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 124731392 unmapped: 13803520 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f8e23000/0x0/0x4ffc00000, data 0x23682a7/0x2430000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f8e23000/0x0/0x4ffc00000, data 0x23682a7/0x2430000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 124731392 unmapped: 13803520 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1498987 data_alloc: 234881024 data_used: 13889536
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 124731392 unmapped: 13803520 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 124731392 unmapped: 13803520 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 124747776 unmapped: 13787136 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 124747776 unmapped: 13787136 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f8e23000/0x0/0x4ffc00000, data 0x23682a7/0x2430000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 124747776 unmapped: 13787136 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1500203 data_alloc: 234881024 data_used: 13967360
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.244200706s of 13.420284271s, submitted: 77
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 124747776 unmapped: 13787136 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 124747776 unmapped: 13787136 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d5b0c00 session 0x559f2a976b40
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d680c00 session 0x559f2d5321e0
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 124084224 unmapped: 14450688 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f8e2c000/0x0/0x4ffc00000, data 0x23682a7/0x2430000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [0,1])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d608400 session 0x559f2d8d0d20
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 124084224 unmapped: 14450688 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 124084224 unmapped: 14450688 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1387580 data_alloc: 234881024 data_used: 10768384
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 124084224 unmapped: 14450688 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 124084224 unmapped: 14450688 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d3c4800 session 0x559f2b6505a0
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d5b3000 session 0x559f2d8d0f00
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d608400 session 0x559f2c6481e0
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f98fd000/0x0/0x4ffc00000, data 0x1898297/0x195f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117194752 unmapped: 21340160 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117194752 unmapped: 21340160 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117194752 unmapped: 21340160 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1197050 data_alloc: 218103808 data_used: 180224
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117194752 unmapped: 21340160 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117194752 unmapped: 21340160 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117194752 unmapped: 21340160 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa85d000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117194752 unmapped: 21340160 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117194752 unmapped: 21340160 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1197050 data_alloc: 218103808 data_used: 180224
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117194752 unmapped: 21340160 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117194752 unmapped: 21340160 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117194752 unmapped: 21340160 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa85d000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117194752 unmapped: 21340160 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117194752 unmapped: 21340160 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1197050 data_alloc: 218103808 data_used: 180224
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117194752 unmapped: 21340160 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa85d000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117194752 unmapped: 21340160 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117194752 unmapped: 21340160 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117194752 unmapped: 21340160 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117194752 unmapped: 21340160 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1197050 data_alloc: 218103808 data_used: 180224
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117194752 unmapped: 21340160 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa85d000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117194752 unmapped: 21340160 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117194752 unmapped: 21340160 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa85d000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117194752 unmapped: 21340160 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2c8c1c00 session 0x559f2dab5680
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d0e9800 session 0x559f2b2d90e0
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2c8c1c00 session 0x559f2d636780
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d3c4800 session 0x559f2d5eeb40
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 28.304420471s of 28.535713196s, submitted: 47
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d5b3000 session 0x559f2a958f00
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d608400 session 0x559f2a9990e0
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d5b0c00 session 0x559f2cbf61e0
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2c8c1c00 session 0x559f2d8d1c20
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d3c4800 session 0x559f2a9961e0
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117112832 unmapped: 21422080 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa6ed000/0x0/0x4ffc00000, data 0xaa8284/0xb6f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1215353 data_alloc: 218103808 data_used: 180224
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa6ed000/0x0/0x4ffc00000, data 0xaa8284/0xb6f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117112832 unmapped: 21422080 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117112832 unmapped: 21422080 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa6ed000/0x0/0x4ffc00000, data 0xaa8284/0xb6f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117112832 unmapped: 21422080 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d5b3000 session 0x559f2d9530e0
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117129216 unmapped: 21405696 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa6ed000/0x0/0x4ffc00000, data 0xaa8284/0xb6f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d608400 session 0x559f2d952960
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117129216 unmapped: 21405696 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa6ed000/0x0/0x4ffc00000, data 0xaa8284/0xb6f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1215353 data_alloc: 218103808 data_used: 180224
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d680c00 session 0x559f2d953680
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2c8c1c00 session 0x559f2d952d20
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa6ed000/0x0/0x4ffc00000, data 0xaa8284/0xb6f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117129216 unmapped: 21405696 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117129216 unmapped: 21405696 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117129216 unmapped: 21405696 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117129216 unmapped: 21405696 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa6ed000/0x0/0x4ffc00000, data 0xaa8284/0xb6f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117129216 unmapped: 21405696 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1219133 data_alloc: 218103808 data_used: 704512
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117129216 unmapped: 21405696 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa6ed000/0x0/0x4ffc00000, data 0xaa8284/0xb6f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117129216 unmapped: 21405696 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa6ed000/0x0/0x4ffc00000, data 0xaa8284/0xb6f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117129216 unmapped: 21405696 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117129216 unmapped: 21405696 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117129216 unmapped: 21405696 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1219133 data_alloc: 218103808 data_used: 704512
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117137408 unmapped: 21397504 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa6ed000/0x0/0x4ffc00000, data 0xaa8284/0xb6f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117137408 unmapped: 21397504 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.240032196s of 18.298688889s, submitted: 18
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119087104 unmapped: 19447808 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa113000/0x0/0x4ffc00000, data 0x1082284/0x1149000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 118333440 unmapped: 20201472 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 118333440 unmapped: 20201472 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1266339 data_alloc: 218103808 data_used: 815104
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa10a000/0x0/0x4ffc00000, data 0x108a284/0x1151000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 118333440 unmapped: 20201472 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 118333440 unmapped: 20201472 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 118333440 unmapped: 20201472 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 118333440 unmapped: 20201472 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 118333440 unmapped: 20201472 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1266339 data_alloc: 218103808 data_used: 815104
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 118333440 unmapped: 20201472 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa10a000/0x0/0x4ffc00000, data 0x108a284/0x1151000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 118333440 unmapped: 20201472 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa10a000/0x0/0x4ffc00000, data 0x108a284/0x1151000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 118333440 unmapped: 20201472 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 118333440 unmapped: 20201472 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa10a000/0x0/0x4ffc00000, data 0x108a284/0x1151000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 118333440 unmapped: 20201472 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.515064240s of 12.640249252s, submitted: 32
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d3c4800 session 0x559f2d5ee960
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265651 data_alloc: 218103808 data_used: 815104
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2d5b3000 session 0x559f2d6372c0
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117653504 unmapped: 20881408 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117653504 unmapped: 20881408 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117653504 unmapped: 20881408 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117653504 unmapped: 20881408 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117653504 unmapped: 20881408 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117653504 unmapped: 20881408 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117653504 unmapped: 20881408 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117653504 unmapped: 20881408 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117653504 unmapped: 20881408 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117653504 unmapped: 20881408 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117653504 unmapped: 20881408 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117653504 unmapped: 20881408 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117653504 unmapped: 20881408 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117653504 unmapped: 20881408 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117653504 unmapped: 20881408 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117653504 unmapped: 20881408 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117653504 unmapped: 20881408 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117653504 unmapped: 20881408 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117653504 unmapped: 20881408 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117653504 unmapped: 20881408 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117653504 unmapped: 20881408 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117653504 unmapped: 20881408 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117653504 unmapped: 20881408 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117653504 unmapped: 20881408 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117653504 unmapped: 20881408 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117653504 unmapped: 20881408 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117653504 unmapped: 20881408 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117653504 unmapped: 20881408 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117653504 unmapped: 20881408 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117661696 unmapped: 20873216 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117661696 unmapped: 20873216 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117661696 unmapped: 20873216 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117661696 unmapped: 20873216 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117661696 unmapped: 20873216 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117661696 unmapped: 20873216 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117661696 unmapped: 20873216 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117661696 unmapped: 20873216 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117661696 unmapped: 20873216 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117661696 unmapped: 20873216 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117661696 unmapped: 20873216 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117669888 unmapped: 20865024 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117669888 unmapped: 20865024 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117669888 unmapped: 20865024 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117669888 unmapped: 20865024 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117669888 unmapped: 20865024 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117669888 unmapped: 20865024 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117669888 unmapped: 20865024 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117669888 unmapped: 20865024 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117669888 unmapped: 20865024 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117669888 unmapped: 20865024 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117669888 unmapped: 20865024 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117669888 unmapped: 20865024 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117669888 unmapped: 20865024 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117669888 unmapped: 20865024 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117669888 unmapped: 20865024 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117669888 unmapped: 20865024 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117678080 unmapped: 20856832 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117678080 unmapped: 20856832 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117678080 unmapped: 20856832 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117678080 unmapped: 20856832 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117678080 unmapped: 20856832 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117678080 unmapped: 20856832 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117678080 unmapped: 20856832 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117678080 unmapped: 20856832 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117678080 unmapped: 20856832 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117678080 unmapped: 20856832 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117678080 unmapped: 20856832 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117678080 unmapped: 20856832 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117678080 unmapped: 20856832 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117678080 unmapped: 20856832 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117678080 unmapped: 20856832 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117833728 unmapped: 20701184 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: do_command 'config diff' '{prefix=config diff}'
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: do_command 'config show' '{prefix=config show}'
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: do_command 'counter dump' '{prefix=counter dump}'
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: do_command 'counter schema' '{prefix=counter schema}'
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117456896 unmapped: 21078016 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117882880 unmapped: 20652032 heap: 138534912 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: do_command 'log dump' '{prefix=log dump}'
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: do_command 'log dump' '{prefix=log dump}' result is 0 bytes
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117432320 unmapped: 32145408 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: do_command 'perf dump' '{prefix=perf dump}'
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: do_command 'perf dump' '{prefix=perf dump}' result is 0 bytes
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: do_command 'perf histogram dump' '{prefix=perf histogram dump}'
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: do_command 'perf histogram dump' '{prefix=perf histogram dump}' result is 0 bytes
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: do_command 'perf schema' '{prefix=perf schema}'
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: do_command 'perf schema' '{prefix=perf schema}' result is 0 bytes
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117186560 unmapped: 32391168 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117186560 unmapped: 32391168 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117186560 unmapped: 32391168 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117186560 unmapped: 32391168 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117186560 unmapped: 32391168 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117186560 unmapped: 32391168 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117186560 unmapped: 32391168 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117186560 unmapped: 32391168 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117186560 unmapped: 32391168 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117186560 unmapped: 32391168 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117186560 unmapped: 32391168 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117194752 unmapped: 32382976 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117194752 unmapped: 32382976 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117194752 unmapped: 32382976 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117194752 unmapped: 32382976 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117194752 unmapped: 32382976 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117194752 unmapped: 32382976 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117194752 unmapped: 32382976 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117194752 unmapped: 32382976 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117194752 unmapped: 32382976 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117194752 unmapped: 32382976 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117194752 unmapped: 32382976 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117194752 unmapped: 32382976 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117194752 unmapped: 32382976 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117194752 unmapped: 32382976 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117194752 unmapped: 32382976 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117202944 unmapped: 32374784 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117202944 unmapped: 32374784 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117202944 unmapped: 32374784 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117202944 unmapped: 32374784 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117202944 unmapped: 32374784 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117202944 unmapped: 32374784 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117202944 unmapped: 32374784 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117202944 unmapped: 32374784 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117202944 unmapped: 32374784 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117202944 unmapped: 32374784 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117202944 unmapped: 32374784 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117202944 unmapped: 32374784 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117202944 unmapped: 32374784 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117202944 unmapped: 32374784 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117202944 unmapped: 32374784 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117202944 unmapped: 32374784 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117202944 unmapped: 32374784 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117211136 unmapped: 32366592 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117211136 unmapped: 32366592 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117211136 unmapped: 32366592 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117211136 unmapped: 32366592 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117211136 unmapped: 32366592 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117211136 unmapped: 32366592 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117211136 unmapped: 32366592 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117211136 unmapped: 32366592 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117211136 unmapped: 32366592 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117211136 unmapped: 32366592 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117211136 unmapped: 32366592 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117219328 unmapped: 32358400 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117219328 unmapped: 32358400 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117219328 unmapped: 32358400 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117219328 unmapped: 32358400 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117219328 unmapped: 32358400 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117219328 unmapped: 32358400 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117219328 unmapped: 32358400 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117219328 unmapped: 32358400 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117219328 unmapped: 32358400 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117219328 unmapped: 32358400 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117219328 unmapped: 32358400 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117219328 unmapped: 32358400 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117219328 unmapped: 32358400 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117227520 unmapped: 32350208 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117227520 unmapped: 32350208 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117227520 unmapped: 32350208 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117227520 unmapped: 32350208 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117227520 unmapped: 32350208 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117227520 unmapped: 32350208 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117227520 unmapped: 32350208 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117227520 unmapped: 32350208 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117227520 unmapped: 32350208 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117227520 unmapped: 32350208 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117227520 unmapped: 32350208 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117227520 unmapped: 32350208 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117227520 unmapped: 32350208 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117227520 unmapped: 32350208 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117227520 unmapped: 32350208 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117235712 unmapped: 32342016 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117235712 unmapped: 32342016 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117235712 unmapped: 32342016 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117235712 unmapped: 32342016 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117235712 unmapped: 32342016 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117235712 unmapped: 32342016 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117235712 unmapped: 32342016 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117235712 unmapped: 32342016 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117235712 unmapped: 32342016 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117235712 unmapped: 32342016 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117235712 unmapped: 32342016 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117235712 unmapped: 32342016 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117235712 unmapped: 32342016 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117235712 unmapped: 32342016 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117235712 unmapped: 32342016 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117235712 unmapped: 32342016 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117235712 unmapped: 32342016 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117235712 unmapped: 32342016 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117235712 unmapped: 32342016 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117235712 unmapped: 32342016 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117235712 unmapped: 32342016 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117235712 unmapped: 32342016 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117235712 unmapped: 32342016 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117235712 unmapped: 32342016 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117243904 unmapped: 32333824 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117243904 unmapped: 32333824 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117243904 unmapped: 32333824 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117243904 unmapped: 32333824 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117243904 unmapped: 32333824 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117243904 unmapped: 32333824 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117243904 unmapped: 32333824 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117243904 unmapped: 32333824 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117252096 unmapped: 32325632 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 2400.1 total, 600.0 interval#012Cumulative writes: 12K writes, 46K keys, 12K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s#012Cumulative WAL: 12K writes, 3376 syncs, 3.74 writes per sync, written: 0.03 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1713 writes, 5631 keys, 1713 commit groups, 1.0 writes per commit group, ingest: 6.95 MB, 0.01 MB/s#012Interval WAL: 1713 writes, 710 syncs, 2.41 writes per sync, written: 0.01 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117252096 unmapped: 32325632 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117252096 unmapped: 32325632 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117252096 unmapped: 32325632 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117252096 unmapped: 32325632 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117252096 unmapped: 32325632 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117252096 unmapped: 32325632 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117252096 unmapped: 32325632 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117252096 unmapped: 32325632 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117252096 unmapped: 32325632 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117252096 unmapped: 32325632 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117260288 unmapped: 32317440 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117260288 unmapped: 32317440 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117260288 unmapped: 32317440 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117260288 unmapped: 32317440 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117260288 unmapped: 32317440 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117260288 unmapped: 32317440 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117260288 unmapped: 32317440 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117260288 unmapped: 32317440 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117260288 unmapped: 32317440 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117260288 unmapped: 32317440 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117260288 unmapped: 32317440 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117260288 unmapped: 32317440 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117260288 unmapped: 32317440 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117268480 unmapped: 32309248 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117268480 unmapped: 32309248 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117268480 unmapped: 32309248 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117268480 unmapped: 32309248 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117268480 unmapped: 32309248 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117268480 unmapped: 32309248 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117268480 unmapped: 32309248 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117268480 unmapped: 32309248 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117268480 unmapped: 32309248 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117268480 unmapped: 32309248 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117268480 unmapped: 32309248 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117276672 unmapped: 32301056 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117276672 unmapped: 32301056 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117276672 unmapped: 32301056 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117276672 unmapped: 32301056 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117276672 unmapped: 32301056 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117276672 unmapped: 32301056 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117276672 unmapped: 32301056 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117276672 unmapped: 32301056 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117276672 unmapped: 32301056 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117276672 unmapped: 32301056 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117276672 unmapped: 32301056 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117276672 unmapped: 32301056 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117284864 unmapped: 32292864 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117284864 unmapped: 32292864 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117284864 unmapped: 32292864 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117284864 unmapped: 32292864 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117284864 unmapped: 32292864 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117284864 unmapped: 32292864 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117284864 unmapped: 32292864 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117284864 unmapped: 32292864 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117284864 unmapped: 32292864 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117293056 unmapped: 32284672 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117293056 unmapped: 32284672 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117293056 unmapped: 32284672 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117293056 unmapped: 32284672 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117293056 unmapped: 32284672 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117293056 unmapped: 32284672 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117293056 unmapped: 32284672 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117293056 unmapped: 32284672 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117293056 unmapped: 32284672 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117293056 unmapped: 32284672 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117293056 unmapped: 32284672 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117293056 unmapped: 32284672 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117293056 unmapped: 32284672 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117293056 unmapped: 32284672 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117301248 unmapped: 32276480 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117301248 unmapped: 32276480 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117301248 unmapped: 32276480 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117301248 unmapped: 32276480 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117301248 unmapped: 32276480 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117301248 unmapped: 32276480 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117301248 unmapped: 32276480 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117301248 unmapped: 32276480 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117301248 unmapped: 32276480 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117301248 unmapped: 32276480 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117309440 unmapped: 32268288 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117309440 unmapped: 32268288 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 272.678894043s of 272.762756348s, submitted: 25
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117325824 unmapped: 32251904 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [0,0,1])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117325824 unmapped: 32251904 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 117465088 unmapped: 32112640 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119676928 unmapped: 29900800 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119676928 unmapped: 29900800 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119676928 unmapped: 29900800 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119676928 unmapped: 29900800 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119676928 unmapped: 29900800 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119676928 unmapped: 29900800 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119676928 unmapped: 29900800 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119676928 unmapped: 29900800 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119676928 unmapped: 29900800 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119676928 unmapped: 29900800 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119676928 unmapped: 29900800 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119676928 unmapped: 29900800 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119676928 unmapped: 29900800 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119676928 unmapped: 29900800 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119676928 unmapped: 29900800 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119676928 unmapped: 29900800 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119676928 unmapped: 29900800 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119676928 unmapped: 29900800 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119676928 unmapped: 29900800 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119676928 unmapped: 29900800 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119676928 unmapped: 29900800 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119676928 unmapped: 29900800 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119676928 unmapped: 29900800 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119676928 unmapped: 29900800 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119676928 unmapped: 29900800 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119685120 unmapped: 29892608 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119685120 unmapped: 29892608 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119685120 unmapped: 29892608 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119685120 unmapped: 29892608 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119685120 unmapped: 29892608 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119685120 unmapped: 29892608 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119685120 unmapped: 29892608 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119685120 unmapped: 29892608 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119685120 unmapped: 29892608 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119685120 unmapped: 29892608 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119685120 unmapped: 29892608 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119685120 unmapped: 29892608 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119685120 unmapped: 29892608 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119685120 unmapped: 29892608 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119685120 unmapped: 29892608 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119685120 unmapped: 29892608 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119685120 unmapped: 29892608 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119685120 unmapped: 29892608 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119685120 unmapped: 29892608 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119685120 unmapped: 29892608 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119685120 unmapped: 29892608 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119685120 unmapped: 29892608 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119685120 unmapped: 29892608 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119685120 unmapped: 29892608 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119685120 unmapped: 29892608 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119685120 unmapped: 29892608 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119685120 unmapped: 29892608 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119685120 unmapped: 29892608 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119685120 unmapped: 29892608 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119685120 unmapped: 29892608 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119685120 unmapped: 29892608 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119685120 unmapped: 29892608 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119685120 unmapped: 29892608 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119685120 unmapped: 29892608 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119685120 unmapped: 29892608 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119685120 unmapped: 29892608 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119685120 unmapped: 29892608 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119685120 unmapped: 29892608 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119685120 unmapped: 29892608 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119685120 unmapped: 29892608 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119685120 unmapped: 29892608 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119685120 unmapped: 29892608 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119685120 unmapped: 29892608 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119685120 unmapped: 29892608 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119685120 unmapped: 29892608 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119685120 unmapped: 29892608 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119685120 unmapped: 29892608 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119685120 unmapped: 29892608 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119685120 unmapped: 29892608 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119685120 unmapped: 29892608 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119685120 unmapped: 29892608 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119685120 unmapped: 29892608 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119685120 unmapped: 29892608 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119685120 unmapped: 29892608 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119685120 unmapped: 29892608 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119685120 unmapped: 29892608 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119685120 unmapped: 29892608 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119685120 unmapped: 29892608 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119685120 unmapped: 29892608 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119685120 unmapped: 29892608 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119685120 unmapped: 29892608 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119685120 unmapped: 29892608 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119685120 unmapped: 29892608 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119685120 unmapped: 29892608 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119685120 unmapped: 29892608 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119685120 unmapped: 29892608 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119685120 unmapped: 29892608 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119685120 unmapped: 29892608 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119685120 unmapped: 29892608 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119685120 unmapped: 29892608 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119685120 unmapped: 29892608 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119685120 unmapped: 29892608 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119685120 unmapped: 29892608 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119685120 unmapped: 29892608 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119685120 unmapped: 29892608 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119685120 unmapped: 29892608 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119685120 unmapped: 29892608 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119685120 unmapped: 29892608 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119685120 unmapped: 29892608 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119685120 unmapped: 29892608 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119685120 unmapped: 29892608 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119685120 unmapped: 29892608 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119685120 unmapped: 29892608 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119685120 unmapped: 29892608 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119685120 unmapped: 29892608 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119685120 unmapped: 29892608 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119693312 unmapped: 29884416 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:42 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119693312 unmapped: 29884416 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119693312 unmapped: 29884416 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119693312 unmapped: 29884416 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119693312 unmapped: 29884416 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119693312 unmapped: 29884416 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119693312 unmapped: 29884416 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119693312 unmapped: 29884416 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119693312 unmapped: 29884416 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119693312 unmapped: 29884416 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119693312 unmapped: 29884416 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119693312 unmapped: 29884416 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119693312 unmapped: 29884416 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119693312 unmapped: 29884416 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119693312 unmapped: 29884416 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119701504 unmapped: 29876224 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119701504 unmapped: 29876224 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119701504 unmapped: 29876224 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119701504 unmapped: 29876224 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119701504 unmapped: 29876224 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119701504 unmapped: 29876224 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119701504 unmapped: 29876224 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119701504 unmapped: 29876224 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119709696 unmapped: 29868032 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119709696 unmapped: 29868032 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119709696 unmapped: 29868032 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119709696 unmapped: 29868032 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119709696 unmapped: 29868032 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119709696 unmapped: 29868032 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119709696 unmapped: 29868032 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119709696 unmapped: 29868032 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119709696 unmapped: 29868032 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119709696 unmapped: 29868032 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119709696 unmapped: 29868032 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119709696 unmapped: 29868032 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119709696 unmapped: 29868032 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119709696 unmapped: 29868032 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119709696 unmapped: 29868032 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119709696 unmapped: 29868032 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119717888 unmapped: 29859840 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119717888 unmapped: 29859840 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119717888 unmapped: 29859840 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119717888 unmapped: 29859840 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119717888 unmapped: 29859840 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119717888 unmapped: 29859840 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119717888 unmapped: 29859840 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119717888 unmapped: 29859840 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119717888 unmapped: 29859840 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119717888 unmapped: 29859840 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119717888 unmapped: 29859840 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119717888 unmapped: 29859840 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119726080 unmapped: 29851648 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119726080 unmapped: 29851648 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119726080 unmapped: 29851648 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119726080 unmapped: 29851648 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119726080 unmapped: 29851648 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119726080 unmapped: 29851648 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119726080 unmapped: 29851648 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119726080 unmapped: 29851648 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119726080 unmapped: 29851648 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119726080 unmapped: 29851648 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119726080 unmapped: 29851648 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119726080 unmapped: 29851648 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119734272 unmapped: 29843456 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119734272 unmapped: 29843456 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119734272 unmapped: 29843456 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119734272 unmapped: 29843456 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119734272 unmapped: 29843456 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119734272 unmapped: 29843456 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119734272 unmapped: 29843456 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119734272 unmapped: 29843456 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119734272 unmapped: 29843456 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119734272 unmapped: 29843456 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119734272 unmapped: 29843456 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119734272 unmapped: 29843456 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119734272 unmapped: 29843456 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119734272 unmapped: 29843456 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119742464 unmapped: 29835264 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119742464 unmapped: 29835264 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119742464 unmapped: 29835264 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: osd.1 156 ms_handle_reset con 0x559f2f76c400 session 0x559f2cadcf00
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119742464 unmapped: 29835264 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119742464 unmapped: 29835264 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119742464 unmapped: 29835264 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119742464 unmapped: 29835264 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119742464 unmapped: 29835264 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119750656 unmapped: 29827072 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119750656 unmapped: 29827072 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119750656 unmapped: 29827072 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119750656 unmapped: 29827072 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119750656 unmapped: 29827072 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119750656 unmapped: 29827072 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119750656 unmapped: 29827072 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119750656 unmapped: 29827072 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119750656 unmapped: 29827072 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119750656 unmapped: 29827072 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119750656 unmapped: 29827072 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119750656 unmapped: 29827072 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119750656 unmapped: 29827072 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119767040 unmapped: 29810688 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119767040 unmapped: 29810688 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119767040 unmapped: 29810688 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119767040 unmapped: 29810688 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119767040 unmapped: 29810688 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119767040 unmapped: 29810688 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119775232 unmapped: 29802496 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119775232 unmapped: 29802496 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119775232 unmapped: 29802496 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119775232 unmapped: 29802496 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119775232 unmapped: 29802496 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119775232 unmapped: 29802496 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119783424 unmapped: 29794304 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119783424 unmapped: 29794304 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119783424 unmapped: 29794304 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119783424 unmapped: 29794304 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119783424 unmapped: 29794304 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119783424 unmapped: 29794304 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119783424 unmapped: 29794304 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119783424 unmapped: 29794304 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119783424 unmapped: 29794304 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119783424 unmapped: 29794304 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119783424 unmapped: 29794304 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119783424 unmapped: 29794304 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119783424 unmapped: 29794304 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119783424 unmapped: 29794304 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119783424 unmapped: 29794304 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119791616 unmapped: 29786112 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119791616 unmapped: 29786112 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119791616 unmapped: 29786112 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119791616 unmapped: 29786112 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119791616 unmapped: 29786112 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119791616 unmapped: 29786112 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119791616 unmapped: 29786112 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119791616 unmapped: 29786112 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119791616 unmapped: 29786112 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119791616 unmapped: 29786112 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119791616 unmapped: 29786112 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119791616 unmapped: 29786112 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119791616 unmapped: 29786112 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119791616 unmapped: 29786112 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119791616 unmapped: 29786112 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119791616 unmapped: 29786112 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119799808 unmapped: 29777920 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119799808 unmapped: 29777920 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119799808 unmapped: 29777920 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119799808 unmapped: 29777920 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119799808 unmapped: 29777920 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119799808 unmapped: 29777920 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119799808 unmapped: 29777920 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119799808 unmapped: 29777920 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119799808 unmapped: 29777920 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119799808 unmapped: 29777920 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119799808 unmapped: 29777920 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119799808 unmapped: 29777920 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119799808 unmapped: 29777920 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119799808 unmapped: 29777920 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119799808 unmapped: 29777920 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119799808 unmapped: 29777920 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119799808 unmapped: 29777920 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119799808 unmapped: 29777920 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119808000 unmapped: 29769728 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119808000 unmapped: 29769728 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119808000 unmapped: 29769728 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119808000 unmapped: 29769728 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119808000 unmapped: 29769728 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119808000 unmapped: 29769728 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119808000 unmapped: 29769728 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119808000 unmapped: 29769728 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119808000 unmapped: 29769728 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119808000 unmapped: 29769728 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119808000 unmapped: 29769728 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204353 data_alloc: 218103808 data_used: 180224
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119808000 unmapped: 29769728 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x914274/0x9da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119873536 unmapped: 29704192 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: do_command 'config diff' '{prefix=config diff}'
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: do_command 'config show' '{prefix=config show}'
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: do_command 'counter dump' '{prefix=counter dump}'
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: do_command 'counter schema' '{prefix=counter schema}'
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 119857152 unmapped: 29720576 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: prioritycache tune_memory target: 4294967296 mapped: 120184832 unmapped: 29392896 heap: 149577728 old mem: 2845415832 new mem: 2845415832
Oct  8 06:30:43 np0005475493 ceph-osd[81751]: do_command 'log dump' '{prefix=log dump}'
Oct  8 06:30:43 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:30:43 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:30:43 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:30:43.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:30:43 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.17742 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Oct  8 06:30:43 np0005475493 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  8 06:30:43 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.27881 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Oct  8 06:30:43 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.27896 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Oct  8 06:30:43 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0)
Oct  8 06:30:43 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2541047330' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct  8 06:30:43 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1389: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  8 06:30:43 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.27577 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Oct  8 06:30:43 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.17760 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Oct  8 06:30:43 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.27911 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct  8 06:30:43 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr versions"} v 0)
Oct  8 06:30:43 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/264219819' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct  8 06:30:43 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:30:43 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:30:43 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:30:43.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:30:43 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.27589 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Oct  8 06:30:44 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:30:44 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  8 06:30:44 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:30:44 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  8 06:30:44 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:30:44 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 06:30:44 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:30:44 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  8 06:30:44 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.27932 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct  8 06:30:44 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.17778 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Oct  8 06:30:44 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon stat"} v 0)
Oct  8 06:30:44 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/329537465' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Oct  8 06:30:44 np0005475493 nova_compute[262220]: 2025-10-08 10:30:44.264 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:30:44 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.27938 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct  8 06:30:44 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.27941 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Oct  8 06:30:44 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.17799 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Oct  8 06:30:44 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:30:44 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.27631 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct  8 06:30:44 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.17811 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct  8 06:30:44 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.27962 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct  8 06:30:45 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:30:45 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:30:45 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:30:45.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:30:45 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "node ls"} v 0)
Oct  8 06:30:45 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3305297458' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Oct  8 06:30:45 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.27652 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct  8 06:30:45 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.17826 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct  8 06:30:45 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1390: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  8 06:30:45 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush class ls"} v 0)
Oct  8 06:30:45 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.27664 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct  8 06:30:45 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1028679134' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Oct  8 06:30:45 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.17841 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct  8 06:30:45 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-mgr-compute-0-ixicfj[73865]: ::ffff:192.168.122.100 - - [08/Oct/2025:10:30:45] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Oct  8 06:30:45 np0005475493 ceph-mgr[73869]: [prometheus INFO cherrypy.access.140329065631600] ::ffff:192.168.122.100 - - [08/Oct/2025:10:30:45] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Oct  8 06:30:45 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:30:45 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:30:45 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:30:45.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:30:46 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush dump"} v 0)
Oct  8 06:30:46 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/945813633' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Oct  8 06:30:46 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.17853 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct  8 06:30:46 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush rule ls"} v 0)
Oct  8 06:30:46 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2181162596' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Oct  8 06:30:46 np0005475493 nova_compute[262220]: 2025-10-08 10:30:46.616 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:30:46 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0)
Oct  8 06:30:46 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1706432708' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Oct  8 06:30:46 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0)
Oct  8 06:30:46 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3286908895' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Oct  8 06:30:46 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.27685 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct  8 06:30:47 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:30:47 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:30:47 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:30:47.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:30:47 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0)
Oct  8 06:30:47 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1968975931' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Oct  8 06:30:47 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:30:47.275Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct  8 06:30:47 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:30:47.275Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct  8 06:30:47 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:30:47.275Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct  8 06:30:47 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0)
Oct  8 06:30:47 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3783763333' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Oct  8 06:30:47 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1391: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  8 06:30:47 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0)
Oct  8 06:30:47 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/838008458' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Oct  8 06:30:47 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd erasure-code-profile ls"} v 0)
Oct  8 06:30:47 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/616920574' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Oct  8 06:30:47 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:30:47 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:30:47 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:30:47.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:30:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] Optimize plan auto_2025-10-08_10:30:47
Oct  8 06:30:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  8 06:30:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] do_upmap
Oct  8 06:30:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] pools ['.mgr', 'volumes', 'cephfs.cephfs.data', '.nfs', 'default.rgw.control', 'default.rgw.log', 'default.rgw.meta', 'images', 'vms', 'cephfs.cephfs.meta', 'backups', '.rgw.root']
Oct  8 06:30:47 np0005475493 ceph-mgr[73869]: [balancer INFO root] prepared 0/10 upmap changes
Oct  8 06:30:47 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  8 06:30:47 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='mgr.14700 192.168.122.100:0/293748209' entity='mgr.compute-0.ixicfj' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  8 06:30:47 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 06:30:47 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 06:30:48 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata"} v 0)
Oct  8 06:30:48 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2984381003' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Oct  8 06:30:48 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0)
Oct  8 06:30:48 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3753562022' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Oct  8 06:30:48 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 06:30:48 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 06:30:48 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] scanning for idle connections..
Oct  8 06:30:48 np0005475493 ceph-mgr[73869]: [volumes INFO mgr_util] cleaning up connections: []
Oct  8 06:30:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] _maybe_adjust
Oct  8 06:30:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:30:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  8 06:30:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:30:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  8 06:30:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:30:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 06:30:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:30:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 06:30:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:30:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Oct  8 06:30:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:30:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  8 06:30:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:30:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 06:30:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:30:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct  8 06:30:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:30:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  8 06:30:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:30:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  8 06:30:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:30:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  8 06:30:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  8 06:30:48 np0005475493 ceph-mgr[73869]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  8 06:30:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  8 06:30:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  8 06:30:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  8 06:30:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  8 06:30:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  8 06:30:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  8 06:30:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  8 06:30:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  8 06:30:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  8 06:30:48 np0005475493 ceph-mgr[73869]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  8 06:30:48 np0005475493 systemd[1]: Starting Hostname Service...
Oct  8 06:30:48 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd utilization"} v 0)
Oct  8 06:30:48 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2351482930' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Oct  8 06:30:48 np0005475493 systemd[1]: Started Hostname Service.
Oct  8 06:30:48 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-alertmanager-compute-0[103440]: ts=2025-10-08T10:30:48.915Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  8 06:30:48 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.17964 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Oct  8 06:30:49 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:30:48 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  8 06:30:49 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:30:48 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  8 06:30:49 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:30:48 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  8 06:30:49 np0005475493 ceph-787292cc-8154-50c4-9e00-e9be3e817149-nfs-cephfs-2-0-compute-0-uynkmx[278592]: 08/10/2025 10:30:49 : epoch 68e63a1a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  8 06:30:49 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr stat", "format": "json-pretty"} v 0)
Oct  8 06:30:49 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3087641911' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Oct  8 06:30:49 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:30:49 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:30:49 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:30:49.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:30:49 np0005475493 nova_compute[262220]: 2025-10-08 10:30:49.265 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:30:49 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1392: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  8 06:30:49 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.28127 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Oct  8 06:30:49 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.17982 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Oct  8 06:30:49 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr versions", "format": "json-pretty"} v 0)
Oct  8 06:30:49 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3196912211' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Oct  8 06:30:49 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  8 06:30:49 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.28142 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct  8 06:30:49 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:30:49 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:30:49 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:30:49.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:30:49 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.28151 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Oct  8 06:30:49 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.18003 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct  8 06:30:49 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.28163 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct  8 06:30:50 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.18021 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct  8 06:30:50 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.28184 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct  8 06:30:50 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "quorum_status"} v 0)
Oct  8 06:30:50 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/686497220' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Oct  8 06:30:50 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.18033 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct  8 06:30:50 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.28208 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct  8 06:30:50 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.27850 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct  8 06:30:50 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "versions"} v 0)
Oct  8 06:30:50 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3054226017' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Oct  8 06:30:50 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.27856 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Oct  8 06:30:51 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:30:51 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:30:51 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:30:51.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  8 06:30:51 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.28229 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct  8 06:30:51 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.18045 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct  8 06:30:51 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.27868 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct  8 06:30:51 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1393: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  8 06:30:51 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "health", "detail": "detail", "format": "json-pretty"} v 0)
Oct  8 06:30:51 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4072704330' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct  8 06:30:51 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.27877 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Oct  8 06:30:51 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.28244 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct  8 06:30:51 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.18060 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct  8 06:30:51 np0005475493 nova_compute[262220]: 2025-10-08 10:30:51.618 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  8 06:30:51 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.27883 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct  8 06:30:51 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:30:51 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:30:51 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:30:51.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:30:51 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "format": "json-pretty"} v 0)
Oct  8 06:30:51 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2621856933' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Oct  8 06:30:51 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.28259 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct  8 06:30:52 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct  8 06:30:52 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct  8 06:30:52 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.18084 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct  8 06:30:52 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.27895 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct  8 06:30:52 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct  8 06:30:52 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct  8 06:30:52 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.18117 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct  8 06:30:52 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.27919 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct  8 06:30:52 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config dump"} v 0)
Oct  8 06:30:52 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1187066560' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Oct  8 06:30:52 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.27952 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct  8 06:30:53 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:30:53 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  8 06:30:53 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.102 - anonymous [08/Oct/2025:10:30:53.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  8 06:30:53 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.18147 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct  8 06:30:53 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.28328 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct  8 06:30:53 np0005475493 ceph-mgr[73869]: log_channel(audit) log [DBG] : from='client.27958 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct  8 06:30:53 np0005475493 ceph-mgr[73869]: log_channel(cluster) log [DBG] : pgmap v1394: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  8 06:30:53 np0005475493 ceph-mon[73572]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "detail": "detail"} v 0)
Oct  8 06:30:53 np0005475493 ceph-mon[73572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3982089414' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Oct  8 06:30:53 np0005475493 radosgw[88577]: ====== starting new request req=0x7f162da8e5d0 =====
Oct  8 06:30:53 np0005475493 radosgw[88577]: ====== req done req=0x7f162da8e5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  8 06:30:53 np0005475493 radosgw[88577]: beast: 0x7f162da8e5d0: 192.168.122.100 - anonymous [08/Oct/2025:10:30:53.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
